Pandora: nucleotide-resolution bacterial pan-genomics with reference graphs

Rachel M. Colquhoun1,2,3, Michael B. Hall1, Leandro Lima1, Leah W. Roberts1, Kerri M. Malone1, Martin Hunt1,4, Brice Letcher1, Jane Hawkey5, Sophie George4, Louise Pankhurst4,6 & Zamin Iqbal1

We present pandora, a novel pan-genome graph structure and algorithms for identifying variants across the full bacterial pan-genome. As much bacterial adaptability hinges on the accessory genome, methods which analyze SNPs in just the core genome have unsatisfactory limitations. Pandora approximates a sequenced genome as a recombinant of references, detects novel variation and pan-genotypes multiple samples. Using a reference graph of 578 Escherichia coli genomes, we compare 20 diverse isolates. Pandora recovers more rare SNPs than single-reference-based tools, is significantly better than picking the closest RefSeq reference, and provides a stable framework for analyzing diverse samples without reference bias.

Bacterial genomes evolve by multiple mechanisms, including mutation during replication and allelic and non-allelic homologous recombination. These processes result in a population of genomes that are mosaics of each other. Given multiple contemporary genomes, the segregating variation between them allows inferences to be made about their evolutionary history. These analyses are central to the study of bacterial genomics and evolution [1,2,3,4], with different questions requiring focus on separate aspects of the mosaic: fine-scale (mutations) or coarse (gene presence, synteny). In this paper, we provide a new and accessible conceptual model that combines both fine and coarse bacterial variation. Using this new understanding to better represent variation, we can access previously hidden single-nucleotide polymorphisms (SNPs) and insertions and deletions (indels). This can be used to add resolution to phylogenetic analyses of diverse cohorts, to investigate selection and adaptation in the accessory genome, and to aid bacterial genome-wide association studies (GWAS).

In the standard approach to analyzing genetic variation, a single genome is treated as a reference and all other genomes are interpreted as differences from it. This approach is problematic in bacteria because, while genes cover 85–90% of bacterial genomes [5], the full set of genes present in a bacterial species—the pan-genome—is in general much larger than the number found in any single genome. Further, the frequency distribution of genes has a characteristic asymmetric U-shaped curve [6,7,8,9,10], as shown in Fig. 1A. As a result, a single-reference genome will inevitably lack many of the genes in the pan-genome and completely miss the genetic variation therein (Fig. 1B). We call this hard reference bias, to distinguish it from the more common concern that increased divergence of a reference from the genome under study leads to read-mapping problems, which we term soft reference bias. The standard workaround for these issues in bacterial genomics is to restrict analysis either to very similar genomes using a closely related reference (e.g., in an outbreak) or to analyze SNPs only in the core genome (present in most samples) and, outside the core, to simply study presence/absence of genes [11].

Fig. 1 Universal gene frequency distribution in bacteria and the single-reference problem.
A Frequency distribution of genes in 10 genomes of 6 bacterial species (Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Staphylococcus aureus, Salmonella enterica, and Streptococcus pneumoniae) showing the characteristic U-shaped curve—most genes are rare or common. B Illustrative depiction of the single-reference problem, a consequence of the U-shaped distribution. Each vertical column is a bacterial genome, and each colored bar is a gene. Numbers are identifiers for SNPs—there are 36 in total. For example, the dark blue gene has 4 SNPs numbered 1–4. This figure does not detail which genome has which allele. Below each column is the proportion of SNPs that are discoverable when that genome is used as a reference genome. Because no single reference contains all the genes in the population, it can only access a fraction of the SNPs.

In this study, we address the variation deficit caused by a single-reference approach. Given Illumina or Nanopore sequence data from potentially divergent isolates of a bacterial species, we attempt to detect all of the variants between them. Our approach is to decompose the pan-genome into atomic units (loci) which tend to be preserved over evolutionary timescales. Our loci are genes and intergenic regions in this study, but the method is agnostic to such classifications, and one could add any other desired grouping (e.g., operons or mobile genetic elements). Instead of using a single genome as a reference, we collect a panel of representative reference genomes and use them to construct a set of reference graphs, one for each locus. Reads are mapped to this set of graphs and from this we are able to discover and genotype variation. By letting go of prior information on locus ordering in the reference panel, we are able to recognize and genotype variation in a locus regardless of its wider context. Since Nanopore reads are typically long enough to encompass multiple loci, it is possible to subsequently infer the order of loci—although that is outside the scope of this study.

The use of graphs as a generalization of a linear reference is an active and maturing field [12,13,14,15,16,17,18,19]. Much recent graph genome work has gone into showing that genome graphs reduce the impact of soft reference bias on mapping [12, 14, 15], and into generalizing alignment to graphs [16, 20]. These methods have almost universally been designed for the human pan-genome, where hard reference bias is a much smaller issue than in bacteria (two human genomes are over 99% alignable, whereas two bacteria of the same species might be 50% alignable). In particular, all current graph methods (e.g., vg [12], Giraffe [21], GraphTyper [14, 15], paragraph [22], BayesTyper [23]) require a reference genome to be provided in advance to output genetic variants in the standard Variant Call Format (VCF) [24]—thus immediately inheriting a hard bias when applied to bacteria (see Fig. 1B). Thus, there has not yet been any study (to our knowledge) addressing SNP analysis across a diverse cohort, including more variants than can fit on any single reference.

We have made a number of technical innovations. First, we use a recursive clustering algorithm that converts a multiple sequence alignment (MSA) of a locus into a graph. This avoids the complexity "blowups" that plague graph genome construction from unphased VCF files [14,15]. Second, we introduce a graph representation of genetic variation based on (w,k)-minimizers [25].
Third, using this representation, we avoid unnecessary full alignment to the graph and instead use quasi-mapping to genotype on the graph. Fourth, we use local assembly to discover variation missing from the reference graph. Fifth, we infer a canonical dataset-dependent reference genome designed to maximize clarity of description of variants (the value of this will be made clear in the main text). We describe these below and evaluate our implementation, pandora, on a diverse set of E. coli genomes with both Illumina and Nanopore data. We show that, compared with reference-based approaches, pandora recovers a significant proportion of the missing variation in rare loci, performs much more stably across a diverse dataset, successfully infers a better reference genome for VCF output, and outperforms current tools for Nanopore data. Pan-genome graph representation We set out to define a generalized reference structure which allows detection of SNPs and other variants across the whole pan-genome, without attempting to record long-range structure or coordinates. We define a Pan-genome Reference Graph (PanRG) as an unordered collection of sequence graphs, termed local graphs, each of which represents a locus, such as a gene or intergenic region. Each local graph is constructed from a MSA of known alleles of this locus, using a recursive cluster-and-collapse (RCC) algorithm (Additional file 1: Supplementary Animation 1: recursive clustering construction). The output is guaranteed to be a directed acyclic sequence graph allowing hierarchical nesting of genetic variation while meeting a "balanced parentheses" criterion (see Fig. 2B and "Methods"). Each path through the graph from source to sink represents a possible recombinant sequence for the locus. The disjoint nature of this pan-genome reference allows loci such as genes to be compared regardless of their wider genomic context. We implement this construction algorithm in the make_prg tool which outputs the graph as a file (see Fig. 2A–C, "Methods"). We also implement a PanRG update algorithm in make_prg which allows rapid augmentation of a pre-built PanRG with new alleles (see Fig. 2D, "Methods"). Subsequent operations, based on this PanRG, are implemented in the software package pandora. The overall workflow is shown in Fig. 2. The pandora workflow. A Reference panel of genomes; color signifies locus (gene or intergenic region) identifier, and blobs are SNPs. B The multiple sequence alignment (MSA) for each locus is converted into a directed acyclic graph (termed local graph). C Local graphs constructed from the loci in the reference panel. D Workflow: the collection of local graphs, termed the PanRG, is indexed. Reads from each sample under study are independently quasi-mapped to the graph, and a determination is made as to which loci are present in each sample. In this process, for each locus, a mosaic approximation of the sequence for that sample is inferred, and variants are genotyped. E Regions of low coverage are detected, and local de novo assembly is used to generate candidate novel alleles missing from the graph. Returning to D, the dotted line shows all the candidate alleles from all samples are then gathered and added to the PanRG. Then, reads are quasi-mapped one more time, to the augmented PanRG, generating new mosaic approximations for all samples and storing coverages across the graphs; no de novo assembly is done this time. A pan-genome matrix showing which input loci are present in each sample is created. 
Finally, all samples are compared, and a VCF file is produced, with a per-locus reference that is inferred by pandora To index a PanRG, we generalize a type of sparse marker k-mer ((w,k)-minimizer, also referred to as a minimizing k-mer), previously defined for strings, to directed acyclic graphs (see "Methods"). Informally, each minimizer is the smallest k-mer in a window of w consecutive k-mers. This has become a popular method for rapidly indexing and comparing genomes and long reads (e.g., MASH [26], minimap [27, 28]). It has the advantage that the number of indexing k-mers scales with the length of the sequence so that different length local graphs are each well represented in the index. In addition, it enables shorter indexing k-mers to be used, which improves mapping with noisy reads. Each local graph is sketched with minimizing k-mers, and these are then used to construct a new graph (the k-mer graph) for each local graph from the PanRG. Each minimizing k-mer is a node, and edges are added between two nodes if they are adjacent minimizers on a path through the original local graph. This k-mer graph is isomorphic to the original if w ≤ k (and outside the first and last w + k − 1 bases); all subsequent operations are performed on this graph, which, to avoid unnecessary new terminology, we also call the local graph. A global index maps each minimizing k-mer to a list of all local graphs containing that k-mer and the positions therein. Long or short reads are approximately mapped (quasi-mapped) to the PanRG by determining the minimizing k-mers in each read. Any of these read quasi-mappings found in a local graph are called hits, and any local graph with sufficient clustered hits on a read is considered present in the sample. Initial sequence approximation as a mosaic of references For each locus identified as present in a sample, we initially approximate the sample's sequence as a path through the local graph. The result is a mosaic of sequences from the reference panel. This path is chosen to have maximal support by reads, using a dynamic programming algorithm on the graph induced by its (w,k)-minimizers (details in "Methods"). The result of this process serves as our initial approximation to the genome under analysis. Improved sequence approximation: modify mosaic by local assembly At this point, we have quasi-mapped reads, and approximated the genome by finding the closest mosaic in the graph; however, we expect the genome under study to contain variants that are not present in the PanRG. Therefore, to allow discovery of novel SNPs and small indels that are not in the graph, for each sample and locus, we identify regions of the inferred mosaic sequence where there is a drop in read coverage (as shown in Fig. 2E). Slices of overlapping reads are extracted, and a form of de novo assembly is performed using a de Bruijn graph. Instead of trying to find a single correct path, the de Bruijn graph is traversed (see "Methods" for details) to obtain all feasible candidate novel alleles for the sample. These alleles are then added to the local graph. If comparing multiple samples, the graphs are augmented with all new alleles from all samples at the same time. Optimal VCF reference construction for multi-genome comparison In the compare step of pandora (see Fig. 2D), we enable continuity of downstream analysis by outputting genotype information in the conventional VCF [24]. 
In this format, each row (record) describes possible alternative allele sequence(s) at a position in a (single) reference genome and information about the type of sequence variant. A column for each sample details the allele seen in that sample, often along with details about the support from the data for each allele. To output graph variation, we first select a path through the graph to be the reference sequence and describe any variation within the graph with respect to this path as shown in Fig. 3. We use the chromosome field to detail the local graph within the PanRG in which a variant lies, and the position field to give the position in the chosen reference path sequence for that graph. In addition, we output the reference path sequences used as a separate file. The representation problem. A A local graph with sequence explicitly shown. B, C The same graph with black reference path and alternate alleles in different colors, and the corresponding VCF records. In B, the black reference path is distinct from both alleles. The blue/red SNP then requires flanking sequence in order to allow it to have a coordinate. The SNP is thus represented as two ALT alleles, each 3 bases long, and the user is forced to notice they only differ in one base. C The reference follows the blue path, thus enabling a more succinct and natural representation of the SNP For a collection of samples, we want small differences between samples to be recorded as short alleles in the VCF file rather than longer alleles with shared flanking sequence as shown in Fig. 3B. We therefore choose the reference path for each local graph to be maximally close to the sample mosaic paths. To do this, we make a copy of the k-mer graph and increment the coverage along each sample mosaic path, producing a graph with higher weights on paths shared by more samples. We reuse the mosaic path-finding algorithm (see "Methods") with a modified probability function defined such that the probability of a node is proportional to the number of samples covering it. This produces a dataset-dependent VCF reference able to succinctly describe segregating variation in the cohort of genomes under analysis. Constructing a PanRG of E. coli We chose to evaluate pandora on the recombining bacterial species, E. coli, whose pan-genome has been heavily studied [7, 29,30,31,32]. MSAs for gene clusters curated with panX [33] from 350 RefSeq assemblies were downloaded from http://pangenome.de on 3 May 2018. MSAs for intergenic region clusters based on 228 E. coli ST131 genome sequences were previously generated with Piggy [34] for their publication. While this panel of intergenic sequences does not reflect the full diversity within E. coli, we included them as an initial starting point. This resulted in an E. coli PanRG containing local graphs for 23,051 genes and 14,374 intergenic regions. Pandora took 15 m in CPU time (11 m in runtime with 16 threads) and 12.9 GB of RAM to index the PanRG. As one would expect from the U-shaped gene frequency distribution, many of the genes were rare in the 578 (=350 + 228) input genomes, and so 59%/44% of the genic/intergenic graphs were linear, with just a single allele. Constructing an evaluation set of diverse genomes We first demonstrate that using a PanRG reduces hard bias when comparing a diverse set of 20 E. coli samples by comparison with standard single-reference variant callers. 
We selected samples from across the phylogeny (including phylogroups A, B2, D and F [35]) where we were able to obtain both long- and short-read sequence data from the same isolate. We used Illumina-polished long-read assemblies as truth data, masking positions where the Illumina data did not support the assembly (see "Methods"). As comparators, we used SAMtools [36] (the "classical" variant caller based on pileups) and Freebayes [37] (a haplotype-based caller which reduces soft reference bias, wrapped by snippy [38]) for Illumina data, and medaka [39] and nanopolish [40] for Nanopore data. In all cases, we ran the reference-based callers with 24 carefully selected reference genomes (see "Methods" and Fig. 4). We defined a "truth set" of 618,305 segregating variants by performing all pairwise whole genome alignments of the 20 truth assemblies, collecting SNP variants between the pairs, and deduplicating them by clustering into equivalence classes. Each class, or pan-variant, represents the same variant found at different coordinates in different genomes (see "Methods"). We evaluated error rate (proportion of VCF records which are incorrect, see "Methods"), pan-variant recall (PVR, proportion of segregating sites in the truth set discovered) and average allelic recall (AvgAR, average of the proportion of alleles of each pan-variant that are found). To clarify the definitions, consider a toy example. Suppose we have three genes, each with one SNP between them. The first gene is rare, present in 2/20 genomes. The second gene is at an intermediate frequency, in 10/20 genomes. The third is a strict core gene, present in all genomes. The SNP in the first gene has alleles A,C at 50% frequency (1 A and 1 C). The SNP in the second gene has alleles G,T at 50% frequency (5 G and 5 T). The SNP in the third gene has alleles A,T with 15 A and 5 T. Suppose a variant caller found the SNP in the first gene, detecting the two correct alleles. For the second gene's SNP, it detected only one G and one T, failing to detect either allele in the other 8 genomes. For the third gene's SNP, it detected all the 5 T's, but no A. Here, the pan-variant recall would be: (1 + 1 + 0) / 3 = 0.66—i.e., score a 1 if both alleles are found, irrespective of how often- and the average allelic recall would be (2/2 + 2/10 + 5/20)/3 = 0.48. Thus PVR and AvgAR are pan-genome equivalents of standard site discovery power and genotyping accuracy. Phylogeny of 20 diverse E. coli along with references used for benchmarking single-reference variant callers. The 20 E. coli under study are labelled as samples in the left-hand of three vertical label-lines. Phylogroups (clades) are labelled by color of branch, with the key in the inset. References were selected from RefSeq as being the closest to one of the 20 samples as measured by Mash, or manually selected from a tree (see "Methods"). Two assemblies from phylogroup B1 are in the set of references, despite there being no sample in that phylogroup Pandora detects rare variation inaccessible to single-reference methods First, we evaluate the primary aim of pandora—the ability to access genetic variation within the accessory genome. Figure 5 shows the PVR of SNPs in the truth set broken down by the number of samples the SNP (either allele) is present in. Results are shown for pandora, medaka, and nanopolish using Nanopore sequence data, and Additional file 1: Supplementary Figure 1 shows an almost identical result for pandora, snippy, and SAMtools using Illumina sequence data. 
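To make the PVR and AvgAR definitions above concrete, the toy example can be worked through directly. The short Python sketch below reproduces both calculations from the allele counts given in the text; it is purely illustrative and is not part of the evaluation pipeline used in this study.

```python
# Worked example of pan-variant recall (PVR) and average allelic recall (AvgAR)
# using the three-gene toy example from the text. Each entry maps an allele to
# (occurrences found by the caller, total occurrences across the 20 genomes).

pan_variants = [
    # gene 1 (present in 2/20 genomes): alleles A and C, both found
    {"A": (1, 1), "C": (1, 1)},
    # gene 2 (present in 10/20 genomes): only one G and one T found out of 5 each
    {"G": (1, 5), "T": (1, 5)},
    # gene 3 (core, 20/20 genomes): allele A missed entirely, all five T's found
    {"A": (0, 15), "T": (5, 5)},
]

def pan_variant_recall(variants):
    # A pan-variant scores 1 only if every one of its alleles is found at least once.
    hits = sum(all(found > 0 for found, _ in v.values()) for v in variants)
    return hits / len(variants)

def average_allelic_recall(variants):
    # For each pan-variant, the proportion of allele occurrences recovered,
    # then averaged over pan-variants.
    per_variant = [
        sum(found for found, _ in v.values()) / sum(total for _, total in v.values())
        for v in variants
    ]
    return sum(per_variant) / len(per_variant)

print(f"PVR   = {pan_variant_recall(pan_variants):.2f}")    # 2/3, quoted as 0.66 in the text
print(f"AvgAR = {average_allelic_recall(pan_variants):.2f}")  # (1 + 0.2 + 0.25)/3 = 0.48
```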
Pan-variant recall across the locus frequency spectrum. Every SNP occurs in a locus, which is present in some subset of the full set of 20 genomes. SNPs in the golden truth set are broken down by the number of samples the locus is present in. In panel A, we show the absolute count of pan-variants found and in panel B we show the proportion of pan-variants found (PVR) for pandora (dotted line), nanopolish, and medaka with Nanopore data If we restrict our attention to rare variants (present only in 2–5 genomes), we find pandora recovers at least 17.5/24.5/11.6/20.8 k more SNPs than SAMtools/snippy/medaka/nanopolish respectively. As a proportion of rare SNPs in the truth set, this is a lift in PVR of 10.9/15.3/7.2/13.0% respectively. If, instead of pan-variant recall, we look at the variation of AvgAR across the locus frequency spectrum (see Additional file 1: Supplementary Figure 2), the gap between pandora and the other tools on rare loci is even larger. These observations confirm and quantify the extent to which we are able to recover accessory genetic variation that is inaccessible to single-reference-based methods. Benchmarking recall, error rate, and dependence on reference We show in Fig. 6A,B the Illumina and Nanopore AvgAR/error rate plots for pandora and four single-reference tools with no filters applied. For all of these, we modify only the minimum genotype confidence to move up and down the curves (see "Methods"). Benchmarks of recall/error rate and dependence of precision on reference genome, for pandora and other tools on 20-way dataset. A The average allelic recall and error rate curve for pandora, SAMtools, and snippy on 100× of Illumina data. Snippy/SAMtools both run 24 times with the different reference genomes shown in Fig. 4, resulting in multiple lines for each tool (one for each reference). B The average allelic recall and error rate curve for pandora, medaka, and nanopolish on 100× of Nanopore data; multiple lines for medaka/nanopolish, one for each reference genome. Note panels A and B have the same y-axis scale and limits, but different x axes. C The precision of pandora, SAMtools, and snippy on 100× of Illumina data. The boxplots show the distribution of SAMtools' and snippy's precision depending on which of the 24 references was used, and the blue line connects pandora's results. D The precision of pandora (line plot), medaka, and nanopolish (both boxplots) on 100× of Nanopore data. Note different y-axis scale/limits in panels C and D We highlight three observations. Firstly, pandora achieves essentially the same recall and error rate for the Illumina and Nanopore data (85% AvgAR and 0.2–0.3% error rate at the top-right of the curve, completely unfiltered). Second, choice of reference has a significant effect on both AvgAR and error rate for the single-reference callers; the reference which enables the highest recall does not lead to the best error rate. Third, pandora achieves better AvgAR (85%) than all other tools (all between 73 and 84%, see Additional file 1: Supplementary Table 1), and a better error rate (0.2–0.3%) than SAMtools (1.0%), nanopolish (2.4%), and medaka (14.8%). However, snippy achieves a significantly better error rate than all other tools (0.01%). We confirmed that adding further filters slightly improved error rates, but did not change the overall picture (Additional file 1: Supplementary Figure 3, "Methods", Additional file 1: Supplementary Table 1). 
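The curves themselves are traced by sweeping a single filter, the minimum genotype confidence, and recomputing both metrics on the surviving calls. A schematic sketch of that sweep is shown below; the record format, threshold grid, and the recall proxy used here are illustrative assumptions and not the evaluation code used for the paper's figures.

```python
# Schematic of tracing a recall/error-rate curve by applying increasingly
# strict minimum genotype-confidence thresholds and recomputing the metrics
# on the records that survive. Illustrative only.

def sweep(records, truth_alleles_total, thresholds):
    """records: list of (genotype_confidence, call_is_correct, truth_alleles_hit)."""
    points = []
    for t in sorted(thresholds):
        kept = [r for r in records if r[0] >= t]
        if not kept:
            break
        error_rate = sum(1 for _, correct, _ in kept if not correct) / len(kept)
        recall = sum(hit for _, _, hit in kept) / truth_alleles_total  # simple recall proxy
        points.append({"min_gt_conf": t, "recall": recall, "error_rate": error_rate})
    return points

# Toy data: higher-confidence calls are more often correct.
toy = [(5, False, 0), (20, True, 1), (35, True, 1), (50, True, 1), (80, True, 1)]
for p in sweep(toy, truth_alleles_total=6, thresholds=[0, 10, 40]):
    print(p)
```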
The results are also in broad agreement if the PVR is plotted instead of AvgAR (Additional file 1: Supplementary Figure 4). However, these AvgAR and PVR figures are hard to interpret because pandora and the reference-based tools have recall that varies differently across the locus frequency spectrum, as described above. We explore this further below. We ascribe the similarity between the Nanopore and Illumina performance of pandora to three reasons. First, the PanRG is a strong prior—our first approximation does not contain any Nanopore sequence, but simply uses quasi-mapped reads to find the nearest mosaic in the graph. Second, mapping long Nanopore reads which completely cover entire genes is easier than mapping Illumina data and allows us to filter out erroneous k-mers within reads after deciding when a gene is present. Third, this performance is only achieved when we use methylation-aware basecalling of Nanopore reads, presumably removing most systematic bias (see Additional file 1: Supplementary Figure 5). In Fig. 6C,D, we show for Illumina and Nanopore data, the impact of reference choice on the precision of calls on each of the 20 samples. While precision is consistent across all samples for pandora, we see a dramatic effect of reference choice on precision of SAMtools, medaka, and nanopolish. The effect is also detectable for snippy, but to a much lesser extent. Finally, we measured the performance of locus presence detection, restricting to genes/intergenic regions in the PanRG, so that in principle perfect recall would be possible (see "Methods"). In Additional file 1: Supplementary Figure 6, we show the distribution of locus presence calls by pandora, split by length of locus for Illumina and Nanopore data. Overall, 93.7%/94.3% of loci were correctly classified as present or absent for Illumina/Nanopore respectively. Misclassifications were concentrated on small loci (below 500 bp). While 59.5%/57.4% of all loci in the PanRG are small, 75.5%/75.7% of false positive calls and 99.1%/98.5% of false negative calls are small loci (see Additional file 1: Supplementary Figure 6). Pandora has consistent results across E. coli phylogroups We measure the impact of reference bias (and population structure) by quantifying how recall varies in phylogroups A, B2, D, and F depending on whether the reference genome comes from the same phylogroup. We plot the results for snippy with 5 exemplar references in Fig. 7A (results for all tools and for all references are in Additional file 1: Supplementary Figures 7-10), showing that single references give 5–10% higher recall for samples in their own phylogroup than other phylogroups. By comparison, pandora's recall is much more consistent, staying stable at ~ 90% for all samples regardless of phylogroup. References in phylogroups A and B2 achieve higher recall in their own phylogroup, but consistently worse than pandora for samples in the other phylogroups (in which the reference does not lie). References in the external phylogroup B1, for which we had no samples in our dataset, achieve higher recall for samples in the nearby phylogroup A (see inset, Fig. 4), but lower than pandora for all others. We also see that choosing a reference genome from phylogroup F, which sits intermediate to the other phylogroups, provides the most uniform recall across other groups—2–5% higher than pandora. Single-reference callers achieve higher recall for samples in the same phylogroup as the reference genome, but not for rare loci. 
A Pandora recall (black line) and snippy recall (colored bars) of pan-variants in each of the 20 samples; each histogram corresponds to the use of one of 5 exemplar references, one from each phylogroup. The background color denotes the reference's phylogroup (see Fig. 4 inset); note that phylogroup B1 (yellow background) is an outgroup, containing no samples in this dataset. B Same as A but restricted to SNPs present in precisely two samples (i.e., where 18 samples have neither allele because the entire locus is missing). Note the differing y-axis limits in the two panels These results will, however, be dominated by the shared, core genome. If we replot Fig. 7A, restricting to variants in loci present in precisely 2 genomes (abbreviated to 2-variants; Fig. 7B), we find that pandora achieves 49–86% recall for each sample (complete data in Additional file 1: Supplementary Figure 11). By contrast, for any choice of reference genome, the results for single-reference callers vary dramatically per sample, ranging from 4 to 83% for snippy for example. Most sample-reference pairs (388/480) have recall under 49% (the lower bound for pandora recall), and there is no pattern of improved recall for samples in the same phylogroup as the reference. Following up that last observation, if we look at which pairs of genomes share 2-variants (Fig. 8), we find there is no enrichment within phylogroups at all. This simply confirms in our data that presence of rare loci is not correlated with the overall phylogeny. For completeness, Additional file 1: Supplementary Animation 2 shows the pandora and snippy recall for all 24 references, split by variant frequency. Sharing of variants present in precisely 2 genomes, showing which pairs of genomes they lie in and which phylogroups; darker colors signify higher counts (log scale). Axes are labelled with genome identifiers, colored by their phylogroup (see Fig. 4 inset) Pandora VCF reference is closer to samples than any single reference The relationship between phylogenetic distance and gene repertoire similarity is not linear. In fact, 2 genomes in different phylogroups may have more similar accessory genes than 2 in the same phylogroup—as illustrated in the previous section (also see Fig. 3 in Rocha [3]). As a result, it is unclear a priori how to choose a good reference genome for comparison of accessory loci between samples. Pandora specifically aims to construct an appropriate reference for maximum clarity in VCF representation. We evaluate how well pandora is able to find a VCF reference close to the samples under study as follows. We first identified the location of all loci in all the 20 sample assemblies and the 24 references (see "Methods"). We then measured the edit distance between each locus in each of the references and the corresponding version in the 20 samples. We found that the pandora's VCF reference lies within 1% edit distance (scaled by locus length) of the sample far more than any of the references for loci present in ≤ 9 samples (Fig. 9; note the log scale). Additional file 1: Supplementary Figure 12 shows a similar result for 0% edit distance (exact match). In both cases, the improvement is much reduced in the core genome; essentially, in the core, a phylogenetically close reference provides a good approximation, but it is hard to choose a single reference that provides a close approximation to all rare loci. By contrast, pandora is able to leverage its reference panel, and the dataset under study, to find a good approximation. 
How often do references closely approximate a sample? Pandora aims to infer a reference for use in its VCF, which is as close as possible to all samples. We evaluate the success of this here. The x-axis shows the number of genomes in which a locus occurs. The y-axis shows the (log-scaled) count of loci in the 20 samples that are within 1% edit distance (scaled by locus length) of each reference—box plots for the reference genomes, and line plot for the VCF reference inferred by pandora Computational performance We report here the CPU time and maximum RAM consumed by the evaluated tools. All of the single-reference tools analyzed isolates independently, whereas pandora has a subsequent joint analysis step to compare them all; we therefore compare the end-to-end performance of pandora analyzing all 20 samples against the mean performance of each single-reference tool (summing all 20 samples, and then averaging over the different reference genomes). In short, pandora took 9.2 CPU hours to analyze the 20 isolates with Illumina data while snippy and SAMtools both took 0.4 CPU hours. With Nanopore data, pandora took 16.4 CPU hours, which is slower than medaka (0.7 CPU hours), but faster than nanopolish (84 CPU hours). In terms of memory usage, for the Illumina data, pandora used a maximum of 13.4 GB of RAM, compared with snippy (3.2 GB), and SAMtools (1.0 GB), whereas for the Nanopore data, pandora used a maximum of 15.7 GB of RAM, compared with medaka (5.9 GB) and nanopolish (10.4 GB). Bacteria are the most diverse and abundant cellular life form [41]. Some species are exquisitely tuned to a particular niche (e.g., obligate pathogens of a single host) while others are able to live in a wide range of environments (e.g., E. coli can live on plants, in the earth, or commensally in the gut of various hosts). Broadly speaking, a wider range of environments correlates with a larger pan-genome, and some parts of the gene repertoire are associated with specific niches [42]. Our perception of a pan-genome therefore depends on our sampling of the unknown underlying population structure, and similarly the effectiveness of a PanRG will depend on the choice of reference panel from which it is built. Many examples from different species have shown that bacteria are able to leverage this genomic flexibility, adapting to circumstance sometimes by using or losing novel genes acquired horizontally, and at other times by mutation. There are many situations where precise nucleotide-level variants matter in interpreting pan-genomes. Some examples include compensatory mutations in the chromosome reducing the fitness burden of new plasmids [43,44,45]; lineage-specific accessory genes with SNP mutations which distinguish carriage from infection [46]; SNPs within accessory drug resistance genes leading to significant differences in antibiograms [47]; and changes in CRISPR spacer arrays showing immediate response to infection [48, 49]. However, up until now, there has been no automated way of studying non-core gene SNPs at all; still less a way of integrating them with gene presence/absence information. Pandora solves these problems, allowing detection and genotyping of core and accessory variants. It also addresses the problem of what reference to use as a coordinate system, inferring a mosaic "VCF reference" which is as close as possible to all samples under study. We find this gives more consistent SNP calling than any single reference in our diverse dataset. 
We focussed primarily on Nanopore data when designing pandora and show it is possible to achieve higher quality SNP calling with this data than with current Nanopore tools. The impact of this approach does depend on the dataset under study. We find that, if analyzing closely related samples, then single-reference methods provide improved recall compared with pandora. However, if analyzing more diverse datasets, hard reference bias is a bigger issue for single-reference tools, and pandora offers improved recall. Prior graph genome work, focussing on soft reference bias (in humans), has evaluated different approaches for selecting alleles for addition to a population graph, based on frequency, avoiding creating new repeats, and avoiding exponential blow-up of haplotypes in clusters of variants [50]. This approach makes sense when you have unphased diploid VCF files and are considering all recombinants of clustered SNPs as possible. However, this is effectively saying we consider the recombination rate to be high enough that all recombinants are possible. Our approach, building from local MSAs and only collapsing haplotypes when they agree for a fixed number of bases, preserves more haplotype structure and avoids combinatorial explosion. Another alternative approach was recently taken by Norri et al. [51], inferring a set of pseudo founder genomes from which to build the graph. With pandora, we break the genome into atomic units which may reorder freely between samples, but within which the degree of variation is more constrained. This approach is directly motivated by our knowledge of the mechanisms underlying genomic flexibility in bacteria. The breadth of diversity we see in these bacteria arises primarily as a result of horizontally acquired DNA which is incorporated into the genome by different forms of recombination. Homologous recombination between closely related sequences may result in allele conversion and in some species contributes as many nucleotide changes as point mutation [52]. As a result, locally we expect sequences to look like mosaics of each other, possibly with additional novel mutations. Genes are acquired (and lost) as a result of homologous or site-specific recombination and at hotspots [53, 54]. The dynamics of this are organized [30], and result in global genome mosaicism. The choice of atomic unit used to build each local graph should again be motivated by this underlying biology. A locus should be large enough for its presence and sequence to be useful independently of other graphs, but small enough as to be typically inherited as an entire unit. Biologically speaking, genes fulfil this requirement and there already exist a plethora of tools designed to extract and align genes (and intergenic regions) in a set of bacterial genomes (prokka [55], panaroo [56], roary [57], panX [33], piggy [34]). Operons or groups of genes which co-occur contiguously might also make a good choice, although isolating a set of reference sequences for these regions would be more of a challenge. Another issue is how to select the reference panel of genomes in order to minimize hard reference bias. One cannot escape the U-shaped frequency distribution; whatever reference panel is chosen, future genomes under study will contain rare genes not present in the PanRG. 
Given the known strong population structure in bacteria, and the association of accessory repertoires with lifestyle and environment, we would advocate sampling by phylogeny, geography, host species (if appropriate), lifestyle (e.g., pathogenic versus commensal), and/or environment. In this study, we built our PanRG from a biased dataset (RefSeq) which does not attempt to achieve balance across phylogeny or ecology, limiting our pan-variant recall to 49% for rare variants (see Fig. 5B, Additional file 1: Supplementary Figure 1C). A larger, carefully curated input panel, such as that from Horesh et al. [58], would provide a better foundation and potentially improve results. A natural question is then to ask if the PanRG should continually grow, absorbing all variants ever encountered. From our perspective, the answer is no—a PanRG with variants at all non-lethal positions would be potentially intractable. The goal is not to have every possible allele in the PanRG—no more than a dictionary is required to contain absolutely every word that has ever been said in a language. As with dictionaries, there is a trade-off between completeness and utility, and in the case of bacteria, the language is far richer than English. The perfect PanRG contains the vast majority of the genes and intergenic regions you are likely to meet, and just enough breadth of allelic diversity to ensure reads map without too many mismatches. Missing alleles should be discoverable by local assembly and added to the graph, allowing multi-sample comparison of the cohort under study. This allows one to keep the main PanRG lightweight enough for rapid and easy use. For bacterial genomes, genotype calls are often used to perform phylogenetic analyses. By detecting accessory variation, three things become possible. First, pragmatically, one can choose clusters of similar genomes based on the cohort-wide core genome, and then by restricting the pandora VCF to genes present in each cluster, re-analyze based on the cluster-specific core genome. Normally this would require choosing a cluster-specific reference, remapping reads and re-running a variant caller, but with pandora all of the necessary data is provided in one step. Secondly, the accessory SNPs provide an extra level of resolution when comparing samples which are very close on a core genome tree, which may be useful. Finally, one cannot represent all pan-genome variation in a phylogeny as the evolutionary history is fundamentally not compatible with a simple vertical-inheritance model. However, the pandora output would make ideal material on which to build and test population genetic models. We finish with potential applications of pandora. First, the PanRG should provide a more interpretable substrate for pan-genome-wide genome-wide association studies, as current methods are forced to either ignore the accessory genome or reduce it to k-mers or unitigs [59,60,61] abstracted from their wider context. Second, it would allow investigation of selection and adaptation of accessory SNPs. Third, if performing prospective surveillance of microbial isolates taken in a hospital, the PanRG provides a consistent and unchanging reference, which will cope with the diversity of strains seen without requiring the user to keep switching reference genome. Finally, if studying a fixed dataset very carefully, then one may not want to use a population PanRG, as it necessarily will miss some rare accessory genes in the dataset. 
In these circumstances, one could construct a reference graph purely of the genes/intergenic regions present in this dataset.

There are a number of limitations to this study. Firstly, although pandora achieves a gain of recall in rare variation compared with single-reference tools (at least 12–25 k more SNPs in loci present in 2–5 genomes out of 20, depending on choice of tool and reference—a lift of at least 7–15% in recall), this is offset by an 11% loss of recall at core SNPs. However, the gain in recall of rare variants will increase both with dataset size (due to the U-shaped gene frequency curve) and with a PanRG constructed from either a better-sampled input reference panel or the dataset itself. By contrast, there is no a priori reason why pandora should miss core SNPs, and this issue will need to be addressed in future work. Finally, by working in terms of atomic loci instead of a monolithic genome-wide graph, pandora opens up graph-based approaches to structurally diverse species (and eases parallelisation) but at the cost of losing genome-wide ordering. At present, ordering can be resolved by (manually) mapping pandora-discovered genes onto whole genome assemblies. However, the design of pandora also allows for gene-ordering inference: when Nanopore reads cover multiple genes, the linkage between them is stored in a secondary de Bruijn graph where the alphabet consists of gene identifiers. This results in a huge alphabet, but the k-mers are almost always unique, dramatically simplifying "assembly" compared with normal DNA de Bruijn graphs. This work is still in progress and the subject of a future study. In the meantime, pandora provides new ways to access previously hidden variation.

The algorithms implemented in pandora provide a solution to the problem of analyzing core and accessory genetic variation across a set of bacterial genomes. This study demonstrates SNP genotype error rates with Nanopore data as good as those achieved with Illumina data, and improved recall of accessory variants. It also shows the benefit of an inferred VCF reference genome over simply picking from RefSeq. The main limitations were the use of a biased reference panel (RefSeq) for building the PanRG, and a slightly lower recall for core SNPs than single-reference tools—both of which are addressable, not fundamental limitations. This work opens the door to improved analyses of many existing and future bacterial genomic datasets.

Local graph construction

We construct each local graph in the PanRG from an MSA using an iterative partitioning process. The resulting sequence graph contains nested bubbles representing alternative alleles. Let \( A \) be an MSA of length \( n \). For each row of the MSA \( a = \{a_0, \dots, a_{n-1}\} \in A \), let \( a_{i,j} = \{a_i, \dots, a_{j-1}\} \) be the subsequence of \( a \) in the interval \( [i, j) \). Let \( s(a) \) be the DNA sequence obtained by removing all non-AGCT symbols. We can partition alignment \( A \) either vertically, by partitioning the interval \( [0, n) \), or horizontally, by partitioning the set of rows of \( A \). In both cases, the partition induces a number of sub-alignments. For vertical partitions, we define \( {slice}_A(i,j) = \{a_{i,j} : a \in A\} \). We say that interval \( [i, j) \) is a match interval if \( j - i \ge m \), where \( m = 7 \) is the default minimum match length, and there is a single non-trivial sequence in the slice, i.e., $$ \left|\left\{ s(a) : a \in {slice}_A(i,j)\ \mathrm{and}\ s(a) \ne ""\right\}\right| = 1. $$ Otherwise, we call it a non-match interval.
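As an illustration of this vertical partitioning step, the sketch below finds match intervals in a toy MSA directly from the definition above, assuming the alignment is given as a list of equal-length strings. It is a simplified greedy scan for illustration, not the make_prg implementation.

```python
# Sketch of vertical MSA partitioning: find match intervals [i, j) of length
# >= m in which all rows spell out the same single non-empty sequence once
# non-ACGT symbols (gaps, Ns) are removed. Simplified and illustrative only.

MIN_MATCH_LEN = 7  # default minimum match length m

def s(row: str) -> str:
    """DNA sequence of an alignment row with all non-ACGT symbols removed."""
    return "".join(c for c in row.upper() if c in "ACGT")

def is_match_slice(msa, i, j) -> bool:
    """True if slice_A(i, j) contains exactly one non-empty sequence."""
    seqs = {s(row[i:j]) for row in msa} - {""}
    return len(seqs) == 1

def match_intervals(msa, m=MIN_MATCH_LEN):
    n = len(msa[0])
    intervals, i = [], 0
    while i < n:
        j = i
        while j < n and is_match_slice(msa, i, j + 1):
            j += 1
        if j - i >= m:
            intervals.append((i, j))  # a match interval; the rest are non-match
        i = max(j, i + 1)
    return intervals

msa = ["ACGTACGTAC--TTT",
       "ACGTACGTACGGTTT",
       "ACGTACGTACCCTTT"]
print(match_intervals(msa, m=7))  # [(0, 10)]; the remaining columns are too variable or too short
```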
For horizontal partitions, we use a reference-based approach combined with K-means clustering [62] to divide sequences into increasing numbers of clusters K = 2, 3, … until each cluster meets a "one-reference-like" criterion or until K = 10. More formally, let \( U \) be the set of all m-mers (substrings of length \( m \), the minimum match length) in \( \{s(a) : a \in A\} \). For \( a \in A \), we transform sequence \( s(a) \) into a count vector \( \overline{x_a} = \{x_a^1, \dots, x_a^{|U|}\} \), where \( x_a^i \) is the count of unique m-mer \( i \in U \). The K-means algorithm partitions \( \{s(a) : a \in A\} \) into K clusters \( \overline{C} = \{C_1, \dots, C_K\} \) by minimizing the inertia, defined as $$ \arg\min_C \sum_{j=1}^{K} \sum_{\overline{x_a} \in C_j} \left|\ \overline{x_a} - \mu_j\ \right|^2 $$ where \( \mu_j = \frac{1}{|C_j|} \sum_{\overline{x_a} \in C_j} \overline{x_a} \) is the mean of cluster \( j \). Given a (sub-)alignment \( A \), we define the reference of \( A \), \( ref(A) \), to be the concatenation of the most frequent nucleotide at each position of \( A \). We say that a K-partition is one-reference-like if, for the corresponding sub-alignments \( A_1, \dots, A_K \), the Hamming distance between each sequence and its sub-alignment reference satisfies $$ \left| s(a) - ref(A_i) \right| < d \ast len(A) \qquad \forall a \in A_i $$ where \( \left|\ \right| \) denotes the Hamming distance and \( d \) denotes a maximum Hamming distance threshold, set at 0.2 by default. In this case, we accept the partition; otherwise, we look for a (K + 1)-partition.

The recursive algorithm first partitions an MSA vertically into match and non-match intervals. Match intervals are collapsed down to the single sequence they represent. Independently for each non-match interval, the alignment slice is partitioned horizontally into clusters. The same process is then applied to each induced sub-alignment until a maximum number of recursion levels, r = 5, has been reached. For any remaining alignments, a node is added to the local graph for each unique sequence. See Additional file 1: Supplementary Animation 1 for an example of this algorithm. We name this algorithm Recursive Cluster and Collapse (RCC). When building the local graph from an MSA, we record and serialize the recursion tree of the RCC algorithm, as well as memoizing all the data in each recursion node. We shall call this algorithm memoized RCC (MRCC). Once the MRCC recursion tree is generated, we can obtain the local graph through a pre-order traversal of the tree (which is equivalent to the call order of the recursive functions in an execution of the RCC algorithm). To update the local graph with new alleles using the MRCC algorithm, we can deserialize the recursion tree from disk, infer in which leaves of the recursion tree the new alleles should be added, add the new alleles in bulk, and then update each modified leaf. This leaf update operation consists of updating just the subalignment of the leaf (which is generally a small fraction of the whole MSA) with the new alleles using MAFFT [63] and recomputing the recursion at the leaf node. All of this is implemented in the make_prg repository (see "Code availability").

(w,k)-minimizers of graphs

We define (w,k)-minimizers of strings as in Li [27]. Let \( \varphi : \Sigma^k \to \mathbb{R} \) be a k-mer hash function and let \( \pi : \Sigma^{\ast} \times \{0, 1\} \to \Sigma^{\ast} \) be defined such that \( \pi(s, 0) = s \) and \( \pi(s, 1) = \overline{s} \), where \( \overline{s} \) is the reverse complement of \( s \). Consider any integers k ≥ w > 0.
For window start position 0 ≤ j ≤ |s| − w − k + 1, let $$ {T}_j=\left\{\pi \left({s}_{p,p+k},r\right):j\le p<j+w,r\in \left\{0,1\right\}\right\} $$ be the set of forward and reverse-complement k-mers of s in this window. We define a (w,k)-minimizer to be any triple (h, p, r) such that $$ h=\varphi \left(\pi \left({s}_{p,p+k},r\right)\right)=\min \left\{\varphi (t):t\in {T}_j\right\}. $$ The set W(s) of (w,k)-minimizers for s, is the union of minimizers over such windows: $$ W(s)=\bigcup \limits_{0\le j\le \mid s\mid -w-k+1}\left\{\left(h,p,r\right):h=\min \left\{\upvarphi (t):t\in {T}_j\right\}\right\}. $$ We extend this definition intuitively to an acyclic sequence graph G = (V,E). Define |v| to be the length of the sequence associated with node v ∈ V and let i = (v, a, b), 0 ≤ a ≤ b ≤ |v| represent the sequence interval [a, b) on v. We define a path in G by $$ \overline{p}=\left\{\left({i}_1,\dots, {i}_m\right):\left({v}_j,{v}_{j+1}\right)\in E\ and\ {b}_j\equiv |{v}_j|\ for\ 1\le j<m\right\}. $$ This matches the intuitive definition for a path in a sequence graph except that we allow the path to overlap only part of the sequence associated with the first and last nodes. We will use \( {s}_{\overline{p}} \) to refer to the sequence along the path \( \overline{p} \) in the graph. Let \( \overline{q} \) be a path of length w + k − 1 in G. The string \( {s}_{\overline{q}} \) contains w consecutive k-mers for which we can find the (w,k)-minimizer(s) as before. We therefore define the (w,k)-minimizer(s) of the graph G to be the union of minimizers over all paths of length w + k − 1 in G: $$ W(G)=\bigcup \limits_{\overline{q}\in G:\mid \overline{q}\mid =w+k-1}\left\{\left(h,\overline{p},r\right):h=\min \left\{\upvarphi (t):t\in {T}_{\overline{q}}\right\}\right\}. $$ Local graph indexing with (w,k)-minimizers To find minimizers for a graph, we use a streaming algorithm as described in Additional file 1: Supplementary Algorithm 1. For each minimizer found, it simply finds the next minimizer(s) until the end of the graph has been reached. Let walk(v, i, w, k) be a function which returns all vectors of w consecutive k-mers in G starting at position i on node v. Suppose we have a vector of k-mers x. Let shift(x) be the function which returns all possible vectors of k-mers which extend x by one k-mer. It does this by considering possible ways to walk one letter in G from the end of the final k-mer of x. For a vector of k-mers of length w, the function minimize(x) returns the minimizing k-mers of x. We define K to be a k-mer graph with nodes corresponding to minimizers \( \left(h,\overline{p},r\right) \). We add edge (u,v) to K if there exists a path in G for which u and v are both minimizers and v is the first minimizer after u along the path. Let K ← add(s, t) denote the addition of nodes s and t to K and the directed edge (s,t). Let K ← add(s, T) denote the addition of nodes s and t ∈ T to K as well as directed edges (s,t) for t ∈ T, and define K ← add(S, t) similarly. The resulting PanRG index stores a map from each minimizing k-mer hash value to the positions in all local graphs where that (w,k)-minimizer occurred. In addition, we store the induced k-mer graph for each local graph. Quasi-mapping reads We infer the presence of PanRG loci in reads by quasi-mapping. For each read, a sketch of (w,k)-minimizers is made, and these are queried in the index. 
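The sketch below illustrates how such a (w,k)-minimizer sketch can be computed for a plain string such as a read, following the definition above. The hash function here is a toy stand-in for φ, and the code is an illustration rather than pandora's implementation.

```python
# Sketch of (w,k)-minimizer computation for a plain string (e.g. a read):
# in every window of w consecutive k-mers, keep the k-mer (forward or
# reverse complement) with the smallest hash. Illustrative only.

import hashlib

def revcomp(s: str) -> str:
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def khash(kmer: str) -> int:
    # Toy stand-in for the k-mer hash function phi.
    return int.from_bytes(hashlib.sha1(kmer.encode()).digest()[:8], "big")

def minimizers(s: str, w: int, k: int):
    """Return (hash, position, strand) minimizer triples of s, sorted by position."""
    # Canonical hash of each k-mer: the smaller of forward and reverse complement.
    kmers = []
    for p in range(len(s) - k + 1):
        fwd, rev = s[p:p + k], revcomp(s[p:p + k])
        h, r = min((khash(fwd), 0), (khash(rev), 1))
        kmers.append((h, p, r))
    found = set()
    for j in range(len(kmers) - w + 1):           # each window of w consecutive k-mers
        window = kmers[j:j + w]
        hmin = min(h for h, _, _ in window)
        found.update(t for t in window if t[0] == hmin)
    return sorted(found, key=lambda t: t[1])

read = "ACGTTGCATGCATTACGGCATCGATCGGATC"
sketch = minimizers(read, w=5, k=11)
print(len(sketch), "minimizers in a", len(read), "bp read")
```

In pandora, the same idea is generalized to all paths of length w + k − 1 through each local graph, and the resulting minimizers populate the global index keyed by hash value.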
For every (w,k)-minimizer shared between the read and a local graph in the PanRG index, we define a hit to be the coordinates of the minimizer in the read and local graph and whether it was found in the same or reverse orientation. We define clusters of hits from the same read, local graph, and orientation if consecutive read coordinates are within a certain distance. If this cluster is of sufficient size, the locus is deemed to be present and we keep the hits for further analysis. Otherwise, they are discarded as noise. The default for this "sufficient size" is at least 10 hits and at least 1/5th the length of the shortest path through the k-mer graph (Nanopore) or the number of k-mers in a read sketch (Illumina). Note that there is no requirement for all these hits to lie on a single path through the local graph. A further filtering step is therefore applied after the sequence at a locus is inferred to remove false positive loci, as indicated by low mean or median coverage along the inferred sequence by comparison with the global average coverage. This quasi-mapping procedure is described in pseudocode in Additional file 1: Supplementary Algorithm 2. For each locus identified as present in the set of reads, quasi-mapping provides (filtered) coverage information for nodes of the directed acyclic k-mer graph. We use these to approximate the sequence as a mosaic of references as follows. We model k-mer coverage with a negative binomial distribution and use the simplifying assumption that k-mers are read independently. Let Θ be the set of possible paths through the k-mer graph, which could correspond to the true genomic sequence from which reads were generated. Let r + s be the number of times the underlying DNA was read by the machine, generating a k-mer coverage of s, and r instances where the k-mer was sequenced with errors. Let 1 − p be the probability that a given k-mer was sequenced correctly. For any path θ ∈ Θ, let {X1, …, XM} be independent and identically distributed random variables with probability distribution \( f\left({x}_i,r,p\right)=\frac{\Gamma \left(r+s\right)}{\Gamma (r)s!}{p}^r{\left(1-p\right)}^s \), representing the k-mer coverages along this path. Since the mean and variance are \( \frac{\left(1-p\right)r}{p} \) and \( \frac{\left(1-p\right)r}{p^2} \), we solve for r and p using the observed k-mer coverage mean and variance across all k-mers in all graphs for the sample. Let D be the k-mer coverage data seen in the read dataset. We maximize the score \( \hat{\theta}={\left\{\arg \max\ l\left(\theta |D\right)\right\}}_{\theta \in \Theta} \) where \( l\left(\theta |D\right)=\frac{1}{M}{\sum}_{i=1}^M\log f\left({s}_i,r,p\right) \), where si is the observed coverage of the i-th k-mer in θ. This score is an approximation to a log likelihood, but averages over (up to) a fixed number of k-mers in order to retain sensitivity over longer paths in our C++ implementation. By construction, the k-mer graph is directed and acyclic so this maximization problem can be solved with a dynamic programming algorithm (for pseudocode, see Additional file 1: Supplementary Algorithm 3). For choices of w ≤ k there is a unique sequence along the discovered path through the k-mer graph (except in rare cases within the first or last w − 1 bases). We use this closest mosaic of reference sequences as an initial approximation of the sample sequence. 
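A compact sketch of this maximum-likelihood path search is given below, assuming the k-mer graph is supplied as a topologically sorted DAG with per-node k-mer counts and that r and p have already been fitted from the sample-wide coverage mean and variance. It is illustrative only and not the pandora C++ implementation, which also averages the score over a bounded number of k-mers.

```python
# Sketch of choosing a maximum-likelihood mosaic path through a directed
# acyclic k-mer graph: score each node's coverage under a negative binomial
# model and take the best-scoring source-to-sink path by dynamic programming.

from math import lgamma, log
from collections import defaultdict

def nb_log_pmf(s: int, r: float, p: float) -> float:
    # log f(s; r, p) = log[ Gamma(r+s) / (Gamma(r) s!) * p^r * (1-p)^s ]
    return lgamma(r + s) - lgamma(r) - lgamma(s + 1) + r * log(p) + s * log(1 - p)

def best_path(nodes, edges, coverage, r, p):
    """nodes: topologically sorted ids; edges: list of (u, v); coverage: id -> k-mer count."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    score, back = {}, {}
    for v in nodes:
        best_prev = max(preds[v], key=lambda u: score[u], default=None)
        score[v] = nb_log_pmf(coverage[v], r, p) + (score[best_prev] if best_prev is not None else 0.0)
        back[v] = best_prev
    # Trace back from the best-scoring sink node (a node with no outgoing edge).
    sinks = [v for v in nodes if all(u != v for u, _ in edges)]
    end = max(sinks, key=lambda v: score[v])
    path = [end]
    while back[path[-1]] is not None:
        path.append(back[path[-1]])
    return path[::-1]

# Tiny example: a bubble where the upper allele carries the expected coverage.
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
coverage = {"s": 30, "a": 28, "b": 2, "t": 31}
print(best_path(nodes, edges, coverage, r=10.0, p=0.25))  # ['s', 'a', 't']
```

The same dynamic program is later reused, with node weights proportional to the number of samples whose mosaic path covers each node, to choose the per-locus VCF reference.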
De novo variant discovery The first step in our implementation of local de novo variant discovery in genome graphs is finding the candidate regions of the graph that show evidence of dissimilarity from the sample's reads. Finding candidate regions The input required for finding candidate regions is a local graph, n, within the PanRG, the maximum likelihood path of both sequence and k-mers in n, lmpn and kmpn respectively, and a padding size w for the number of positions surrounding the candidate region to retrieve. We define a candidate region, r, as an interval within n where coverage on lmpn is less than a given threshold, c, for more than l and less than m consecutive positions. m acts to restrict the size of variants we are able to detect. If set too large, the following steps become much slower due to the combinatorial expansion of possible paths. For a given read, s, that has a mapping to r we define sr to be the subsequence of s that maps to r, including an extra w positions either side of the mapping. We define the pileup Pr as the set of all sr ∈ r. Enumerating paths through candidate regions For r ∈ R, where R is the set of all candidate regions, we construct a de Bruijn graph Gr from the pileup Pr using the GATB library [64]. AL and AR are defined as sets of k-mers to the left and right of r in the local graph. They are anchors to allow re-insertion of new sequences found by de novo discovery into the local graph. If we cannot find an anchor on both sides, then we abandon de novo discovery for r. We use sets of k-mers for AL and AR, rather than a single anchor k-mer, to provide redundancy in the case where sequencing errors cause the absence of some k-mers in Gr. Once Gr is built, we define the start anchor k-mer, aL, as the first k-mer in AL that is also in Gr. Likewise, we define the end anchor k-mer, aR, as the first k-mer in AR that is also in Gr. Tr is the spanning tree obtained by performing depth-first search (DFS) on Gr, beginning from node aL. We define pr as a path, from the root node aL of Tr and ending at node aR, which fulfils the two conditions: (1) pr is shorter than the maximum allowed path length; (2) no more than k nodes along pr have coverage < f er, where er is the expected k-mer coverage for r and f is nr ∗ s , where nr is the number of iterations of path enumeration for r and s is a step size (0.1 by default). Vr is the set of all pr. If |Vr| is greater than a predefined threshold, then we have too many candidate paths, and we decide to filter more aggressively: f is incremented by s—effectively requiring more coverage for each pr—and Vr is repopulated. If f > 1.0, then de novo discovery is abandoned for r. Pruning the path-space in a candidate region As we operate on both accurate and error-prone sequencing reads, the number of valid paths in Gr can be very large. Primarily, this is due to cycles that can occur in Gr and exploring paths that will never reach our required end anchor aR. In order to reduce the path-space within Gr, we prune paths based on multiple criteria. Critically, this pruning happens at each step of the graph walk (path-building). We used a distance-based optimisation based on Rizzi et al. [65]. In addition to Tr, obtained by performing DFS on Gr, we produce a distance map Dr that results from running reversed breadth-first search (BFS) on Gr, beginning from node aR. We say reversed BFS as we explore the predecessors of each node, rather than the successors. 
Pruning the path-space in a candidate region

As we operate on both accurate and error-prone sequencing reads, the number of valid paths in G_r can be very large. Primarily, this is due to cycles that can occur in G_r and exploring paths that will never reach our required end anchor a_R. In order to reduce the path-space within G_r, we prune paths based on multiple criteria. Critically, this pruning happens at each step of the graph walk (path-building). We use a distance-based optimization based on Rizzi et al. [65]. In addition to T_r, obtained by performing DFS on G_r, we produce a distance map D_r that results from running reversed breadth-first search (BFS) on G_r, beginning from node a_R. We say reversed BFS as we explore the predecessors of each node, rather than the successors. D_r is implemented as a binary search tree where each node in the tree represents a k-mer in G_r that is reachable from a_R via reversed BFS. Each node additionally has an integer attached to it that describes the distance from that node to a_R. We can use D_r to prune the path-space by (1) for each node n ∈ p_r, we require n ∈ D_r and (2) requiring that a_R be reached from n in, at most, i nodes, where i is defined as the maximum allowed path length minus the number of nodes walked to reach n. If one of these conditions is not met, we abandon p_r. The advantage of this pruning process is that we never explore paths that will not reach our required endpoint within the maximum allowed path length and when caught in a cycle, we abandon the path once we have made too many iterations around the cycle.

Graph-based genotyping and optimal reference construction for multi-genome comparison

We use graph-based genotyping to output a comparison of samples in a VCF. A path through the graph is selected to be the reference sequence, and graph variation is described with respect to this reference. The chromosome field then details the local graph and the position field gives the position within the chosen reference sequence for possible variant alleles. The reference path for each local graph is chosen to be maximally close to the set of sample mosaic paths. This is achieved by reusing the mosaic path-finding algorithm detailed in Additional file 1: Supplementary Algorithm 3 on a copy of the k-mer graph with coverages incremented along each sample mosaic path, and a modified probability function defined such that the probability of a node is proportional to the number of samples covering it. This results in an optimal path, which is used as the VCF reference for the multi-sample VCF file. For each sample and site in the VCF file, the mean forward and reverse coverage on k-mers tiling alleles is calculated. A likelihood is then independently calculated for each allele based on a Poisson model. An allele A in a site is called if: (1) A is on the sample mosaic path (i.e., it is on the maximum likelihood path for that sample); (2) A is the most likely allele to be called based on the previous Poisson model. Every allele not in the sample mosaic path will not satisfy (1) and will thus not be called. In the uncommon event where an allele satisfies (1), but not (2), we have an incompatibility between the global and the local choices, and then the site is genotyped as null.
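As an illustration of this calling logic, the toy sketch below scores each allele's mean k-mer coverage with a Poisson log-likelihood and then applies the two conditions above. It is deliberately simplified: pandora's actual model, implemented in C++, treats forward and reverse coverage separately and accounts for sequencing error, and all names here are our own.

import math

def poisson_log_lik(mean_coverage, expected_depth):
    # Log-probability of the observed mean k-mer coverage on an allele under a
    # Poisson model whose rate is the sample's expected depth; the observed mean
    # is rounded to the nearest integer count for this toy version.
    k = round(mean_coverage)
    return k * math.log(expected_depth) - expected_depth - math.lgamma(k + 1)

def call_allele(site_alleles, mosaic_allele, expected_depth):
    # site_alleles: dict allele -> mean k-mer coverage across the allele.
    # mosaic_allele: the allele lying on this sample's mosaic (maximum likelihood) path.
    # Returns the called allele, or None (null genotype) when the locally most likely
    # allele disagrees with the globally chosen mosaic path.
    log_liks = {a: poisson_log_lik(cov, expected_depth) for a, cov in site_alleles.items()}
    best = max(log_liks, key=log_liks.get)
    return best if best == mosaic_allele else None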
Comparison of variant callers on a diverse set of E. coli

Sample selection

We used a set of 20 diverse E. coli samples for which matched Nanopore and Illumina data and a high-quality assembly were available. These are distributed across 4 major phylogroups of E. coli as shown in Fig. 4. Of these, 16 were isolated from clinical infections and rectal screening swabs in ICU patients in an Australian hospital [66]. One is the reference strain CFT073 that was resequenced and assembled by the REHAB consortium [67]. One is from an ST216 cardiac ward outbreak (identifier: H131800734); the Illumina data was previously obtained [68], and we did the Nanopore sequencing (see below). The two final samples were obtained from Public Health England: one is a Shiga toxin encoding E. coli (we used the identifier O63) [69], and the other an enteroaggregative E. coli (we used the identifier ST38) [70]. Coverage data for these samples can be found in Additional file 1: Supplementary Table 2.

PanRG construction

MSAs for gene clusters curated with panX [33] from 350 RefSeq assemblies were downloaded from http://pangenome.de on 3 May 2018. MSAs for intergenic region clusters based on 228 E. coli ST131 genome sequences were previously generated with Piggy [34] for their publication. The PanRG was built using make_prg. Three loci (GC00000027_2, GC00004221 and GC00000895_r1_r1_1) out of 37,428 were excluded because pandora did not complete in reasonable time (~ 24 h) once de novo variants were added.

Nanopore sequencing of sample H131800734

DNA was extracted using a Blood & Cell Culture DNA Midi Kit (Qiagen, Germany) and prepared for Nanopore sequencing using kits EXP-NBD103 and SQK-LSK108. Sequencing was performed on a MinION Mk1 Shield device using a FLO-MIN106 R9.4 Spoton flowcell and MinKNOW version 1.7.3, for 48 h.

Nanopore basecalling

Recent improvements to the accuracy of Nanopore reads have been largely driven by improvements in basecalling algorithms [71]. For comparison, 4 samples were basecalled with the default (methylation unaware) model and with the methylation-aware, high-accuracy model provided with the proprietary guppy basecaller (version 3.4.5). Additional file 1: Supplementary Figure 5 shows the effect of methylation-aware Nanopore basecalling on the AvgAR/error rate curve for pandora with/without novel variant discovery via local assembly for those 4 samples. With normal basecalling, local de novo assembly increases the error rate substantially from 0.22 to 0.60%, with a negligible increase in recall, from 89.1 to 90.1%, whereas with methylation-aware basecalling it increases the recall from 89.5 to 90.6% and just slightly increases the error rate from 0.18 to 0.22%. On the basis of this, demultiplexing of the subsequent basecalled data was performed using the same methylation-aware version of the guppy software suite with barcode kits EXP-NBD104 and EXP-NBD114 and an option to trim the barcodes from the output.

Phylogenetic tree construction

Chromosomes were aligned using MAFFT [63] v7.467 as implemented in Parsnp [72] v1.5.3. Gubbins v2.4.1 was used to filter for recombination (default settings), and phylogenetic construction was carried out using RAxML [73] v8.2.12 (GTR + GAMMA substitution model, as implemented in Gubbins [74]).

Reference selection for mapping-based callers

A set of references was chosen for testing single-reference variant callers using two standard approaches, as follows. First, a phylogeny was built containing our 20 samples and 243 reference genomes from RefSeq. Then, for each of our 20 samples, the nearest RefSeq E. coli reference was found using Mash [26]. Second, for each of the 20 samples, the nearest RefSeq reference in the phylogeny was manually selected; sometimes one RefSeq assembly was the closest to more than one of the 20. At an earlier stage of the project, there had been another sample (making a total of 21) in phylogroup B1; this was discarded when it failed quality filters (data not shown). Despite this, the Mash/manual selected reference genomes were left in the set of mapping references, to evaluate the impact of mapping to a reference in a different phylogroup to all 20 of our samples.
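For illustration, the automated half of this selection can be scripted by parsing the tab-separated output of mash dist (columns: reference-ID, query-ID, distance, p-value, shared-hashes) and keeping the minimum-distance reference per sample. This is a sketch under our own assumptions about file naming and layout, not the pipeline code used in the paper.

import csv

def closest_reference_per_sample(mash_dist_tsv):
    # mash_dist_tsv: output of `mash dist` run between the RefSeq reference sketches
    # and the sample sketches, one row per (reference, sample) pair.
    best = {}
    with open(mash_dist_tsv) as fh:
        for ref, query, dist, _pval, _shared in csv.reader(fh, delimiter="\t"):
            d = float(dist)
            if query not in best or d < best[query][1]:
                best[query] = (ref, d)  # keep the smallest Mash distance seen so far
    return {sample: ref for sample, (ref, _) in best.items()}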
Construction of truth assemblies

16/20 samples were obtained with matched Illumina and Nanopore data and a hybrid assembly. Sample H131800734 was assembled using the hybrid assembler Unicycler [75] with PacBio and Illumina reads followed by polishing with the PacBio reads using Racon [76], and finally with Illumina reads using Pilon [77]. A small 1 kb artefactual contig was removed from the H131800734 assembly due to low quality and coverage. In all cases, we mapped the Illumina data to the assembly and masked all positions where the pileup of Illumina reads did not support the assembly.

Construction of a comprehensive and filtered truth set of pairwise SNPs

All pairwise comparisons of the 20 truth assemblies were performed with varifier (https://github.com/iqbal-lab-org/varifier), using subcommand make_truth_vcf. In summary, varifier compares two given genomes (referenced as G_1 and G_2) twice—first using dnadiff [78] and then using minimap2/paftools [28]. The two output sets of pairwise SNPs are then joined and filtered. We create one sequence probe for each allele (a sequence composed of the allele and 50 bases of flank on either side taken from G_1) and then map both to G_2 using minimap2. We then evaluate these mappings to verify if the variant found is indeed correct (TP) or not (FP) as follows. If the mapping quality is zero, the variant is discarded to avoid paralogs/duplicates/repeats that are inherently hard to assess. We then check for mismatches in the allele after mapping and confirm that the called allele is the better match.

Constructing a set of ground truth pan-genome variants

When seeking to construct a truth set of all variants within a set of bacterial genomes, there is no universal coordinate system. We start by taking all pairs of genomes and finding the variants between them, and then need to deduplicate them—e.g., when a variant between genomes 1 and 2 is the same as a variant between genomes 3 and 4, they should be identified; we define "the same" in terms of genome, coordinate and allele. An allele A in a position P_A of a chromosome C_A in a genome G_A is defined as a triple A = (G_A, C_A, P_A). A pairwise variant PwV = {A_1, A_2} is defined as a pair of alleles that describes a variant between two genomes, and a pan-genome variant PgV = {A_1, A_2, …, A_n} is defined as a set of two or more alleles that describes the same variant between two or more genomes. A pan-genome variant PgV can also be defined as a set of pairwise variants PgV = {PwV_1, PwV_2, …, PwV_n}, as we can infer the set of alleles of PgV from the pairs of alleles in all these pairwise variants. Note that pan-genome variants are thus able to represent rare and core variants. Given a set of pairwise variants, we seek a set of pan-genome variants satisfying the following properties:

[Surjection]: each pairwise variant is in exactly one pan-genome variant; a pan-genome variant contains at least one pairwise variant;

[Transitivity]: if two pairwise variants PwV_1 and PwV_2 share an allele, then PwV_1 and PwV_2 are in the same pan-genome variant PgV.

We model the above problem as a graph problem. We represent each pairwise variant as a node in an undirected graph G. There is an edge between two nodes n_1 and n_2 if n_1 and n_2 share an allele. Each component (maximal connected subgraph) of G then defines a pan-genome variant, built from the set of pairwise variants in the component, satisfying all the properties previously described. Therefore, the set of components of G defines the set of pan-genome variants P.
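The grouping of pairwise variants into pan-genome variants is, in effect, a connected-components computation. The Python sketch below is our own illustration using a union-find rather than an explicit graph library; the allele and variant representations are assumptions, not the pipeline's actual data structures.

class DisjointSet:
    # Minimal union-find used to merge pairwise variants that share an allele.
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def pan_genome_variants(pairwise_variants):
    # pairwise_variants: iterable of 2-element sets of alleles, where an allele is a
    # (genome, chromosome, position) triple.
    # Returns pan-genome variants as sets of alleles; two pairwise variants that share
    # an allele always end up in the same set (the transitivity property).
    ds = DisjointSet()
    seen_allele = {}
    pvs = [frozenset(pv) for pv in pairwise_variants]
    for pv in pvs:
        for allele in pv:
            if allele in seen_allele:
                ds.union(pv, seen_allele[allele])  # shared allele -> same component
            else:
                seen_allele[allele] = pv
    groups = {}
    for pv in pvs:
        groups.setdefault(ds.find(pv), set()).update(pv)
    return list(groups.values())

# Example: {("g1","c","100"),("g2","c","180")} and {("g2","c","180"),("g3","c","95")}
# share an allele, so they merge into one pan-genome variant with three alleles.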
However, a pan-genome variant in P could (i) have more than one allele stemming from a single genome, due to a duplication/repeat; (ii) represent biallelic, triallelic, or tetrallelic SNPs/indels. For this evaluation, we chose to have a smaller, but more reliable set of pan-genome variants, and thus we filtered P by restricting it to the set of pan-genome variants P′ defined by the variants PgV ∈ P such that (i) PgV has at most one allele stemming from each genome; (ii) PgV is a biallelic SNP. P′ is the set of 618,305 ground truth filtered pan-genome variants that we extracted by comparing and deduplicating the pairwise variants present in our 20 samples and that we use to evaluate the recall of all the tools in this paper. Additional file 1: Supplementary Figure 13 shows an example summarizing the described process of building pan-genome variants from a set of pairwise variants.

Subsampling read data and running all tools

All read data was randomly subsampled to 100× coverage using rasusa—the pipeline is available at https://github.com/iqbal-lab-org/subsampler. A snakemake [79] pipeline to run the pandora workflow with and without de novo discovery (see Fig. 2D) is available at https://github.com/iqbal-lab-org/pandora_workflow. A snakemake pipeline to run snippy, SAMtools, nanopolish, and medaka on all pairwise combinations of 20 samples and 24 references is available at https://github.com/iqbal-lab-org/variant_callers_pipeline.

Evaluating VCF files

Calculating precision

Given a variant/VCF call made by any of the evaluated tools, where the input were reads from a sample (or several samples, in the case of pandora) and a reference sequence (or a PanRG, in the case of pandora), we perform the following steps to assess how correct a call is:

1. Construct a probe for the called allele, consisting of the sequence of the allele flanked by 150 bp on both sides from the reference sequence. This reference sequence is one of the 24 chosen references for snippy, SAMtools, nanopolish, and medaka; or the multi-sample inferred VCF reference for pandora;

2. Map the probe to the sample sequence using BWA-MEM [80];

3. Remove multi-mappings by looking at the Mapping Quality (MAPQ) measure [36] of the SAM records. If the probe is mapped uniquely, then its mapping passes the filter. If there are multiple mappings for the probe, we select the mapping m_1 with the highest MAPQ if the difference between its MAPQ and the second highest MAPQ exceeds 10. If m_1 does not exist, then there are at least two good enough mappings, and it is ambiguous to choose which one to evaluate. In this case, we prefer to be conservative and filter this call (and all its related mappings) out of the evaluation;

4. We further remove calls mapping to masked regions of the sample sequence, in order to not evaluate calls lying on potentially misassembled regions;

5. Now we evaluate the mapping, giving the call a continuous precision score between 0 and 1. We look only at the alignment of the called allele (i.e., we ignore the flanking sequences alignment) and give a score as follows: number of matches / alignment length.

Finally, we compute the precision for the tool by summing the score of all evaluated calls and dividing by the number of evaluated calls. Note that here we evaluate all types of variants, including SNPs and indels.
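The scoring in the final two steps can be sketched as follows, assuming each probe mapping is summarized by its MAPQ, CIGAR, and NM (edit distance) values. This is our own illustration rather than the evaluation pipeline's code, and recovering matches from NM is an approximation.

import re

_CIGAR_RE = re.compile(r"(\d+)([MIDNSHP=X])")

def allele_precision_score(cigar, nm):
    # Continuous score in [0, 1]: matched bases / alignment length, where the alignment
    # length counts aligned bases plus inserted and deleted bases, and matches are
    # approximated as alignment length minus the edit distance (NM). Assumes the
    # alignment has already been trimmed to the called allele (flanks excluded).
    aln_len = sum(int(n) for n, op in _CIGAR_RE.findall(cigar) if op in "MIDX=")
    return (aln_len - nm) / aln_len if aln_len else 0.0

def pick_unique_mapping(mappings, min_mapq_gap=10):
    # mappings: list of (mapq, cigar, nm) records for one probe.
    # Keep the best mapping only if it is unique or beats the runner-up by more than
    # min_mapq_gap MAPQ; otherwise the call is too ambiguous to evaluate.
    if not mappings:
        return None
    if len(mappings) == 1:
        return mappings[0]
    ranked = sorted(mappings, key=lambda rec: rec[0], reverse=True)
    if ranked[0][0] - ranked[1][0] > min_mapq_gap:
        return ranked[0]
    return None  # ambiguous: filter this call out of the evaluation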
Calculating recall

We perform the following steps to calculate the recall of a tool:

1. Apply the VCF calls to the associated reference using the VCF consensus builder (https://github.com/leoisl/vcf_consensus_builder), creating a mutated reference with the variants identified by the tool;

2. Build probes for each allele of each pan-genome variant previously computed (see section "Constructing a set of ground truth pan-genome variants");

3. Map all pan-genome variants' probes to the mutated reference using BWA-MEM;

4. Evaluate each probe mapping, which is classified as a TP only if all bases of the allele were correctly mapped to the mutated reference. In the uncommon case where a probe multimaps, it is enough that one of the mappings is classified as a TP;

5. Finally, as we now know for each pan-genome variant which of its alleles were found, we calculate both the pan-variant recall and the average allelic recall as per section "Pandora detects rare variation inaccessible to single-reference methods."

Given a VCF file with likelihoods for each genotype, the genotype confidence is defined as the log likelihood of the maximum likelihood genotype minus the log likelihood of the next best genotype. Thus a confidence of zero means the two most likely alleles are equally likely, and high-quality calls have higher confidences. In the recall/error rate plots of Fig. 6A,B, each point corresponds to the error rate and recall computed as previously described, on a genotype confidence (gt-conf) filtered VCF file with a specific threshold for minimum confidence. We also show the same plot with further filters applied in Additional file 1: Supplementary Figure 3. The filters were as follows. For Illumina data: for pandora, a minimum coverage filter of 5×, a strand bias filter of 0.05 (minimum 5% of reads on each strand), and a gaps filter of 0.8 were applied. The gaps filter means at least 20% of the minimizer k-mers on the called allele must have coverage above 10% of the expected depth. As snippy has its own internal filtering, no filters were applied. For SAMtools, a minimum coverage filter of 5× was used. For Nanopore data, for pandora, a minimum coverage filter of 10×, a strand bias filter of 0.05, and a gaps filter of 0.6 were used. For nanopolish, we applied a coverage filter of 10×. We were unable to apply a minimum coverage filter for medaka due to a software bug that prevents annotating the VCF file with coverage information.
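For clarity, the genotype confidence used to draw these curves can be written in a couple of lines. The sketch below uses our own names, not pandora's API, and assumes at least two genotype likelihoods are reported per site.

def genotype_confidence(genotype_log_likelihoods):
    # gt-conf = log likelihood of the best genotype minus that of the second best;
    # zero means the two most likely genotypes are indistinguishable.
    best, second = sorted(genotype_log_likelihoods, reverse=True)[:2]
    return best - second

def passes_gt_conf(genotype_log_likelihoods, threshold):
    # Minimum genotype confidence filter: one threshold per point on the
    # recall/error-rate curve.
    return genotype_confidence(genotype_log_likelihoods) >= threshold

# Example: genotype_confidence([-10.2, -35.7, -40.1]) == 25.5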
Locus presence and distance evaluation

For all loci detected as present in at least one sample by pandora, we mapped the multi-sample inferred reference to all 20 sample assemblies and 24 reference sequences, to identify their true locations. To be confident of these locations, we employed a strict mapping using bowtie2 [81] and requiring end-to-end alignments. From the mapping of all loci to all samples, we computed a truth locus presence-absence matrix and compared it with pandora's locus presence-absence matrix, classifying each pandora locus call as true/false positive/negative. Additional file 1: Supplementary Figure 6 shows these classifications split by locus length. Having the location of all loci in all the 20 sample assemblies and the 24 references, we then computed the edit distance between them.

All input data for our analyses, including panX's and Piggy's MSAs, PanRG, reference sequences, and sample data are publicly available (see "Data availability" below). The pandora code, as well as all code needed to reproduce these analyses, is also publicly available (see "Code availability" below). Software environment reproducibility is achieved using Python virtual environments if all dependencies and source code are in Python, and otherwise using Docker [82] containers run with Singularity [83]. We ran pandora version 0.9.1 and make_prg version 0.2.0 in this study. The exact commit of all repositories used to obtain the results in this paper can be retrieved with the git branch or tag pandora_paper_update_31_03_2021.

Gene MSAs from panX, and intergenic MSAs from Piggy: [84]
E. coli PanRG: [85]
Accession identifiers or Figshare links for the sample and reference assemblies, and Illumina and Nanopore reads are listed in Section D of Additional file 1. Input packages containing all data to reproduce all analyses described in "Results" are also available in Section D of Additional file 1.

All code is open source and available under an MIT license:
make_prg (RCC graph construction and update algorithm): https://github.com/leoisl/make_prg
pandora: https://github.com/rmcolq/pandora
varifier: https://github.com/iqbal-lab-org/varifier
Pan-genome variations pipeline taking a set of assemblies and returning a set of filtered pan-genome variants: https://github.com/iqbal-lab-org/pangenome_variations
pandora workflow: https://github.com/iqbal-lab-org/pandora_workflow
Run snippy, SAMtools, nanopolish and medaka pipeline: https://github.com/iqbal-lab-org/variant_callers_pipeline
Evaluation pipeline (recall/error rate curves, etc.): https://github.com/iqbal-lab-org/pandora_paper_roc
Locus presence and distance from reference pipeline: https://github.com/iqbal-lab-org/pandora_gene_distance
A master repository to reproduce everything in this paper, marshalling all of the above: https://github.com/iqbal-lab-org/paper_pandora2020_analyses

Although all containers are hosted on https://hub.docker.com/ (for details, see https://github.com/iqbal-lab-org/paper_pandora2020_analyses/blob/master/scripts/pull_containers/pull_containers.sh), and are downloaded automatically during the pipelines' execution, we also provide Singularity [83] containers (converted from Docker containers) at https://doi.org/10.6084/m9.figshare.14779257.v1. Frozen packages with all the code repositories for pandora and the analysis framework can be found at [86].

Lynch M, Ackerman MS, Gout J-F, Long H, Sung W, Thomas WK, et al. Genetic drift, selection and the evolution of the mutation rate. Nat Rev Genet. Nature Publishing Group. 2016;17(11):704–14. https://doi.org/10.1038/nrg.2016.104. Didelot X, Maiden MCJ. Impact of recombination on bacterial evolution. Trends Microbiol. 2010;18(7):315–22. https://doi.org/10.1016/j.tim.2010.04.002. Rocha EPC. Neutral Theory, Microbial practice: challenges in bacterial population genetics. Mol Biol Evol. Oxford Academic. 2018;35(6):1338–47. https://doi.org/10.1093/molbev/msy078. Fraser C, Alm EJ, Polz MF, Spratt BG, Hanage WP. The bacterial species challenge: making sense of genetic and ecological diversity. Science. American Association for the Advancement of Science. 2009;323(5915):741–6. https://doi.org/10.1126/science.1159388. Mira A, Ochman H, Moran NA. Deletional bias and the evolution of bacterial genomes. Trends Genet. Elsevier. 2001;17(10):589–96. https://doi.org/10.1016/S0168-9525(01)02447-7. Domingo-Sananes MR, McInerney JO. Selection-based model of prokaryote pangenomes. bioRxiv. Cold Spring Harbor Laboratory; 2019;782573.
Gordienko EN, Kazanov MD, Gelfand MS. Evolution of pan-genomes of Escherichia coli, Shigella spp., and Salmonella enterica. J Bacteriol. American Society for Microbiology Journals. 2013;195:2786–92. Lobkovsky AE, Wolf YI, Koonin EV. Gene frequency distributions reject a neutral model of genome evolution. Genome Biol Evol. 2013;5(1):233–42. https://doi.org/10.1093/gbe/evt002. Bolotin E, Hershberg R. Gene loss dominates as a source of genetic variation within clonal pathogenic bacterial species. Genome Biol Evol. Oxford Academic. 2015;7(8):2173–87. https://doi.org/10.1093/gbe/evv135. Haegeman B, Weitz JS. A neutral theory of genome evolution and the frequency distribution of genes. BMC Genomics. 2012;13(1):196. https://doi.org/10.1186/1471-2164-13-196. Hadfield J, Croucher NJ, Goater RJ, Abudahab K, Aanensen DM, Harris SR. Phandango: an interactive viewer for bacterial population genomics. Bioinformatics. Oxford Academic. 2018;34(2):292–3. https://doi.org/10.1093/bioinformatics/btx610. Garrison E, Sirén J, Novak AM, Hickey G, Eizenga JM, Dawson ET, et al. Variation graph toolkit improves read mapping by representing genetic variation in the reference. Nat Biotechnol. Nature Publishing Group. 2018;36(9):875–9. https://doi.org/10.1038/nbt.4227. Maciuca S, Elias C Del O, Mcvean G, Iqbal Z. A natural encoding of genetic variation in a Burrows-Wheeler transform to enable mapping and genome inference. algorithms in bioinformatics [Internet]. Springer, Cham; 2016 [cited 2020 Dec 9]. p. 222–33. Available from: https://doi.org/10.1007/978-3-319-43681-4_18 Eggertsson HP, Jonsson H, Kristmundsdottir S, Hjartarson E, Kehr B, Masson G, et al. Graphtyper enables population-scale genotyping using pangenome graphs. Nat Genet. Nature Publishing Group. 2017;49(11):1654–60. https://doi.org/10.1038/ng.3964. Eggertsson HP, Kristmundsdottir S, Beyter D, Jonsson H, Skuladottir A, Hardarson MT, et al. GraphTyper2 enables population-scale genotyping of structural variation using pangenome graphs. Nature Communications. Nature Publishing Group; 2019;10:5402. Rautiainen M, Marschall T. GraphAligner: rapid and versatile sequence-to-graph alignment. bioRxiv. Cold Spring Harbor Laboratory; 2019;810812. Schneeberger K, Hagmann J, Ossowski S, Warthmann N, Gesing S, Kohlbacher O, et al. Simultaneous alignment of short reads against multiple genomes. Genome Biology. 2009;10(9):R98. https://doi.org/10.1186/gb-2009-10-9-r98. Rabbani L, Müller J, Weigel D. An algorithm to build a multi-genome reference. bioRxiv. Cold Spring Harbor Laboratory. 2020;2020(04):11.036871. The Computational Pan-Genomics Consortium. Computational pan-genomics: status, promises and challenges. Briefings Bioinformatics. 2018;19:118–35. Rautiainen M, Marschall T. Aligning sequences to general graphs in O(V + mE) time. bioRxiv. Cold Spring Harbor Laboratory; 2017;216127. Sirén J, Monlong J, Chang X, Novak AM, Eizenga JM, Markello C, et al. Genotyping common, large structural variations in 5,202 genomes using pangenomes, the Giraffe mapper, and the vg toolkit. bioRxiv. Cold Spring Harbor Laboratory. 2021;2020(12):04.412486. Chen S, Krusche P, Dolzhenko E, Sherman RM, Petrovski R, Schlesinger F, et al. Paragraph: a graph-based structural variant genotyper for short-read sequence data. Genome Biology. 2019;20(1):291. https://doi.org/10.1186/s13059-019-1909-7. Sibbesen JA, Maretty L, Krogh A. Accurate genotyping across variant classes and lengths using variant graphs. Nat Genet. 2018;50(7):1054–9. https://doi.org/10.1038/s41588-018-0145-5. 
Danecek P, Auton A, Abecasis G, Albers CA, Banks E, DePristo MA, et al. The variant call format and VCFtools. Bioinformatics. Oxford Academic. 2011;27(15):2156–8. https://doi.org/10.1093/bioinformatics/btr330. Roberts M, Hayes W, Hunt BR, Mount SM, Yorke JA. Reducing storage requirements for biological sequence comparison. Bioinformatics. Oxford Academic. 2004;20(18):3363–9. https://doi.org/10.1093/bioinformatics/bth408. Ondov BD, Treangen TJ, Melsted P, Mallonee AB, Bergman NH, Koren S, et al. Mash: fast genome and metagenome distance estimation using MinHash. Genome Biology. 2016;17(1):132. https://doi.org/10.1186/s13059-016-0997-x. Li H. Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences. Bioinformatics. Oxford Academic. 2016;32(14):2103–10. https://doi.org/10.1093/bioinformatics/btw152. Li H. Minimap2: pairwise alignment for nucleotide sequences. Bioinformatics. 2018;34(18):3094–100. https://doi.org/10.1093/bioinformatics/bty191. Touchon M, Perrin A, De SJAM, Vangchhia B, Burn S, O'Brien CL, et al. Phylogenetic background and habitat drive the genetic diversification of Escherichia coli. PLOS Genetics. Public Library of Science. 2020;16:e1008866. Touchon M, Hoede C, Tenaillon O, Barbe V, Baeriswyl S, Bidet P, et al. Organised genome dynamics in the Escherichia coli species results in highly diverse adaptive paths. PLOS Genetics. Public Library of Science. 2009;5:e1000344. Decano AG, Downing T. An Escherichia coli ST131 pangenome atlas reveals population structure and evolution across 4,071 isolates. Sci Rep. Nature Publishing Group. 2019;9:17394. Rasko DA, Rosovitz MJ, Myers GSA, Mongodin EF, Fricke WF, Gajer P, et al. The pangenome structure of Escherichia coli: comparative genomic analysis of E. coli commensal and pathogenic isolates. J Bacteriol. American Society for Microbiology Journals. 2008;190:6881–93. Ding W, Baumdicker F, Neher RA. panX: pan-genome analysis and exploration. Nucleic Acids Res. Oxford Academic. 2018;46:e5–5. Thorpe HA, Bayliss SC, Sheppard SK, Feil EJ. Piggy: a rapid, large-scale pan-genome analysis tool for intergenic regions in bacteria. Gigascience [Internet]. Oxford Academic; 2018 [cited 2020 Jul 3];7. Available from: https://academic.oup.com/gigascience/article/7/4/giy015/4919733 Clermont O, Christenson JK, Denamur E, Gordon DM. The Clermont Escherichia coli phylo-typing method revisited: improvement of specificity and detection of new phylo-groups. Environ Microbiol Rep. 2013;5(1):58–65. https://doi.org/10.1111/1758-2229.12019. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics. Oxford Academic. 2009;25(16):2078–9. https://doi.org/10.1093/bioinformatics/btp352. Garrison E, Marth G. Haplotype-based variant detection from short-read sequencing. arXiv:12073907 [q-bio] [Internet]. 2012 [cited 2020 Jul 3]; Available from: http://arxiv.org/abs/1207.3907 Snippy [Internet]. Available from: https://github.com/tseemann/snippy Medaka [Internet]. Available from: https://github.com/Nanoporetech/medaka Loman NJ, Quick J, Simpson JT. A complete bacterial genome assembled de novo using only nanopore sequencing data. Nat Methods. Nature Publishing Group. 2015;12(8):733–5. https://doi.org/10.1038/nmeth.3444. Louca S, Mazel F, Doebeli M, Parfrey LW. A census-based estimate of Earth's bacterial and archaeal diversity. PLOS Biology. Public Library of Science. 2019;17:e3000106. Brockhurst MA, Harrison E, Hall JPJ, Richards T, McNally A, MacLean C. 
The ecology and evolution of pangenomes. Curr Biol. 2019;29(20):R1094–103. https://doi.org/10.1016/j.cub.2019.08.012. Harrison E, Brockhurst MA. Plasmid-mediated horizontal gene transfer is a coevolutionary process. Trends Microbiol. Elsevier. 2012;20(6):262–7. https://doi.org/10.1016/j.tim.2012.04.003. Harrison E, Dytham C, Hall JPJ, Guymer D, Spiers AJ, Paterson S, et al. Rapid compensatory evolution promotes the survival of conjugative plasmids. Mobile Genet Elements. 2016;6(3):e1179074. https://doi.org/10.1080/2159256X.2016.1179074. Loftie-Eaton W, Bashford K, Quinn H, Dong K, Millstein J, Hunter S, et al. Compensatory mutations improve general permissiveness to antibiotic resistance plasmids. Nat Ecol Evol. 2017;1(9):1354–63. https://doi.org/10.1038/s41559-017-0243-2. Gori A, Harrison OB, Mlia E, Nishihara Y, Chan JM, Msefula J, et al. Pan-GWAS of Streptococcus agalactiae highlights lineage-specific genes associated with virulence and niche adaptation. mBio [Internet]. American Society for Microbiology; 2020 [cited 2020 Jul 16];11. Available from: https://mbio.asm.org/content/11/3/e00728-20 Bonnet R. Growing group of extended-spectrum β-lactamases: the CTX-M enzymes. Antimicrob Agents Chemother. 2004;48(1):1–14. https://doi.org/10.1128/AAC.48.1.1-14.2004. Louwen R, Staals RHJ, Endtz HP, van Baarlen P, van der Oost J. The role of CRISPR-Cas systems in virulence of pathogenic bacteria. Microbiol Mol Biol Rev. American Society for Microbiology. 2014;78(1):74–88. https://doi.org/10.1128/MMBR.00039-13. Horvath P, Romero DA, Coûté-Monvoisin A-C, Richards M, Deveau H, Moineau S, et al. Diversity, activity, and evolution of CRISPR loci in Streptococcus thermophilus. J Bacteriol. American Society for Microbiology Journals. 2008;190:1401–12. Pritt J, Chen N-C, Langmead B. FORGe: prioritizing variants for graph genomes. Genome Biology. 2018;19(1):220. https://doi.org/10.1186/s13059-018-1595-x. Norri T, Cazaux B, Kosolobov D, Mäkinen V. Linear time minimum segmentation enables scalable founder reconstruction. Algorithms for Molecular Biology. 2019;14(1):12. https://doi.org/10.1186/s13015-019-0147-6. Vos M, Didelot X. A comparison of homologous recombination rates in bacteria and archaea. ISME J. 2009;3(2):199–208. https://doi.org/10.1038/ismej.2008.93. Oliveira PH, Touchon M, Cury J, Rocha EPC. The chromosomal organization of horizontal gene transfer in bacteria. Nat Commun. 2017;8(1):841. https://doi.org/10.1038/s41467-017-00808-w. Didelot X, Méric G, Falush D, Darling AE. Impact of homologous and non-homologous recombination in the genomic evolution of Escherichia coli. BMC Genomics. 2012;13(1):256. https://doi.org/10.1186/1471-2164-13-256. Seemann T. Prokka: rapid prokaryotic genome annotation. Bioinformatics. 2014;30(14):2068–9. https://doi.org/10.1093/bioinformatics/btu153. Tonkin-Hill G, MacAlasdair N, Ruis C, Weimann A, Horesh G, Lees JA, et al. Producing polished prokaryotic pangenomes with the Panaroo pipeline. Genome Biology. 2020;21(1):180. https://doi.org/10.1186/s13059-020-02090-4. Page AJ, Cummins CA, Hunt M, Wong VK, Reuter S, Holden MTG, et al. Roary: rapid large-scale prokaryote pan genome analysis. Bioinformatics. 2015;31(22):3691–3. https://doi.org/10.1093/bioinformatics/btv421. Horesh G, Blackwell G, Tonkin-Hill G, Corander J, Heinz E, Thomson NR. A comprehensive and high-quality collection of E. coli genomes and their genes. bioRxiv. Cold Spring Harbor Laboratory. 2020;2020(09):21.293175. Lees JA, Vehkala M, Välimäki N, Harris SR, Chewapreecha C, Croucher NJ, et al. 
Sequence element enrichment analysis to determine the genetic basis of bacterial phenotypes. Nat Commun. Nature Publishing Group. 2016;7:1–8. Earle SG, Wu C-H, Charlesworth J, Stoesser N, Gordon NC, Walker TM, et al. Identifying lineage effects when controlling for population structure improves power in bacterial association studies. Nat Microbiol. Nature Publishing Group. 2016;1:1–8. Jaillard M, Lima L, Tournoud M, Mahé P, Van BA, Lacroix V, et al. A fast and agnostic method for bacterial genome-wide association studies: bridging the gap between k-mers and genetic events. PLOS Genetics. Public Library of Science. 2018;14:e1007758. MacQueen J. Some methods for classification and analysis of multivariate observations. The Regents of the University of California; 1967 [cited 2020 Jul 6]. Available from: https://projecteuclid.org/euclid.bsmsp/1200512992 Katoh K, Misawa K, Kuma K, Miyata T. MAFFT: a novel method for rapid multiple sequence alignment based on fast Fourier transform. Nucleic Acids Res. 2002;30(14):3059–66. https://doi.org/10.1093/nar/gkf436. Drezen E, Rizk G, Chikhi R, Deltel C, Lemaitre C, Peterlongo P, et al. GATB: Genome Assembly & Analysis Tool Box. Bioinformatics. Oxford Academic. 2014;30(20):2959–61. https://doi.org/10.1093/bioinformatics/btu406. Rizzi R, Sacomoto G, Sagot M-F. Efficiently listing bounded length st-paths. In: Jan K, Miller M, Froncek D, editors. Combinatorial Algorithms. Cham: Springer International Publishing; 2015. p. 318–29. https://doi.org/10.1007/978-3-319-19315-1_28. Wyres KL, Nguyen TNT, Lam MMC, Judd LM, van Vinh CN, Dance DAB, et al. Genomic surveillance for hypervirulence and multi-drug resistance in invasive Klebsiella pneumoniae from South and Southeast Asia. Genome Medicine. 2020;12(1):11. https://doi.org/10.1186/s13073-019-0706-y. De Maio N, Shaw LP, Hubbard A, George S, Sanderson ND, Swann J, et al. Comparison of long-read sequencing technologies in the hybrid assembly of complex bacterial genomes. Microb Genom. 2019;5. Decraene V, Phan HTT, George R, Wyllie DH, Akinremi O, Aiken Z, et al. A large, refractory nosocomial outbreak of Klebsiella pneumoniae carbapenemase-producing Escherichia coli demonstrates carbapenemase gene outbreaks involving sink sites require novel approaches to infection control. Antimicrob Agents Chemother. 2018;62(12). https://doi.org/10.1128/AAC.01689-18. Greig D, Dallman T, Jenkins C. Oxford Nanopore sequencing elucidates a novel stx2f carrying prophage in a Shiga toxin producing Escherichia coli(STEC) O63:H6 associated with a case of haemolytic uremic syndrome (HUS). Access Microbiology. Microbiology Society; 2019;1:782. Greig DR, Dallman TJ, Hopkins KL, Jenkins C. MinION nanopore sequencing identifies the position and structure of bacterial antibiotic resistance determinants in a multidrug-resistant strain of enteroaggregative Escherichia coli. Microbial Genomics. Microbiology Society; 2018;4:e000213. Rang FJ, Kloosterman WP, de Ridder J. From squiggle to basepair: computational approaches for improving nanopore sequencing read accuracy. Genome Biology. 2018;19(1):90. https://doi.org/10.1186/s13059-018-1462-9. Treangen TJ, Ondov BD, Koren S, Phillippy AM. The Harvest suite for rapid core-genome alignment and visualization of thousands of intraspecific microbial genomes. Genome Biology. 2014;15(11):524. https://doi.org/10.1186/s13059-014-0524-x. Stamatakis A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014;30(9):1312–3. 
https://doi.org/10.1093/bioinformatics/btu033. Croucher NJ, Page AJ, Connor TR, Delaney AJ, Keane JA, Bentley SD, et al. Rapid phylogenetic analysis of large samples of recombinant bacterial whole genome sequences using Gubbins. Nucleic Acids Res. 2015;43(3):e15. https://doi.org/10.1093/nar/gku1196. Wick RR, Judd LM, Gorrie CL, Holt KE. Unicycler: Resolving bacterial genome assemblies from short and long sequencing reads. PLOS Computational Biology. Public Library of Science; 2017;13:e1005595. Vaser R, Sović I, Nagarajan N, Šikić M. Fast and accurate de novo genome assembly from long uncorrected reads. Genome Res. 2017;27(5):737–46. https://doi.org/10.1101/gr.214270.116. Walker BJ, Abeel T, Shea T, Priest M, Abouelliel A, Sakthikumar S, et al. Pilon: an integrated tool for comprehensive microbial variant detection and genome assembly improvement. PLOS ONE. Public Library of Science. 2014;9:e112963. Kurtz S, Phillippy A, Delcher AL, Smoot M, Shumway M, Antonescu C, et al. Versatile and open software for comparing large genomes. Genome Biol. 2004;5(2):R12. https://doi.org/10.1186/gb-2004-5-2-r12. Köster J, Rahmann S. Snakemake-a scalable bioinformatics workflow engine. Bioinformatics. 2018;34(20):3600. https://doi.org/10.1093/bioinformatics/bty350. Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv:13033997 [q-bio] [Internet]. 2013 [cited 2020 Nov 2]; Available from: http://arxiv.org/abs/1303.3997 Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012;9(4):357–9. https://doi.org/10.1038/nmeth.1923. Merkel D. Docker: lightweight Linux containers for consistent development and deployment. Linux J. 2014;2014:2:2. Kurtzer GM, Sochat V, Bauer MW. Singularity: scientific containers for mobility of compute. PLOS ONE. Public Library of Science; 2017;12:e0177459. Colquhoun R, Hall M, Lima L, Roberts L, Malone K, Hunt M, et al. Pandora: nucleotide-resolution bacterial pan-genomics with reference graphs. Datasets. Gene MSAs. Available from: https://doi.org/10.6084/m9.figshare.14781732.v1 Colquhoun R, Hall M, Lima L, Roberts L, Malone K, Hunt M, et al. Pandora: nucleotide-resolution bacterial pan-genomics with reference graphs. Datasets. E coli PanRG. Available from: https://doi.org/10.6084/m9.figshare.14781756.v1 Colquhoun R, Hall M, Lima L, Roberts L, Malone K, Hunt M, et al. Pandora: nucleotide-resolution bacterial pan-genomics with reference graphs. Software. Code repositories for pandora and the paper analysis framework. Available from: https://doi.org/10.6084/m9.figshare.14815899.v2 We are grateful to the REHAB consortium (https://modmedmicro.nsms.ox.ac.uk/rehab/) and the Transmission of Carbapenemase-producing Enterobacteriaceae (TRACE) study investigators for sharing sequencing data (for CFT073 and H131800734) in support of this work. We would like to thank Sion Bayliss and Ed Thorpe for discussions and help with Piggy. We are grateful to Kelly Wyres for sharing sequence data for the Australian samples, and to Tim Dallman and David Greig for sharing their data from Public Health England. We would like to thank the following for helpful conversations during the prolonged genesis of this project: Gil McVean, Derrick Crook, Eduardo Rocha, Bill Hanage, Ed Feil, Sion Bayliss, Ed Thorpe, Richard Neher, Camille Marchet, Rayan Chikhi, Kat Holt, Claire Gorrie, Rob Patro, Fatemeh Almodaresi, Nicole Stoesser, Liam Shaw, Phelim Bradley, and Sorina Maciuca. 
We would also like to thank the reviewers, with whose help the study was significantly improved. The review history is available as additional file 2. Andrew Cosgrove was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

RMC was funded by a Wellcome Trust PhD studentship (105279/Z/14/Z), and ZI was partially funded by a Wellcome Trust/Royal Society Sir Henry Dale Fellowship (102541/Z/13/Z). Open Access funding enabled and organized by Projekt DEAL.

Authors and affiliations:
European Bioinformatics Institute, Hinxton, Cambridge, CB10 1SD, UK: Rachel M. Colquhoun, Michael B. Hall, Leandro Lima, Leah W. Roberts, Kerri M. Malone, Martin Hunt, Brice Letcher & Zamin Iqbal
Wellcome Trust Centre for Human Genetics, University of Oxford, Roosevelt Drive, Oxford, UK: Rachel M. Colquhoun
Institute of Evolutionary Biology, Ashworth Laboratories, University of Edinburgh, Edinburgh, UK: Rachel M. Colquhoun
Nuffield Department of Medicine, University of Oxford, Oxford, UK: Martin Hunt, Sophie George & Louise Pankhurst
Department of Infectious Diseases, Central Clinical School, Monash University, Melbourne, Victoria, 3004, Australia: Jane Hawkey
Department of Zoology, University of Oxford, Mansfield Road, Oxford, UK: Louise Pankhurst

RMC designed and implemented the fundamental data structures, and RCC, map and compare algorithms. MBH designed and implemented the de novo variant discovery component. LL designed and implemented the update algorithm for RCC. LL optimized the codebase. RMC, MBH, and LL designed and implemented (several iterations of) the evaluation pipeline, one component of which was written by MH. BL reimplemented and improved the RCC codebase. JH, SG, and LP sequenced 18/20 of the samples. LWR, MBH, LL, KM, and ZI analyzed and visualized the 20-way data. ZI designed the study. ZI and RMC wrote the bulk of the paper, LL and MBH wrote sections, and all authors read and improved drafts. The author(s) read and approved the final manuscript.

Correspondence to Zamin Iqbal.

The authors declare that they have no competing interests.

Additional file 1: A: Supplementary figures 1-13. B: Supplementary tables 1-2. C: Supplementary algorithms 1-3. D: Detailed data availability. E: Supplementary animations 1-2.

Additional file 2: Review history.

Colquhoun, R.M., Hall, M.B., Lima, L. et al. Pandora: nucleotide-resolution bacterial pan-genomics with reference graphs. Genome Biol 22, 267 (2021). https://doi.org/10.1186/s13059-021-02473-1

Keywords: Pan-genome, Genome graph, Accessory genome, Graph genomes
CommonCrawl
Sample records for alamos neutron scattering Neutron Scattering Activity at Los Alamos National Laboratory Bourke, M.A.M. The nondestructive and bulk penetrating aspects of neutron scattering techniques make them well suited to the study of materials from the nuclear energy sector (particularly those which are radioactive). This report provides a summary of the facility, LANSCE, which is used at Los Alamos National laboratory for these studies. It also provides a brief description of activities related to line broadening studies of radiation damage and recent imaging and offers observations about the outlook for future activity. The work alluded to below was performed during the period of the CRP by researchers that included but were not limited to; Sven Vogel and Don Brown of Los Alamos National Laboratory; and Anton Tremsin of the University of California, Berkeley. (author) LANSCE (Los Alamos Neutron Scattering Center) target system performance Russell, G.J.; Gilmore, J.S.; Robinson, H.; Legate, G.L.; Bridge, A.; Sanchez, R.J.; Brewton, R.J.; Woods, R.; Hughes, H.G. III The authors measured neutron beam fluxes at LANSCE using gold foil activation techniques. They did an extensive computer simulation of the as-built LANSCE Target/Moderator/Reflector/Shield geometry. They used this mockup in a Monte Carlo calculation to predict LANSCE neutronic performance for comparison with measured results. For neutron beam fluxes at 1 eV, the ratio of measured data to calculated varies from ∼0.6-0.9. The computed 1 eV neutron leakage at the moderator surface is 3.9 x 10 10 n/eV-sr-s-μA for LANSCE high-intensity water moderators. The corresponding values for the LANSCE high-resolution water moderator and the liquid hydrogen moderator are 3.3 and 2.9 x 10 10 , respectively. LANSCE predicted moderator intensities (per proton) for a tungsten target are essentially the same as ISIS predicted moderator intensities for a depleted uranium target. The calculated LANSCE steady state unperturbed thermal (E 13 n/cm 2 -s. The unique LANSCE split-target/flux-trap-moderator system is performing exceedingly well. The system has operated without a target or moderator change for over three years at nominal proton currents of 25 μA of 800-MeV protons. 17 refs., 8 figs., 3 tabs The LANSCE (Los Alamos Neutron Scattering Center) target data collection system Kernodle, A.K. The Los Alamos Neutron Scattering Center (LANSCE) Target Data Collection System is the result of an effort to provide a base of information from which to draw conclusions on the performance and operational condition of the overall LANSCE target system. During the conceptualization of the system, several goals were defined. A survey was made of both custom-made and off-the-shelf hardware and software that were capable of meeting these goals. The first stage of the system was successfully implemented for the LANSCE run cycle 52. From the operational experience gained thus far, it appears that the LANSCE Target Data Collection System will meet all of the previously defined requirements Opportunities for research program development at LANSCE (Los Alamos Neutron Scattering Center Bowman, C.D. The availability of intense neutron beams from facilities associated with the Proton Storage Ring and LANSCE has stimulated the development of neutron research well beyond the mainstream of neutron scattering. A description of this extended program is given along with prospects for further growth. 
23 refs., 11 figs., 4 tabs The use of contrast variation in small angle neutron scattering on the low-Q diffractometer at the Manuel Lujuan Jr. Neutron Scattering Center at Los Alamos National Laboratory (LANSCE) Spaccavento, J. As a Department of Energy Teacher Research Associate at Los Alamos National Laboratory this past summer, the author was given the opportunity to exit the class-room and enter the world of intense scientific research for an eight week period. In this paper the author briefly describes the Manual Lujan Jr. Neutron Scattering Center at Los Alamos, then focuses specifically on the Low-Q Diffractometer which was the instrument he worked on. The author details one specific experimental technique namely open-quotes Contrast Variation,close quotes and closes by briefly presenting several other interesting applications of neutron scattering Lujan at Los Alamos Neutron Science Center (LANSCE) Data.gov (United States) Federal Laboratory Consortium — The Lujan Neutron Scattering Center (Lujan Center) at Los Alamos National Laboratory is an intense pulsed neutrons source operating at a power level of 80 -100 kW.... Solutions for implementing time-of-flight techniques in low-angle neutron scattering, as realized on the Low-Q Diffractometer at Los Alamos Hjelm, R.P. Jr.; Seeger, P.A. The implementation of small-angle (Low-momentum transfer) neutron scattering at pulsed spallation sources, using time of flight methods, has meant the introduction of some new ideas in instrument design, data acquisition, data reduction and computer management of the experiment and the data. Here we recount some of the salient aspects of solutions for implementing time of fight small-angle neutron scattering instruments at pulsed sources, as realized on the Low-Q Diffractometer, LQD, at Los Alamos. We consider, fortlier, some of the problems that are yet to be solved, and take a short excursion into the future of SANS instrumentation at pulsed sources Time-of-flight small-angle-neutron-scattering data reduction and analysis at LANSCE (Los Alamos Neutron Scattering Center) with program SMR Hjelm, R.P. Jr.; Seegar, P.A. A user-friendly, integrated system, SMR, for the display, reduction and analysis of data from time-of-flight small-angle neutron diffractometers is described. Its purpose is to provide facilities for data display and assessment and to provide these facilities in near real time. This allows the results of each scattering measurement to be available almost immediately, and enables the experimenter to use the results of a measurement as a basis for other measurements in the same instrument allocation. 8 refs., 11 figs The Los Alamos Neutron Science Center Lisowski, Paul W.; Schoenberg, Kurt F. The Los Alamos Neutron Science Center, or LANSCE, uses the first truly high-current medium-energy proton linear accelerator, which operated originally at a beam power of 1 MW for medium-energy nuclear physics. Today LANSCE continues operation as one of the most versatile accelerator-based user facilities in the world. During eight months of annual operation, scientists from around the world work at LANSCE to execute an extraordinarily broad program of defense and civilian research. Several areas operate simultaneously. The Lujan Neutron Scattering Center (Lujan Center) is a moderated spallation source (meV to keV), the Weapons Neutron Research Facility (WNR) is a bare spallation neutron source (keV to 800 MeV), and a new ultra-cold neutron source will be operational in 2005. 
These sources give LANSCE the ability to produce and use neutrons with energies that range over 14 orders of magnitude. LANSCE also supplies beam to WNR and two other areas for applications requiring protons. In a proton radiography (pRad) area, a sequence of narrow proton pulses is transmitted through shocked materials and imaged to study dynamic properties. In 2005, LANSCE began operating a facility that uses 100-MeV protons to produce medical radioisotopes. To sustain a vigorous program beyond this decade, LANSCE has embarked on a project to refurbish key elements of the facility and to plan capabilities beyond those that presently exist Los Alamos Neutron Science Center Kippen, Karen Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States) For more than 30 years the Los Alamos Neutron Science Center (LANSCE) has provided the scientific underpinnings in nuclear physics and material science needed to ensure the safety and surety of the nuclear stockpile into the future. In addition to national security research, the LANSCE User Facility has a vibrant research program in fundamental science, providing the scientific community with intense sources of neutrons and protons to perform experiments supporting civilian research and the production of medical and research isotopes. Five major experimental facilities operate simultaneously. These facilities contribute to the stockpile stewardship program, produce radionuclides for medical testing, and provide a venue for industrial users to irradiate and test electronics. In addition, they perform fundamental research in nuclear physics, nuclear astrophysics, materials science, and many other areas. The LANSCE User Program plays a key role in training the next generation of top scientists and in attracting the best graduate students, postdoctoral researchers, and early-career scientists. The U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) —the principal sponsor of LANSCE—works with the Office of Science and the Office of Nuclear Energy, which have synergistic long-term needs for the linear accelerator and the neutron science that is the heart of LANSCE. The Los Alamos Neutron Science Center Spallation Neutron Sources Nowicki, Suzanne F.; Wender, Stephen A.; Mocko, Michael The Los Alamos Neutron Science Center (LANSCE) provides the scientific community with intense sources of neutrons, which can be used to perform experiments supporting civilian and national security research. These measurements include nuclear physics experiments for the defense program, basic science, and the radiation effect programs. This paper focuses on the radiation effects program, which involves mostly accelerated testing of semiconductor parts. When cosmic rays strike the earth's atmosphere, they cause nuclear reactions with elements in the air and produce a wide range of energetic particles. Because neutrons are uncharged, they can reach aircraft altitudes and sea level. These neutrons are thought to be the most important threat to semiconductor devices and integrated circuits. The best way to determine the failure rate due to these neutrons is to measure the failure rate in a neutron source that has the same spectrum as those produced by cosmic rays. Los Alamos has a high-energy and a low-energy neutron source for semiconductor testing. Both are driven by the 800-MeV proton beam from the LANSCE accelerator. 
The high-energy neutron source at the Weapons Neutron Research (WNR) facility uses a bare target that is designed to produce fast neutrons with energies from 100 keV to almost 800 MeV. The measured neutron energy distribution from WNR is very similar to that of the cosmic-ray-induced neutrons in the atmosphere. However, the flux provided at the WNR facility is typically 5×107 times more intense than the flux of the cosmic-ray-induced neutrons. This intense neutron flux allows testing at greatly accelerated rates. An irradiation test of less than an hour is equivalent to many years of neutron exposure due to cosmic-ray neutrons. The low-energy neutron source is located at the Lujan Neutron Scattering Center. It is based on a moderated source that provides useful neutrons from subthermal energies to ~100 keV. The characteristics of these sources The Los Alamos Neutron Science Center (LANSCE) provides the scientific community with intense sources of neutrons, which can be used to perform experiments supporting civilian and national security research. These measurements include nuclear physics experiments for the defense program, basic science, and the radiation effect programs. This paper focuses on the radiation effects program, which involves mostly accelerated testing of semiconductor parts. When cosmic rays strike the earth's atmosphere, they cause nuclear reactions with elements in the air and produce a wide range of energetic particles. Because neutrons are uncharged, they can reach aircraft altitudes and sea level. These neutrons are thought to be the most important threat to semiconductor devices and integrated circuits. The best way to determine the failure rate due to these neutrons is to measure the failure rate in a neutron source that has the same spectrum as those produced by cosmic rays. Los Alamos has a high-energy and a low-energy neutron source for semiconductor testing. Both are driven by the 800-MeV proton beam from the LANSCE accelerator. The high-energy neutron source at the Weapons Neutron Research (WNR) facility uses a bare target that is designed to produce fast neutrons with energies from 100 keV to almost 800 MeV. The measured neutron energy distribution from WNR is very similar to that of the cosmic-ray-induced neutrons in the atmosphere. However, the flux provided at the WNR facility is typically 5×107 times more intense than the flux of the cosmic-ray-induced neutrons. This intense neutron flux allows testing at greatly accelerated rates. An irradiation test of less than an hour is equivalent to many years of neutron exposure due to cosmic-ray neutrons. The low-energy neutron source is located at the Lujan Neutron Scattering Center. It is based on a moderated source that provides useful neutrons from subthermal energies to ∼100 keV. The characteristics of these sources, and The annual report on hand gives an overview of the research work carried out in the Laboratory for Neutron Scattering (LNS) of the ETH Zuerich in 1990. Using the method of neutron scattering, it is possible to examine in detail the static and dynamic properties of the condensed material. In accordance with the multidisciplined character of the method, the LNS has for years maintained a system of intensive co-operation with numerous institutes in the areas of biology, chemistry, solid-state physics, crystallography and materials research. In 1990 over 100 scientists from more than 40 research groups both at home and abroad took part in the experiments. 
It was again a pleasure to see the number of graduate students present, who were studying for a doctorate and who could be introduced into the neutron scattering during their stay at the LNS and thus were in the position to touch on central ways of looking at a problem in their dissertation using this modern experimental method of solid-state research. In addition to the numerous and interesting ways of formulating the questions to explain the structure, nowadays the scientific programme increasingly includes particularly topical studies in connection with high temperature-supraconductors and materials research Neutron scattering. Lectures Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner The following topics are dealt with: Neutron scattering in contemporary research, neutron sources, symmetry of crystals, diffraction, nanostructures investigated by small-angle neutron scattering, the structure of macromolecules, spin dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic scattering, strongly correlated electrons, dynamics of macromolecules, applications of neutron scattering. (HSI) Operational status of the Los Alamos neutron science center (LANSCE) Jones, Kevin W [Los Alamos National Laboratory; Erickson, John L [Los Alamos National Laboratory; Schoenberg, Kurt F [Los Alamos National Laboratory The Los Alamos Neutron Science Center (LANSCE) accelerator and beam delivery complex generates the proton beams that serve three neutron production sources; the thermal and cold source for the Manuel Lujan Jr. Neutron Scattering Center, the Weapons Neutron Research (WNR) high-energy neutron source, and a pulsed Ultra-Cold Neutron Source. These three sources are the foundation of strong and productive multi-disciplinary research programs that serve a diverse and robust user community. The facility also provides multiplexed beams for the production of medical radioisotopes and proton radiography of dynamic events. The recent operating history of these sources will be reviewed and plans for performance improvement will be discussed, together with the underlying drivers for the proposed LANSCE Refurbishment project. The details of this latter project are presented in a separate contribution. The following topics are dealt with: Neutron sources, symmetry of crystals, nanostructures investigated by small-angle neutron scattering, structure of macromolecules, spin dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic neutron scattering, strongly correlated electrons, polymer dynamics, applications of neutron scattering. (HSI) Workshop on Probing Frontiers in Matter with Neutron Scattering, Wrap-up Session Chaired by John C. Browne on December 14, 1997, at Fuller Lodge, Los Alamos, New Mexico Mezei, F.; Thompson, J. The Workshop on Probing Frontiers in Matter with Neutron Scattering consisted of a series of lectures and discussions about recent highlights in neutron scattering. In this report, we present the transcript of the concluding discussion session (wrap-up session) chaired by John C. Browne, Director of Los Alamos National Laboratory. The workshop had covered a spectrum of topics ranging from high T c superconductivity to polymer science, from glasses to molecular biology, a broad review aimed at identifying trends and future needs in condensed matter research. 
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner (eds.)
The following topics are dealt with: neutron sources, neutron properties and elastic scattering, correlation functions measured by scattering experiments, symmetry of crystals, applications of neutron scattering, polarized-neutron scattering and polarization analysis, structural analysis, magnetic and lattice excitations studied by inelastic neutron scattering, macromolecules and self-assembly, dynamics of macromolecules, correlated electrons in complex transition-metal oxides, surfaces, interfaces, and thin films investigated by neutron reflectometry, nanomagnetism.

Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner (eds.)
The following topics are dealt with: neutron sources, symmetry of crystals, diffraction, nanostructures investigated by small-angle neutron scattering, the structure of macromolecules, spin-dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic scattering, strongly correlated electrons, dynamics of macromolecules, applications of neutron scattering.

The Los Alamos Intense Neutron Source
Nebel, R.A.; Barnes, D.C.; Bollman, R.; Eden, G.; Morrison, L.; Pickrell, M.M.; Reass, W.
The Intense Neutron Source (INS) is an Inertial Electrostatic Confinement (IEC) fusion device presently under construction at Los Alamos National Laboratory. It is designed to produce 10^11 neutrons per second steady-state using D-T fuel. Phase 1 operation of this device will be as a standard three-grid IEC ion-focus device. Expected performance has been predicted by scaling from a previous IEC device. Phase 2 operation of this device will utilize a new operating scheme, the Periodically Oscillating Plasma Sphere (POPS). This scheme is related to both the Spherical Reflect Diode and the Oscillating Penning Trap. With this type of operation the authors hope to improve plasma neutron production to about 10^13 neutrons per second.
Furrer, A.
This report contains the text of 16 lectures given at the Summer School and the report on a panel discussion entitled 'The relative merits and complementarities of X-rays, synchrotron radiation, and steady and pulsed neutron sources'.

Fayer, Michael J.; Gee, Glendon W.
The neutron probe is a standard tool for measuring soil water content. This article provides an overview of the underlying theory, describes the methodology for its calibration and use, discusses example applications, and identifies the safety issues. Soil water makes land-based life possible by satisfying plant water requirements, serving as a medium for nutrient movement to plant roots and nutrient cycling, and controlling the fate and transport of contaminants in the soil environment. Therefore, a successful understanding of the dynamics of plant growth, nutrient cycling, and contaminant behavior in the soil requires knowledge of the soil water content as well as its spatial and temporal variability. After more than 50 years, neutron probes remain the most reliable tool available for field monitoring of soil water content. Neutron probes provide integrated measurements over relatively large volumes of soil and, with proper access, allow for repeated sampling of the subsurface at the same locations. The limitations of neutron probes include costly and time-consuming manual operation, lack of data automation, and costly regulatory requirements. As more non-radioactive systems for soil water monitoring are developed to provide automated profiling capabilities, neutron-probe usage will likely decrease. Until then, neutron probes will continue to be a standard for reliable measurements of field water contents in soils around the globe.

Neutron Scattering Software
A new portal for neutron scattering has just been established. Listed software includes KUPLOT (data plotting and fitting software) and ILL/TAS (Matlab programs for analyzing triple-axis data).

Polarized Neutron Scattering
Roessli, B.; Böni, P.
The technique of polarized neutron scattering is reviewed with emphasis on applications. Many examples of the usefulness of the method in various fields of physics are given, such as the determination of spin density maps, the measurement of complex magnetic structures with spherical neutron polarimetry, inelastic neutron scattering, and the separation of coherent and incoherent scattering with the help of the generalized XYZ method.

Plans for an Ultra Cold Neutron source at Los Alamos
Seestrom, S.J.; Bowles, T.J.; Hill, R.; Greene, G.L. (Los Alamos National Laboratory, NM, United States)
Ultra Cold Neutrons (UCN) can be produced at spallation sources using a variety of techniques. To date the technique used has been to Bragg scatter and Doppler shift cold neutrons into UCN from a moving crystal. This is particularly applicable to short-pulse spallation sources. We are presently constructing a UCN source at LANSCE using this method. In addition, large gains in UCN density should be possible using cryogenic UCN sources. Research is under way at Gatchina to demonstrate the technical feasibility of a frozen-deuterium source. If successful, a source of this type could be implemented at a future spallation source, such as the long-pulse source being planned at Los Alamos, with a UCN density that may be two orders of magnitude higher than that presently available at reactors.
Neutron scattering and magnetism
Mackintosh, A.R.
Those properties of the neutron which make it a unique tool for the study of magnetism are described. The scattering of neutrons by magnetic solids is briefly reviewed, with emphasis on the information on the magnetic structure and dynamics which is inherent in the scattering cross-section. The contribution of neutron scattering to our understanding of magnetic ordering, excitations and phase transitions is illustrated by experimental results on a variety of magnetic crystals.

Diffuse scattering of neutrons
Novion, C.H. de
The use of neutron scattering to study atomic disorder in metals and alloys is described. The diffuse elastic scattering of neutrons by a perfect crystal lattice leads to a diffraction spectrum containing only Bragg reflections. The existence of disorder in the crystal results in intensity and position modifications to these reflections and, above all, in the appearance of a low-intensity scattering between the Bragg peaks. The elastic scattering of neutrons is treated in this text, i.e. the measurement of the number of scattered neutrons having the same energy as the incident neutrons. Such measurements yield information on the static disorder in the crystal and on time-averaged fluctuations in composition and atomic displacements.

Neutron scattering. Experiment manuals
The following topics are dealt with: the thermal triple-axis spectrometer PUMA, the high-resolution powder diffractometer SPODI, the hot single-crystal diffractometer HEiDi for structure analysis with neutrons, the backscattering spectrometer SPHERES, neutron polarization analysis with the time-of-flight spectrometer DNS, the neutron spin-echo spectrometer J-NSE, small-angle neutron scattering with the KWS-1 and KWS-2 diffractometers, the very-small-angle neutron scattering diffractometer with focusing mirror KWS-3, the resonance spin-echo spectrometer RESEDA, the reflectometer TREFF, and the time-of-flight spectrometer TOFTOF.

Scattering with polarized neutrons
Schweizer, J.
In the history of neutron scattering, it was shown very early that the use of polarized neutron beams brings much more information than the usual scattering with unpolarized neutrons. We develop here the different scattering methods that involve polarized neutrons: 1) polarized beams without polarization analysis, the flipping-ratio method; 2) polarized beams with uniaxial polarization analysis; 3) polarized beams with spherical polarization analysis. For all these scattering methods, we give examples of the physical problems which can be solved, particularly in the field of magnetism: investigation of complex magnetic structures, investigation of spin or magnetization densities in metals, insulators and molecular compounds, separation of magnetic and nuclear scattering, investigation of magnetic properties of liquids and amorphous materials and even, for non-magnetic materials, separation between coherent and incoherent scattering.
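As background to the flipping-ratio method listed under 1) above (standard textbook material, not part of the Schweizer abstract): in the simplest case of a centrosymmetric crystal with ideal beam polarization parallel to the magnetization and no polarization analysis, the flipping ratio of a Bragg reflection reduces to

    R(\mathbf{Q}) \;=\; \frac{I^{+}(\mathbf{Q})}{I^{-}(\mathbf{Q})}
                  \;=\; \frac{\bigl[F_N(\mathbf{Q}) + F_M(\mathbf{Q})\bigr]^{2}}
                             {\bigl[F_N(\mathbf{Q}) - F_M(\mathbf{Q})\bigr]^{2}}

so that, with the nuclear structure factor F_N known from unpolarized data, the magnetic structure factor F_M (and hence the spin or magnetization density) can be extracted reflection by reflection.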
Introduction to neutron scattering
Fischer, W.E. (Paul Scherrer Institute (PSI), Villigen, Switzerland)
We give here an introduction to the theoretical principles of neutron scattering. The relationship between scattering and correlation functions is particularly emphasized. Within the framework of linear response theory (justified by the weakness of the basic interaction), the relation between fluctuation and dissipation is discussed. This general framework explains the particular power of neutron scattering as an experimental method.

Neutron-proton scattering
Doll, P.
Neutron-proton scattering as a fundamental interaction process below and above one hundred MeV is discussed. Quark-model-inspired interactions and phenomenological potential models are described. The seminar also indicates the experimental improvements needed for achieving new precise scattering data. Concluding remarks indicate the relevance of nucleon-nucleon scattering results to finite nuclei.

Polarimetric neutron scattering
Tasset, F.
Polarimetric neutron scattering is introduced, both by explaining methodological issues and by describing the corresponding instrumental developments. After a short overview of neutron spin polarization and the neutron polarization 3D vector, a pictorial approach to the microscopic theory is used to show how a polarized beam interacts with lattice and magnetic Fourier components in a crystal. Examples are given of the use of Spherical Neutron Polarimetry (SNP) and the corresponding Cryopad polarimeter for the investigation of non-collinear magnetic structures.

Los Alamos National Laboratory Weapons Neutron Research Facility
Woods, R.
The Weapons Neutron Research (WNR) spallation neutron source utilizes 800-MeV protons from the Los Alamos Meson Physics linac. The proton beam transport system, the target systems, and the data acquisition and control system are described. Operating experience, present status, and planned improvements are discussed.

Virtual neutron scattering experiments
Overgaard, Julie Hougaard; Bruun, Jesper; May, Michael
We describe how virtual experiments can be utilized in a learning design that prepares students for hands-on experiments at large-scale facilities. We illustrate the design by showing how virtual experiments are used at the Niels Bohr Institute in a master-level course on neutron scattering. In the last week of the course, students travel to a large-scale neutron scattering facility to perform real neutron scattering experiments. Through student interviews and survey answers, we argue that the virtual training prepares the students to engage more fruitfully with experiments by letting them focus on physics and data rather than the overwhelming instrumentation. We argue that this is because they can transfer their virtual experimental experience to the real-life situation. However, we also find that learning is still situated in the sense that only knowledge of particular experiments is transferred.

Deep inelastic neutron scattering
Mayers, J.
The report is based on an invited talk given at the conference "Neutron Scattering at ISIS: Recent Highlights in Condensed Matter Research", held in Rome in 1988, and is intended as an introduction to the techniques of deep inelastic neutron scattering. The subject is discussed under the following topic headings: the impulse approximation (IA), scaling behaviour, kinematical consequences of energy and momentum conservation, examples of measurements, derivation of the IA, the IA in a harmonic system, and validity of the IA in neutron scattering.
The following topics are dealt with: the thermal triple-axis spectrometer PUMA, the high-resolution powder diffractometer SPODI, the hot single-crystal diffractometer HEiDi, the three-axis spectrometer PANDA, the backscattering spectrometer SPHERES, neutron polarization analysis with DNS, the neutron spin-echo spectrometer J-NSE, small-angle neutron scattering at KWS-1 and KWS-2, a very-small-angle neutron scattering diffractometer with focusing mirror, the reflectometer TREFF, and the time-of-flight spectrometer TOFTOF.

Symposium on neutron scattering
Lehmann, M.S.; Saenger, W.; Hildebrandt, G.; Dachs, H.
Extended abstracts of the named symposium are presented. The first part of this report contains the abstracts of the lectures, the second those of the posters. Topics discussed at the symposium include neutron diffraction and neutron scattering studies in magnetism, solid-state chemistry and physics, and materials research. Some papers discussing instruments and methods are included too.

Introductory theory of neutron scattering
Gunn, J.M.F.
The paper comprises a set of six lecture notes which were delivered to the summer school on 'Neutron Scattering at a Pulsed Source', Rutherford Laboratory, United Kingdom, 1986. The lectures concern the physical principles of neutron scattering. The topics of the lectures include: diffraction, incoherent inelastic scattering, connection with the Schroedinger equation, magnetic scattering, coherent inelastic scattering, and surfaces and neutron optics.

Magnetic neutron scattering
Zaliznyak, I.A.; Lee, S.H.
Much of our understanding of the atomic-scale magnetic structure and the dynamical properties of solids and liquids was gained from neutron-scattering studies. Elastic and inelastic neutron spectroscopy provided physicists with an unprecedented, detailed access to spin structures, magnetic-excitation spectra, soft modes and critical dynamics at magnetic phase transitions, which is unrivaled by other experimental techniques. Because the neutron has no electric charge, it is an ideal weakly interacting and highly penetrating probe of matter's inner structure and dynamics. Unlike techniques using photon electric fields or charged particles (e.g., electrons, muons) that significantly modify the local electronic environment, neutron spectroscopy allows determination of a material's intrinsic, unperturbed physical properties. The method is not sensitive to extraneous charges, electric fields, or the imperfection of surface layers. Because the neutron is a highly penetrating and non-destructive probe, neutron spectroscopy can probe the microscopic properties of bulk materials (not just their surface layers) and study samples embedded in complex environments, such as cryostats, magnets, and pressure cells, which are essential for understanding the physical origins of magnetic phenomena. Neutron scattering is arguably the most powerful and versatile experimental tool for studying the microscopic properties of magnetic materials. The magnitude of the cross-section of neutron magnetic scattering is similar to the cross-section of nuclear scattering by short-range nuclear forces, and is large enough to provide measurable scattering from ordered magnetic structures and electron spin fluctuations. In the half-century or so that has passed since neutron beams with sufficient intensity for scattering applications became available with the advent of nuclear reactors, they have become indispensable tools for studying a variety of important areas of modern ...
Slow neutron scattering experiments
Moon, R.M.
Neutron scattering is a versatile technique that has been successfully applied to condensed-matter physics, biology, polymer science, chemistry, and materials science. The United States lost its leadership role in this field to Western Europe about 10 years ago. Recently, a modest investment in the United States in new facilities and a positive attitude on the part of the national laboratories toward outside users have resulted in a dramatic increase in the number of US scientists involved in neutron scattering research. Plans are being made for investments in new and improved facilities that could return the leadership role to the United States.

The Manuel Lujan Jr. Neutron Scattering Center
Goldstone, J.A.
High in the north-central mountains of Los Alamos, New Mexico, is the Manuel Lujan Jr. Neutron Scattering Center (LANSCE), a pulsed-spallation neutron source located at Los Alamos National Laboratory. At LANSCE, neutrons are produced by spallation when a pulsed 800-MeV proton beam impinges on a tungsten target. The proton pulses are provided by a linear accelerator and an associated Proton Storage Ring (PSR), which alters the intensity, time structure, and repetition rate of the pulses. In October 1986, LANSCE was designated a national user facility, with a formal user program initiated in 1988. In July 1989, the LANSCE facility was dedicated as the Manuel Lujan Jr. Neutron Scattering Center in honor of the long-term Congressman from New Mexico. At present, the PSR operates with a proton pulse width of 0.27 μs at 20 Hz and 80 μA, attaining the highest peak neutron flux in the world and coming close to its goal of 100 μA, which would yield a peak thermal neutron flux of 10^16 n cm^-2 s^-1. This paper discusses the target/moderator/reflector shield system, the LANSCE instruments, the facility improvement projects, and the user programs.
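For orientation (a back-of-the-envelope figure, not taken from the abstract), the quoted 80 μA average current at a 20 Hz repetition rate fixes the charge, and hence the number of protons, delivered per pulse:

    # Rough illustration of what 80 microamps at 20 Hz implies per pulse.
    e = 1.602e-19                       # proton charge in coulombs
    current = 80e-6                     # average proton current in amperes (quoted)
    rep_rate = 20.0                     # pulses per second (quoted)
    charge_per_pulse = current / rep_rate          # about 4 microcoulombs
    protons_per_pulse = charge_per_pulse / e       # about 2.5e13 protons
    print(f"{protons_per_pulse:.2e} protons per pulse")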
Neutron scattering in Australia
Knott, R.B. (Australian Nuclear Science and Technology Organisation, Menai, Australia)
Neutron scattering techniques have been part of the Australian scientific research community for the past three decades. The High Flux Australian Reactor (HIFAR) is a multi-use facility of modest performance that provides the only neutron source in the country suitable for neutron scattering. The limitations of HIFAR have been recognized and recently a Government-initiated inquiry sought to evaluate the future needs of a neutron source. In essence, the inquiry suggested that a delay of several years would enable a number of key issues to be resolved, and therefore a more appropriate decision to be made. In the meantime, use of the present source is being optimized, and where necessary research is being undertaken at major overseas neutron facilities on either a formal or an informal basis. Australia has, at present, a formal agreement with the Rutherford Appleton Laboratory (UK) for access to the spallation source ISIS. Various aspects of neutron scattering have been implemented on HIFAR, including investigations of the structure of biologically relevant molecules. One aspect of these investigations will be presented: preliminary results from a study of the interaction of the immunosuppressant drug cyclosporin-A with reconstituted membranes suggest that the hydrophobic drug interdigitates with the lipid chains.

Small angle neutron scattering
Bernardini, G.; Cherubini, G.; Fioravanti, A.; Olivi, A.
A method for the analysis of data derived from neutron small-angle scattering measurements has been developed for the case of homogeneous particles, starting from the basic theory without making any assumption about the form of the particle size distribution function. The experimental scattering curves are interpreted with the aid of the computer by means of an appropriate routine. The parameters obtained are compared with the corresponding ones derived from observations with the transmission electron microscope.

LANSCE: Los Alamos Neutron Science Center
Kippen, Karen Elizabeth
The principal goals of this project are to increase flux and improve resolution for neutron energies above 1 keV for nuclear physics experiments, and to preserve the current strong performance at thermal energies for materials science.
Los Alamos Neutron Science Center (LANSCE) Nuclear Science Facilities
Nelson, Ronald Owen; Wender, Steve (Los Alamos National Laboratory)
The Los Alamos Neutron Science Center (LANSCE) facilities for nuclear science consist of a high-energy "white" neutron source (Target 4) with six flight paths, three low-energy nuclear science flight paths at the Lujan Center, and a proton reaction area. The neutron beams produced at Target 4 complement those produced at the Lujan Center because they are of much higher energy and have shorter pulse widths. The neutron sources are driven by the 800-MeV proton beam of the LANSCE linear accelerator. With these facilities, LANSCE is able to deliver neutrons with energies ranging from a milli-electron-volt to several hundreds of MeV, as well as proton beams with a wide range of energy, time and intensity characteristics. The facilities, instruments and research programs are described briefly.

Fundamental symmetry studies at Los Alamos using epithermal neutrons
Bowman, C.D.; Bowman, J.D.; Yuan, V.W.
Fundamental symmetry studies using intense polarized beams of epithermal neutrons are under way at the LANSCE facility of Los Alamos National Laboratory. Three classes of symmetry experiments can be explored: parity violation, and time-reversal invariance violation for both parity-violating and parity-conserving observables. The experimental apparatus is described and its performance illustrated with examples of recent measurements. Possible improvements in the facilities and prospective experiments are discussed.

Cousin, Fabrice
Small Angle Neutron Scattering (SANS) is a technique that makes it possible to probe the three-dimensional structure of materials on a typical size range from ~1 nm up to ~a few 100 nm, the information obtained being statistically averaged over a sample whose volume is ~1 cm³. This very rich technique enables a full structural characterization of a given object of nanometric dimensions (radius of gyration, shape, volume or mass, fractal dimension, specific area, ...) through the determination of the form factor, as well as a determination of the way objects are organized within a continuous medium, and therefore of the interactions between them, through the determination of the structure factor. The specific properties of neutrons (the possibility of tuning the scattering intensity by isotopic substitution, sensitivity to magnetism, negligible absorption, low energy of the incident neutrons) make the technique particularly interesting in the fields of soft matter, biophysics, magnetic materials and metallurgy. In particular, the contrast variation methods allow one to extract information that cannot be obtained by any other experimental technique. This course is divided in two parts. The first is devoted to the description of the principles of SANS: basics (formalism, coherent versus incoherent scattering, the notion of an elementary scatterer), form factor analysis (I(q→0), Guinier regime, intermediate regime, Porod regime, polydisperse systems), structure factor analysis (second virial coefficient, integral equations, characterization of aggregates), and contrast variation methods (how to create contrast in a homogeneous system, matching in ternary systems, extrapolation to zero concentration, Zero Average Contrast). It is illustrated by some representative examples. The second describes the experimental aspects of SANS to guide the user in future experiments: description of a SANS spectrometer, resolution of the spectrometer, optimization of ...
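To make the "Guinier regime" item above concrete, here is a minimal numerical sketch (not part of the course abstract; the sphere radius and q-grid are arbitrary illustrative choices) comparing the exact form factor of a homogeneous sphere with the Guinier approximation I(q) ≈ I(0) exp(-q²Rg²/3):

    import numpy as np

    # Guinier-regime illustration for a homogeneous sphere (assumed parameters).
    R = 50.0                          # sphere radius in angstrom
    Rg = np.sqrt(3.0 / 5.0) * R       # radius of gyration of a homogeneous sphere
    q = np.linspace(1e-3, 0.05, 200)  # scattering vector in 1/angstrom

    x = q * R
    P = (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2   # exact form factor, P(0) = 1
    G = np.exp(-(q * Rg) ** 2 / 3.0)                      # Guinier approximation

    mask = q * Rg < 1.0               # the Guinier regime, roughly q*Rg < 1
    print(f"max |P - Guinier| for q*Rg < 1: {np.max(np.abs(P - G)[mask]):.3f}")
    # the two curves agree closely in this range and diverge at larger q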
Neutron resonance radiography: Report of a workshop, Los Alamos, NM, July 27-29, 1987
Neutron resonance radiography is a new technique with great potential for non-destructive analysis and testing. This technique has been under research and development in a number of major research laboratories for some time. Unlike thermal neutron radiography, which is primarily oriented towards imaging hydrogen and a number of other highly neutron-absorptive materials without necessarily distinguishing between them, neutron resonance radiography has the capability of uniquely identifying many kinds of chemical elements and their individual isotopes. It also has the potential for temperature imaging in materials containing heavy elements and for certain dynamic features such as stroboscopic imaging. Although neutron resonance radiography has not yet been taken up in a systematic way for technological applications, significant development of ideas and instrumentation at the research level has blossomed. There have also been major developments in the availability of powerful pulsed-neutron sources. In light of these developments, the Los Alamos Neutron Scattering Center sponsored a workshop with the general aims of reviewing scientific and technical progress, discussing and highlighting future developments, and stimulating interest in technological exploitation of the methods. In addition to the techniques and instrumentation required for the field, the applications of neutron resonance radiography in some of the following industrial and manufacturing areas were discussed: nuclear fuel assay; nuclear safeguards in general; aerospace development (aeroengine blade temperature, stroboscopic techniques); diagnostics; non-nuclear industry (especially metallurgy); temperature imaging; use of mobile pulsed-neutron sources; and practical use of major pulsed-neutron facilities.

Inelastic scattering of neutrons
Sal'nikov, O.A.
The paper reviews the main problems concerning the mechanism of the inelastic scattering of neutrons by nuclei, concentrating on the different models used to calculate the angular distributions. In the region of overlapping levels, both the compound-nucleus mechanism and the preequilibrium Griffin (exciton) model are discussed, and their contribution relative to that of a direct mechanism is considered. The parametrization of the level density and of the nuclear moment of inertia is also discussed. The excitation functions of discrete levels are also presented, and the importance of elucidating their fine structure (for practical calculations, such as for shielding) is pointed out.

Electrical Engineering in the Los Alamos Neutron Science Center Accelerator
Silva, Michael James (Los Alamos National Laboratory, Los Alamos, NM, United States)
The field of electrical engineering plays a significant role in particle accelerator design and operations. Los Alamos National Laboratory's LANSCE facility utilizes the electrical energy concepts of power distribution, plasma generation, radio-frequency energy, electrostatic acceleration, and signals and diagnostics. The culmination of these fields produces a machine of incredible potential with uses such as isotope production, neutron spallation, neutron imaging and particle analysis. The key isotope produced in the LANSCE isotope production facility is strontium-82, which is used for medical applications such as cancer treatment and positron emission tomography (PET) scans. Neutron spallation is one of very few methods of producing neutrons for scientific research, the main alternatives being nuclear reactors and the natural decay of transuranic elements. Accelerators produce neutrons by driving charged particles into neutron-dense elements such as tungsten, imparting kinetic energy to neutral particles; this has the benefit of producing a large number of neutrons while minimizing the waste generated. Using the accelerator, scientists can study how various particles behave and interact with matter, and so better understand the natural laws of physics and the universe around us.
Single Crystal Diffuse Neutron Scattering
Welberry, Richard
Diffuse neutron scattering has become a valuable tool for investigating local structure in materials ranging from organic molecular crystals containing only light atoms to piezo-ceramics that frequently contain heavy elements. Although neutron sources will never be able to compete with X-rays in terms of the available flux, the special properties of neutrons, viz. the ability to explore inelastic scattering events, the fact that scattering lengths do not vary systematically with atomic number, and their ability to scatter from magnetic moments, provide strong motivation for developing neutron diffuse scattering methods. In this paper, we compare three different instruments that we have used to collect neutron diffuse scattering data. Two of these are on a spallation source and one on a reactor source.

Scheduling at the Los Alamos Neutron Science Center (LANSCE)
Gallegos, F.R.
The centerpieces of the Los Alamos Neutron Science Center (LANSCE) are a half-mile-long 800-MeV proton linear accelerator and a proton storage ring. The accelerator, storage ring, and target stations provide the protons and spallation neutrons that are used in the numerous basic research and applications experimental programs supported by the US Department of Energy. Experimental users, facility maintenance personnel, and operations personnel must work together to achieve the most program benefit within defined budget and resource constraints. In order to satisfy the experimental user programs, operations must provide reliable, high-quality beam delivery. Effective and efficient scheduling is a critical component in achieving this goal. This paper details how operations scheduling is presently executed at the LANSCE accelerator facility.
Calculations of neutron spectra after neutron-neutron scattering
Crawford, B.E.; Stephenson, S.L. (Gettysburg College, Gettysburg, PA, United States); Howell, C.R. (Duke University and Triangle Universities Nuclear Laboratory, Durham, NC, United States); Mitchell, G.E. (North Carolina State University, Raleigh, NC, United States); Tornow, W. (Duke University and Triangle Universities Nuclear Laboratory, Durham, NC, United States); Furman, W.I.; Lychagin, E.V.; Muzichka, A.Yu.; Nekhaev, G.V.; Strelkov, A.V.; Sharapov, E.I.; Shvetsov, V.N. (Joint Institute for Nuclear Research, Dubna, Russian Federation)
A direct neutron-neutron scattering length (a_nn) measurement with a goal of 3% accuracy (0.5 fm) is under preparation at the aperiodic pulsed reactor YAGUAR. A direct measurement of a_nn will not only help resolve conflicting results for a_nn obtained by indirect means, but also, in comparison with the proton-proton scattering length a_pp, shed light on the charge symmetry of the nuclear force. We discuss in detail the analysis of the nn-scattering data in terms of a simple analytical expression. We also discuss calibration measurements using the time-of-flight spectra of neutrons scattered on He and Ar gases and the neutron activation technique. In particular, we calculate the neutron velocity and time-of-flight spectra after scattering neutrons on neutrons and after scattering neutrons on He and Ar atoms for the proposed experimental geometry, using a realistic neutron flux spectrum (Maxwellian plus epithermal tail). The shape of the neutron spectrum after scattering is appreciably different from the initial spectrum, due to collisions between thermal-thermal and thermal-epithermal neutrons. At the same time, the integral over the Maxwellian part of the realistic scattering spectrum differs by only about 6 per cent from that of a pure Maxwellian nn-scattering spectrum.
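For readers unfamiliar with the "Maxwellian plus epithermal tail" flux shape mentioned above, the following sketch (illustrative only; the temperature and the weight of the epithermal component are assumed, not taken from the paper) evaluates such a spectrum on a coarse energy grid:

    import numpy as np

    # Illustrative thermal flux shape: Maxwellian at energy kT plus a 1/E
    # epithermal tail (parameters assumed for illustration only).
    kT = 0.0253          # thermal energy in eV (room temperature)
    epi_weight = 0.1     # relative weight of the epithermal component (assumed)

    def flux(E):
        maxwellian = (E / kT**2) * np.exp(-E / kT)        # normalized Maxwellian flux
        tail = np.where(E > 5 * kT, epi_weight / E, 0.0)  # 1/E tail, crudely cut off
        return maxwellian + tail

    for E in np.logspace(-3, 3, 7):                       # 1 meV to 1 keV
        print(f"E = {E:9.3e} eV   phi ~ {flux(E):9.3e} (arb. units)")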
Neutron scattering instrumentation for biology at spallation neutron sources
Pynn, R. (Los Alamos National Laboratory, NM, United States)
Conventional wisdom holds that since biological entities are large, they must be studied with cold neutrons, a domain in which reactor sources of neutrons are often supposed to be pre-eminent. In fact, the current generation of pulsed spallation neutron sources, such as LANSCE at Los Alamos and ISIS in the United Kingdom, has demonstrated a capability for small-angle neutron scattering (SANS) - a typical cold-neutron application - that was not anticipated five years ago. Although no one has yet built a Laue diffractometer at a pulsed spallation source, calculations show that such an instrument would provide an exceptional capability for protein crystallography at one of the existing high-power spallation sources. Even more exciting is the prospect of installing such spectrometers either at a next-generation, short-pulse spallation source or at a long-pulse spallation source. A recent Los Alamos study has shown that a one-megawatt, short-pulse source, which is an order of magnitude more powerful than LANSCE, could be built with today's technology. In Europe, a preconceptual design study for a five-megawatt source is under way. Although such short-pulse sources are likely to be the wave of the future, they may not be necessary for some applications - such as Laue diffraction - which can be performed very well at a long-pulse spallation source. Recently, it has been argued by Mezei that a facility that combines a short-pulse spallation source similar to LANSCE with a one-megawatt, long-pulse spallation source would provide a cost-effective solution to the global shortage of neutrons for research. The basis for this assertion, as well as the performance of some existing neutron spectrometers at short-pulse sources, will be examined in this presentation.

Applications of thermal neutron scattering
Kostorz, G.
Although in the past neutrons have been used quite frequently in the study of condensed matter, a more recent development has led to applications of thermal neutron scattering in the investigation of practical rather than purely academic problems. Physicists, chemists, materials scientists, biologists, and others have recognized and demonstrated that neutron scattering techniques can yield supplementary information which, in many cases, could not be obtained with other methods. The paper illustrates the use of neutron scattering in these areas of applied research. No attempt is made to present all the aspects of neutron scattering, which can be found in textbooks. From the vast amount of experimental data, only a few examples are presented for the study of structure and atomic arrangement, "extended" structure, and dynamic phenomena in substances of current interest in applied research.

Neutron Tomography at the Los Alamos Neutron Science Center
Myers, William Riley (Los Alamos National Laboratory, Los Alamos, NM, United States)
Neutron imaging is an incredibly powerful tool for non-destructive sample characterization and materials science. Neutron tomography is one technique that results in a three-dimensional model of the sample, representing the interaction of the neutrons with the sample. This relies both on reliable data acquisition and on image processing after acquisition. Over the course of the project, the focus has changed from the former to the latter, culminating in a large-scale reconstruction of a meter-long fossilized skull. The full reconstruction is not yet complete, though tools have been developed to improve the speed and accuracy of the reconstruction. This project helps to improve the capabilities of LANSCE and LANL with regard to imaging large or unwieldy objects.

Huang diffuse scattering of neutrons
Burkel, E.; Guerard, B. v.; Metzger, H.; Peisl, J.
Huang diffuse neutron scattering was measured for the first time on niobium with interstitially dissolved deuterium, as well as on MgO after neutron irradiation and on 7LiF after gamma irradiation. With Huang diffuse scattering the strength and symmetry of the distortion field around lattice defects can be determined. Our results clearly demonstrate that this method is feasible with neutrons. The present results are compared with X-ray experiments and the advantages of using neutrons are discussed in some detail.
Commercial applications of neutron scattering
Hutchings, M.T.
The fact that industry is now willing to pay the full commercial cost for certain neutron scattering experiments aimed at solving its urgent materials-related problems is a true testimony to the usefulness of neutrons as microscopic probes. This paper gives examples of such use of three techniques, drawn mainly from our experience at AEA Technology Harwell Laboratory: diffraction to measure residual stress, small-angle neutron scattering to examine hardening precipitates produced in ferritic steels by irradiation, and reflectivity to study amorphous diamond layers deposited on silicon. In most cases it is the penetrative power of the neutron which proves to be its best asset for commercial industrial applications.

Neutron scattering and physisorption
Marlow, I.; Thomas, R.K.; Trewern, T.D.
Neutron scattering experiments on methane and ammonia adsorbed on a graphitized carbon black are described. Diffraction from adsorbed deuterated methane shows that, at a coverage of 0.7, it forms an epitaxial layer with a √3×√3 structure. Between 50 and 60 K it undergoes a phase transition from a two-dimensional solid to a liquid (bulk melting point = 89.7 K). Similar results are obtained for deuterated methane on a sample of graphon intercalated with potassium. From the effect of adsorbed methane on the intensities of the 001 peaks of both substrates, the carbon atom of the methane is estimated to be 3.3 ± 0.2 Å from the surface. Ammonia-d3 on graphon behaves quite differently from methane. It follows a type III isotherm and at low temperatures desorbs from the surface to form bulk ammonia. This has anomalous melting properties which are shown to be related to adsorption isobars for the system. The detailed interpretation of the results emphasizes the close link between adsorption and heterogeneous nucleation. Quasielastic experiments on the ammonia-graphon system show that the adsorbed ammonia undergoes translational diffusion on the surface which is much faster than in the bulk. This is attributed to the breaking up of the hydrogen-bonded network normally present in t...

Bibliography for thermal neutron scattering
Sakamoto, M.; Chihara, J.; Nakahara, Y.; Kadotani, H.; Sekiya, T.
It contains bibliographical references to measurements, calculations, reviews and basic studies on thermal neutron scattering and the dynamical properties of condensed matter. About 2,700 documents up to the end of 1975 are covered.
Neutron scattering studies in the actinide region
Kegel, G.H.R.; Egan, J.J.
This report discusses the following topics: prompt fission neutron energy spectra for 235U and 239Pu; two-parameter measurement of nuclear lifetimes; a "black" neutron detector; data reduction techniques for neutron scattering experiments; inelastic neutron scattering studies in 197Au; elastic and inelastic scattering studies in 239Pu; and neutron-induced defects in silicon dioxide MOS structures.

The following topics are dealt with: the thermal triple-axis spectrometer PUMA, the high-resolution powder diffractometer SPODI, the hot single-crystal diffractometer HEiDi for structure analysis with neutrons, the backscattering spectrometer SPHERES, the neutron polarization analyzer DNS, the neutron spin-echo spectrometer J-NSE, the small-angle neutron diffractometers KWS-1/-2, the very-small-angle neutron diffractometer with focusing mirror KWS-3, the resonance spin-echo spectrometer RESEDA, the reflectometer TREFF, and the time-of-flight spectrometer TOFTOF.

Neutron scattering science in Australia
Knott, Robert (Australian Nuclear Science and Technology Organisation, Menai, NSW, Australia)
Neutron scattering science in Australia is making an impact on a number of fields in the scientific and industrial research communities. The unique properties of the neutron are being used to investigate problems in chemistry, materials science, physics, engineering and biology. The reactor HIFAR at the Australian Nuclear Science and Technology Organisation research laboratories is the only neutron source in Australia suitable for neutron scattering science. A suite of instruments provides a wide range of opportunities for the neutron scattering community that extends throughout universities, government and industrial research laboratories. Plans are in progress to replace the present research reactor with a modern multi-purpose research reactor to offer the most advanced neutron scattering facilities. The experimental and analysis equipment associated with a modern research reactor will permit the establishment of a national centre for world-class neutron science research focussed on the structure and functioning of materials, industrial irradiations and analyses in support of the Australian manufacturing, minerals, petrochemical, pharmaceutical and information science industries.

New techniques in neutron scattering
Hayter, J.B.
New neutron sources being planned, such as the Advanced Neutron Source (ANS) or the European Spallation Source (ESS), will provide an order-of-magnitude flux increase over what is available today, but neutron scattering will still remain a signal-limited technique. At the same time, the development of new materials, such as polymer and ceramic composites or a variety of complex fluids, will increasingly require neutron-based research. This paper will discuss some of the new techniques which will allow us to make better use of the available neutrons, either through improved instrumentation or through sample manipulation. Discussion will center primarily on unpolarized neutron techniques, since polarized neutrons will be the subject of the next paper.

Neutron scattering from fractals
Kjems, Jørgen; Freltoft, T.; Richter, D.
The scattering formalism for fractal structures is presented. Volume fractals are exemplified by silica particle clusters formed either from colloidal suspensions or by flame hydrolysis. The determination of the fractional dimensionality through scattering experiments is reviewed, and recent small ...
Phase transitions and neutron scattering
Shirane, G.
A review is given of recent advances in neutron scattering studies in solid-state physics. I have selected the study of a structural phase transition as the best example to demonstrate the power of neutron scattering techniques. Since energy analysis is relatively easy, the dynamical aspects of a transition can be elucidated by the neutron probe. I discuss in some detail current experiments on the 100 K transition in SrTiO3, the crystal which has been the paradigm of neutron studies of phase transitions for many years. This new experiment attempts to clarify the relation between the neutron central peak, observed in energy scans, and the two length scales observed in recent X-ray diffraction studies, where only scans in momentum space are possible.

Some Notes on Neutron Up-Scattering and the Doppler-Broadening of High-Z Scattering Resonances
Parsons, Donald Kent (Los Alamos National Laboratory, Los Alamos, NM, United States)
When neutrons are scattered by target nuclei at elevated temperatures, it is entirely possible that the neutron will actually gain energy (i.e., up-scatter) from the interaction. This phenomenon is in addition to the more usual case of the neutron losing energy (i.e., down-scatter). Furthermore, the motion of the target nuclei can also cause extended neutron down-scattering, i.e., the neutrons can and do scatter to energies lower than predicted by the simple asymptotic models. In recent years, more attention has been given to temperature-dependent scattering cross sections for materials in neutron-multiplying systems. This has led to the inclusion of neutron up-scatter in deterministic codes like Partisn and to free-gas scattering models for material temperature effects in Monte Carlo codes like MCNP and cross-section processing codes like NJOY. The free-gas scattering models have the effect of Doppler broadening the scattering cross-section output spectra in energy and angle. The current state of Doppler-broadening numerical techniques used at Los Alamos for scattering resonances will be reviewed, and suggestions will be made for further developments. The focus will be on the free-gas scattering models currently in use and the development of new models to include high-Z resonance scattering effects. These models change the neutron up-scattering behavior.
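The up-scattering described in the Parsons abstract above is easy to reproduce with a toy free-gas Monte Carlo model. The sketch below is purely illustrative (constant cross section, no relative-speed sampling bias, no resonance effects, assumed mass ratio and energies) and is not the MCNP/NJOY treatment referred to in that abstract:

    import numpy as np

    # Toy free-gas elastic scattering: count how often a neutron gains energy
    # when the target nuclei are in thermal motion (illustration only).
    rng = np.random.default_rng(0)
    A = 238.0      # target-to-neutron mass ratio (heavy nucleus, assumed)
    kT = 0.0253    # target temperature in eV (room temperature)
    E0 = 0.1       # incident neutron energy in eV (assumed)
    n = 100_000

    # Work in units where the neutron mass is 1, so E = 0.5 * |v|^2.
    v_n = np.zeros((n, 3))
    v_n[:, 0] = np.sqrt(2.0 * E0)
    v_t = rng.normal(0.0, np.sqrt(kT / A), size=(n, 3))  # Maxwell-Boltzmann targets

    v_cm = (v_n + A * v_t) / (1.0 + A)       # centre-of-mass velocity
    g = np.linalg.norm(v_n - v_t, axis=1)    # relative speed, conserved in elastic scattering

    # Isotropic re-orientation of the relative velocity in the CM frame.
    mu = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - mu**2)
    g_new = g[:, None] * np.column_stack((s * np.cos(phi), s * np.sin(phi), mu))

    E1 = 0.5 * np.sum((v_cm + (A / (1.0 + A)) * g_new) ** 2, axis=1)
    print(f"fraction of collisions that up-scatter a {E0} eV neutron: {np.mean(E1 > E0):.3f}")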
Advances in neutron scattering spectroscopy
White, J.W.
Some aspects of the application of neutron scattering to problems in polymer science, surface chemistry and adsorption phenomena, as well as molecular biology, are reviewed. In all these areas, very significant work has been carried out using the medium-flux reactors at Harwell, Juelich and Risoe, even without the use of advanced multidetector techniques or of a neutron cold source. A general tendency can also be distinguished in that, for each of these new fields, a distinct preference for colder neutrons rather than thermal neutron beams can be seen.

Birk, Jonas Okkels
Neutron scattering is an important experimental technique in, among others, solid-state physics, biophysics, and engineering. This year construction of the European Spallation Source (ESS) was commenced in Lund, Sweden. The facility will use a new long-pulse source principle to obtain higher potential performance than any existing facility; however, in order to use this pulse structure optimally, many existing neutron scattering instruments will need to be redesigned. This defense will concentrate on the design and optimization of the inverse time-of-flight cold neutron spectrometer CAMEA ... simulations and prototyping to optimize the instrument and ensure that it will deliver the predicted performance when constructed. During the design a new prismatic analyser concept that can be of interest to many other neutron spectrometers was developed. The design work was compiled into an instrument ...

Material science and neutron scattering
Neutron scattering experiments complement and extend the condensed-matter studies made with X-rays and gamma rays. They show a continuous evolution of the instrumentation, methods and experimental techniques to improve the quality of the results. This is all the more important as neutron sources are weaker than photon and electron sources. Progress in this research domain is due, for the most part, to the discovery and development of materials for the various components of the measurement devices.

Theoretical challenges in neutron scattering
Lovesey, S.W.
Topics in the interpretation of neutron scattering experiments on paramagnets and quantum fluids are used to illustrate the power of the technique in condensed matter research, and to record some fundamental shortcomings in the available theory of many-particle systems.

Sakamoto, Masanobu; Chihara, Junzo; Gotoh, Yorio; Kadotani, Hiroyuki; Sekiya, Tamotsu
Bibliographic references are given for measurements, calculations, reviews and basic studies of thermal neutron scattering and the dynamical properties of condensed matter. This is the sixth edition, covering 3,326 articles collected up to 1978. As this edition is the final issue of the present bibliography series, a forthcoming edition will be published in a new form.

Recent developments in magnetic neutron scattering studies
Endoh, Yasuo
Neutron scattering results contain many new concepts in modern magnetism. We review here the most recent magnetic neutron scattering studies of the so-called '214' copper oxide lamellar materials, because a number of important developments in magnetism are condensed in this novel subject. We show that neutron scattering has played a crucial role in our understanding of modern magnetism.

Operational status and future plans for the Los Alamos Neutron Science Center
Jones, Kevin W.; Schoenberg, Kurt F.
The Los Alamos Neutron Science Center (LANSCE) continues to be a signature experimental science facility at Los Alamos National Laboratory (LANL). The 800-MeV linear proton accelerator provides multiplexed beams to five unique target stations to produce medical radioisotopes, ultra-cold neutrons, and thermal and high-energy neutrons for materials and nuclear science, and to conduct proton radiography of dynamic events. Recent operating experience will be reviewed, and the role of an enhanced LANSCE facility in LANL's new signature facility initiative, Matter and Radiation in Extremes (MaRIE), will be discussed.

Interface detection by neutron scattering
De Monchy, A.R.; Kok, C.A.; Dorrepaal, J.
A method and apparatus are described for detecting an interface between materials of different hydrogen content present in a metal vessel or pipe, e.g. one made of steel. Steel walls of columns, reactors, pipelines, etc. can be monitored. The method is very suitable for the detection of liquid water or hydrocarbons present in gas pipelines and also for the detection of a liquid hydrocarbon in a vessel or column. A series of measurements of the hydrogen density of the contents of a vessel or pipe is made using at least one californium-252 neutron source located near the outer side of the pipe. Neutrons are emitted and are scattered by the contents of the pipe. At least one neutron detector is located near the outer side of the metal wall. The detectors have a higher sensitivity for scattered neutrons (from the light hydrogen nuclei present in water or hydrocarbons). A source of 0.1-1 micrograms produces enough neutrons for most technical applications, so the handling is relatively safe, although shielding is advocated. The detectors contain helium-3 at a pressure of about 10 bar. Current pulses from the detector are counted.
Detection of explosives by neutron scattering
Brooks, F.D.; Buffler, A.; Allie, M.S.; Nchodu, M.R.; Bharuth-Ram, K.
For the non-intrusive detection of hidden explosives or other contraband such as narcotics, a fast neutron scattering analysis (FNSA) technique is proposed. The experimental arrangement uses a collimated, pulsed beam of neutrons directed at the sample. Scattered neutrons are detected by liquid scintillation counters at different scattering angles. A scattering signature is derived from two-parameter data, counts versus pulse height and time-of-flight, measured for each element (H, C, N or O) at each of two scattering angles and two neutron energies. The elemental signatures are very distinctive and constitute a good response matrix for unfolding elemental components from the scattering signatures measured for different compounds.

Summary of neutron scattering lengths
Koester, L.
All available neutron-nucleus scattering lengths are collected, together with their error bars, in a uniform way. Bound scattering lengths are given for the elements, the isotopes, and the various spin states. They are discussed in terms of their use as basic parameters for many investigations in the fields of nuclear and solid-state physics. The data bank is also available on magnetic tape. Recommended values and a map of these data allow an uncomplicated use of these quantities.

Inelastic neutron scattering from clusters
Gudel, H.U.
Magnetic excitations in clusters of paramagnetic ions have non-vanishing cross-sections for inelastic neutron scattering (INS). Exchange splittings can be determined, the temperature dependence of exchange can be studied, intra- and intercluster effects can be separated, and magnetic form factors can be determined. INS provides more direct access to the molecular properties than bulk techniques. Its application is restricted to complexes with no or few (< 10%) hydrogen atoms.

Polymer research by neutron scattering
Richter, D.
Polymer physics aims at an understanding of the macroscopic behavior of polymer systems on the basis of their molecular structure and dynamics. For this purpose neutrons serve as a unique probe, allowing a simultaneous investigation of polymer structure and dynamics on a molecular scale. Furthermore, hydrogen-deuterium exchange facilitates molecular labeling and offers the possibility of observing selected chains or chain parts in dense systems. Neutron small-angle scattering reveals information on the conformation and possible aggregation of polymer chains. Data on linear and star-like molecules are shown as examples. High-resolution neutron spin-echo spectroscopy observes the molecular dynamics of long-chain molecules. Results on the large-scale motion of chains in polymer melts are presented. Finally, experiments on chain relaxation close to the glass transition are displayed. Three distinctly different relaxation processes are revealed.
Summary of neutron scattering lengths
Koester, L.
All available neutron-nuclei scattering lengths are collected together with their error bars in a uniform way. Bound scattering lengths are given for the elements, the isotopes, and the various spin states. They are discussed in the sense of their use as basic parameters for many investigations in the field of nuclear and solid state physics. The data bank is also available on magnetic tape. Recommended values and a map of these data serve for an uncomplicated use of these quantities. (orig.)

Inelastic neutron scattering from clusters
Gudel, H.U.
Magnetic excitations in clusters of paramagnetic ions have non-vanishing cross-sections for inelastic neutron scattering (INS). Exchange splittings can be determined, the temperature dependence of exchange can be studied, intra- and intercluster effects can be separated and magnetic form factors determined. INS provides a more direct access to the molecular properties than bulk techniques. Its application is restricted to complexes with no or few (<10%) hydrogen atoms.

Polymer research by neutron scattering
Richter, D.
Polymer physics aims at an understanding of the macroscopic behavior of polymer systems on the basis of their molecular structure and dynamics. For this purpose neutrons serve as a unique probe, allowing a simultaneous investigation of polymer structure and dynamics on a molecular scale. Furthermore, hydrogen-deuterium exchange facilitates molecular labeling and offers the possibility to observe selected chains or chain parts in dense systems. Neutron small angle scattering reveals information on the conformation and possible aggregation of polymer chains. Data on linear and star-like molecules are shown as examples. High resolution neutron spin-echo spectroscopy observes the molecular dynamics of long chain molecules. Results on the large scale motion of chains in polymer melts are presented. Finally, experiments on chain relaxation close to the glass transition are displayed. Three distinctly different relaxation processes are revealed. (author)

Neutron scattering and models: Silver
Smith, A.B.
Differential neutron elastic-scattering cross sections of elemental silver were measured from 1.5 to 10 MeV at ∼100 keV intervals up to 3 MeV, at ∼200 keV intervals from 3 to 4 MeV, and at ∼500 keV intervals above 4 MeV. At ≤4 MeV the angular range of the measurements was ∼20° to 160°, with 10 measured values below 3 MeV and 20 from 3 to 4 MeV at each incident energy. Above 4 MeV, ≥40 scattering angles were used, distributed between ∼17° and 160°. All of the measured elastic distributions included some contributions due to inelastic scattering. Below 4 MeV the measurements determined cross sections for ten inelastically-scattered neutron groups corresponding to observed excitations of 328 ± 13, 419 ± 50, 748 ± 25, 908 ± 26, 115 ± 38, 1286 ± 25, 1507 ± 20, 1632 ± 30, 1835 ± 20 and 1944 ± 26 keV. All of these inelastic groups probably were composites of contributions from the two isotopes ¹⁰⁷Ag and ¹⁰⁹Ag. The experimental results were interpreted in terms of the spherical optical model and of rotational and vibrational coupled-channels models, and physical implications are discussed. In particular, the neutron-scattering results are consistent with a ground-state rotational band with a quadrupole deformation β₂ = 0.20 ± ∼10% for both of the naturally-occurring silver isotopes.

Neutron scattering lengths of ³He
Alfimenkov, V.P.; Akopian, G.G.; Wierzbicki, J.; Govorov, A.M.; Pikelner, L.B.; Sharapov, E.I.
The total neutron scattering cross-section of ³He has been measured in the neutron energy range from 20 meV to 2 eV. Together with the known value of the coherent scattering amplitude, it leads to the two sets of n-³He scattering lengths.
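The two records above work in terms of bound, spin-state scattering lengths. As a reminder of the standard bookkeeping (a sketch of textbook relations, not of either paper's analysis): for a nucleus of spin I with spin-state scattering lengths b+ and b-, the coherent length is the statistically weighted mean and the incoherent cross section comes from the spread about it. The hydrogen numbers used in the example are the commonly quoted ones.

    # Textbook relations (illustrative, not taken from the papers above):
    # combine spin-state bound scattering lengths b+ / b- of a spin-I nucleus
    # into coherent and incoherent cross sections.  1 barn = 100 fm^2.
    import math

    def cross_sections(I, b_plus_fm, b_minus_fm):
        w_plus = (I + 1) / (2 * I + 1)
        w_minus = I / (2 * I + 1)
        b_coh = w_plus * b_plus_fm + w_minus * b_minus_fm            # fm
        b_sq_mean = w_plus * b_plus_fm**2 + w_minus * b_minus_fm**2  # fm^2
        sigma_coh = 4 * math.pi * b_coh**2 / 100.0                   # barn
        sigma_inc = 4 * math.pi * (b_sq_mean - b_coh**2) / 100.0     # barn
        return b_coh, sigma_coh, sigma_inc

    # Example with commonly quoted values for 1H (I = 1/2):
    # triplet b+ ~ 10.8 fm, singlet b- ~ -47.4 fm.
    print(cross_sections(0.5, 10.82, -47.42))
    # -> roughly (-3.7 fm, 1.8 barn coherent, 80 barn incoherent)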
A workshop on enhanced national capability for neutron scattering
Hurd, Alan J; Rhyne, James J; Lewis, Paul S (all Los Alamos National Laboratory)
This two-day workshop will engage the international neutron scattering community to vet and improve the Lujan Center Strategic Plan 2007-2013 (SP07). Sponsored by the LANL SC Program Office and the University of California, the workshop will be hosted by LANSCE Professor Sunny Sinha (UCSD). Endorsement by the Spallation Neutron Source will be requested. The discussion will focus on the role that the Lujan Center will play in the national neutron scattering landscape assuming full utilization of beamlines, a refurbished LANSCE, and a 1.4-MW SNS. Because the Lujan Strategic Plan is intended to set the stage for the Signature Facility era at LANSCE, there will be some discussion of the long-pulse spallation source at Los Alamos. Breakout groups will cover several new instrument concepts, upgrades to present instruments, expanded sample environment capabilities, and a look to the future. The workshop is in keeping with a request by BES to update the Lujan strategic plan in coordination with the SNS and the broader neutron community. Workshop invitees will be drawn from the LANSCE User Group and a broad cross section of the US, European, and Pacific Rim neutron scattering research communities.

Instruments and accessories for neutron scattering research
Ishii, Yoshinobu; Morii, Yukio
This report describes neutron scattering instruments and accessories installed by four neutron scattering research groups at the ASRC (Advanced Science Research Center) of the JAERI and the recent topics of neutron scattering research using these instruments. The specifications of nine instruments (HRPD, BIX-I, TAS-1 and PNO in the reactor hall; RESA, BIX-II, TAS-2, LTAS and SANS-J in the guide hall of the JRR-3M) are summarized in this booklet. (author)

Current status of neutron scattering in Thailand
Ampornrat, Pantip
Thailand's neutron spectrometer was installed soon after the startup of the reactor. The neutron scattering experiments have been done continuously, although there were some problems involving the neutron intensity and instruments. A development program has been planned for better experimental results. This paper reports the past and present status of neutron scattering equipment and experiments in Thailand. In addition, installation of a HRPD (High Resolution Powder Diffraction) system is included within the scope of the Ongkharak Nuclear Research Center project. (author)

Beghian, L.E.; Kegel, G.H.R.
During the report period we have investigated the following areas: neutron elastic and inelastic scattering measurements on ¹⁴N, ¹⁸¹Ta, ²³²Th, ²³⁸U and ²³⁹Pu; prompt fission spectra for ²³²Th, ²³⁵U, ²³⁸U and ²³⁹Pu; theoretical studies of neutron scattering; neutron filters; new detector systems; and upgrading of the neutron target assembly, data acquisition system, and accelerator/beam-line apparatus.

Recent high-accuracy measurements of the ¹S₀ neutron-neutron scattering length
Howell, C.R.; Chen, Q.; Gonzalez Trotter, D.E.; Salinas, F.; Crowell, A.S.; Roper, C.D.; Tornow, W.; Walter, R.L.; Carman, T.S.; Hussein, A.; Gibbs, W.R.; Gibson, B.F.; Morris, C.; Obst, A.; Sterbenz, S.; Whitton, M.; Mertens, G.; Moore, C.F.; Whiteley, C.R.; Pasyuk, E.; Slaus, I.; Tang, H.; Zhou, Z.; Gloeckle, W.; Witala, H.
This paper reports two recent high-accuracy determinations of the ¹S₀ neutron-neutron scattering length, a_nn. One was done at the Los Alamos National Laboratory using the π⁻d capture reaction to produce two neutrons with low relative momentum. The neutron-deuteron (nd) breakup reaction was used in the other measurement, which was conducted at the Triangle Universities Nuclear Laboratory. The results from the two determinations were consistent with each other and with previous values obtained using the π⁻d capture reaction. The value obtained from the nd breakup measurements is a_nn = -18.7 ± 0.1 (statistical) ± 0.6 (systematic) fm, and the value from the π⁻d capture experiment is a_nn = -18.50 ± 0.05 ± 0.53 fm. The recommended value is a_nn = -18.5 ± 0.3 fm. (author)
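The abstract above does not spell out how the recommended value was formed; one simple way to combine the two determinations, adding statistical and systematic errors in quadrature and taking an inverse-variance weighted mean, is sketched below. It gives a result close to, but not identical with, the quoted recommendation, which may weight correlated systematics differently.

    # Minimal sketch (not the authors' procedure): inverse-variance weighted
    # combination of the two a_nn determinations quoted above.
    import math

    measurements = [
        (-18.70, 0.10, 0.60),   # nd breakup: value, statistical, systematic (fm)
        (-18.50, 0.05, 0.53),   # pi-d capture
    ]

    weights, weighted_sum = [], 0.0
    for value, stat, syst in measurements:
        sigma = math.hypot(stat, syst)      # add errors in quadrature
        w = 1.0 / sigma**2
        weights.append(w)
        weighted_sum += w * value

    mean = weighted_sum / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    print(f"a_nn ~ {mean:.2f} +/- {err:.2f} fm")   # ~ -18.6 +/- 0.4 fm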
Basic and Applied Research at the Los Alamos Neutron Science Center
Lisowski, P.W.
The Los Alamos Neutron Science Center, or LANSCE, is an accelerator-based national user facility for research in basic and applied science. At present LANSCE has two experimental areas primarily using neutrons generated by 800-MeV protons striking tungsten target systems. A third area uses the proton beam for radiography. This paper describes the three LANSCE experimental areas, gives highlights of the past operating period, and discusses plans for the future.

Lectures on neutron scattering techniques: 1
Carlile, C.J.
The lecture on the production of neutrons was presented at a Summer School on neutron scattering, Rome, 1986. A description is given of the production of neutrons by natural radioactive sources, fission, and particle accelerator sources. Modern neutron sources with high intensities are discussed, including the ISIS pulsed neutron source at the Rutherford Appleton Laboratory and the High Flux Reactor at the Institut Laue Langevin. (U.K.)

Refinements in the Los Alamos model of the prompt fission neutron spectrum
Madland, D.G., E-mail: [email protected]; Kahler, A.C.
This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982. They are also due to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.
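The Los Alamos (Madland-Nix) model itself is more elaborate than can be reproduced here. As a point of reference only, a widely used simple parameterization of a prompt fission neutron spectrum is the Watt form, chi(E) proportional to exp(-E/a) sinh(sqrt(bE)); the parameters below are commonly quoted approximate values for thermal fission of ²³⁵U and are purely illustrative, not part of the refined model described above.

    # Illustrative baseline only (not the Madland-Nix / Los Alamos model):
    # a Watt prompt fission neutron spectrum, normalized numerically.
    import numpy as np

    a, b = 0.988, 2.249                    # approximate 235U thermal-fission values (MeV, 1/MeV)
    E = np.linspace(1e-3, 20.0, 20000)     # outgoing neutron energy grid, MeV
    chi = np.exp(-E / a) * np.sinh(np.sqrt(b * E))
    chi /= np.trapz(chi, E)                # normalize to unit area

    mean_energy = np.trapz(E * chi, E)
    print(f"mean prompt-neutron energy ~ {mean_energy:.2f} MeV")   # ~2 MeV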
Application of neutron scattering in polymers
Han, C.C.
Full text: Neutron scattering offers many opportunities in science and technology. This is particularly true in the field of polymer sciences and materials, mainly because the scattering length scales (q⁻¹) and scattering contrast (scattering cross-sections) make neutrons a perfect tool for polymer studies. Several examples will be used to illustrate the importance of small angle neutron scattering and neutron reflection studies in polymer physics. These include the determination of phase diagrams, interaction parameters, and spinodal decomposition kinetics by SANS. In the dynamics area, examples will be given to illustrate the critical temperature shift and mixing of polymer blends under shear flow. Also, the confinement effect on the phase-separated structure of polymer blend films will be used to demonstrate the importance of the neutron reflectivity measurement.

American Conference on Neutron Scattering 2014
Dillen, J. Ardie (Materials Research Society, Warrendale, PA, United States)
Scientists from around the world converged in Knoxville, TN to share ideas, present technical information and contribute to the advancement of neutron scattering. Featuring over 400 oral/poster presentations, ACNS 2014 offered a strong program of plenary, invited and contributed talks and poster sessions covering topics in soft condensed matter, hard condensed matter, biology, chemistry, energy and engineering applications in neutron physics, confirming the great diversity of science that is enabled by neutron scattering.

Inelastic neutron scattering from superconducting rings
Agafonov, A.I.
For the first time the differential cross section for inelastic magnetic neutron scattering by superconducting rings is derived, taking account of the interaction of the neutron magnetic moment with the magnetic field generated by the superconducting current. Calculations of the scattering cross section are carried out for cold neutrons and thin-film rings of type-II superconductors with magnetic fields not exceeding the first critical field.

Neutron scattering and models: molybdenum
A comprehensive interpretation of the fast-neutron interaction with elemental and isotopic molybdenum at energies of ≤30 MeV is given. New experimental elemental-scattering information over the incident energy range 4.5 to 10 MeV is presented. Spherical, vibrational and dispersive models are deduced and discussed, including isospin, energy-dependent and mass effects. The vibrational models are consistent with the 'Lane potential'. The importance of dispersion effects is noted. Dichotomies that exist in the literature are removed. The models are vehicles for fundamental physical investigations and for the provision of data for applied purposes. A 'regional' molybdenum model is proposed. Finally, recommendations for future work are made.

Neutron scattering equipments in JAERI. Current status
Hamaguchi, Yoshikazu; Minakawa, Nobuaki
24 neutron scattering instruments are installed at the JRR-3M research reactor. Among them JAERI has 12 neutron scattering instruments: HRPD for high-resolution structural analysis, TAS-1 and TAS-2 for elastic and inelastic scattering and for magnetic scattering measurements with polarized neutrons, LTAS for elastic and inelastic scattering measurements in the low energy region and for neutron device development, PNO for topography and for very small angle scattering measurements in a small Q range, NRG for neutron radiography, RESA for internal strain measurements, SANS for molecular and semi-macroscopic magnetic structural analysis, BIX-2 and BIX-3 for biological structural analysis research, and PGA for prompt gamma-ray analysis research. The university groups have 12 neutron scattering instruments. About 10 years have passed since those instruments were installed at the time JRR-3M was completed. In order to match the old control systems to the progress of recent computer technologies and peripheral equipment, a number of instruments are being renewed. In the neutron guide hall of JRR-3M, the Ni mirror guide tube was replaced by a supermirror guide tube to increase the neutron flux; the intensity of the 2 Å flux was increased by a factor of about two. (J.P.N.)
German neutron scattering conference. Programme and abstracts
Brueckel, Thomas (ed.)
The German Neutron Scattering Conference 2012 - Deutsche Neutronenstreutagung DN 2012 - offers a forum for the presentation and critical discussion of recent results obtained with neutron scattering and complementary techniques. The meeting is organized on behalf of the German Committee for Research with Neutrons - Komitee Forschung mit Neutronen KFN - by the Juelich Centre for Neutron Science JCNS of Forschungszentrum Juelich GmbH. In between the large European and international neutron scattering conferences ECNS (2011 in Prague) and ICNS (2013 in Edinburgh), it offers the vibrant German and international neutron community an opportunity to debate topical issues in a stimulating atmosphere. Originating from 'BMBF Verbundtreffen' - meetings for projects funded by the German Federal Ministry of Education and Research - this conference series has a strong tradition of providing a forum for the discussion of collaborative research projects and future developments in the field of research with neutrons in general. Neutron scattering, by its very nature, is used as a powerful probe in many different disciplines and areas, from particle and condensed matter physics through to chemistry, biology, materials sciences, engineering sciences, right up to geology and cultural heritage; the German Neutron Scattering Conference thus provides a unique chance for exploring interdisciplinary research opportunities. It also serves as a showcase for recent method and instrument developments and to inform users of new advances at neutron facilities.

Neutron spin echo scattering angle measurement (SESAME)
Pynn, R.; Fitzsimmons, M.R.; Fritzsche, H.; Gierlings, M.; Major, J.; Jason, A.
We describe experiments in which the neutron spin echo technique is used to measure neutron scattering angles. We have implemented the technique, dubbed spin echo scattering angle measurement (SESAME), using thin films of Permalloy electrodeposited on silicon wafers as sources of the magnetic fields within which neutron spins precess. With 30-μm-thick films we resolve neutron scattering angles to about 0.02 deg. with neutrons of 4.66 Å wavelength. This allows us to probe correlation lengths up to 200 nm in an application to small angle neutron scattering. We also demonstrate that SESAME can be used to separate specular and diffuse neutron reflection from surfaces at grazing incidence. In both of these cases, SESAME can make measurements at higher neutron intensity than is available with conventional methods because the angular resolution achieved is independent of the divergence of the neutron beam. Finally, we discuss the conditions under which SESAME might be used to probe in-plane structure in thin films and show that the method has advantages for incident neutron angles close to the critical angle because multiple scattering is automatically accounted for.
Neutron scattering from quantum liquids
Cowley, R.A.
Recent neutron scattering measurements on the quantum liquids ⁴He and ³He are described. In the Bose superfluid there is a well-defined excitation for wave vectors less than 3.6 Å⁻¹. In the Fermi liquid, measurements are much more difficult because of the large absorption cross section, but measurements at the Institut Laue-Langevin have shown that there are no well-defined excitations at 0.63 K for wave vectors between 1.0 and 2.6 Å⁻¹. The difference between these results is due to the existence of particle-hole excitations in the Fermi liquid into which collective excitations can decay. Because of the simplicity of the excitations in ⁴He, it has become a testing ground for the effects of the interactions between the excitations. Measurements are described which show that while roton-roton interactions are attractive at small wave vectors, they are repulsive at larger wave vectors. The scattering at large momentum transfer in ⁴He has been measured, but its interpretation is still open to question.

La nouvelle vague in polarized neutron scattering
Mezei, F.
Polarized neutron research, like many other subjects in neutron scattering, developed in the footsteps of Cliff Shull. The classical polarized neutron technique he pioneered was generalized around 1970 to vectorial beam polarizations, and this opened up the way to a 'nouvelle vague' of neutron scattering experiments. In this paper I will first reexamine the old controversy on the question whether the nature of the neutron magnetic moment is that of a microscopic dipole or of an Amperian current loop. The problem is not only of historical interest, but also of relevance to modern applications. This will be followed by a review of the fundamentals of spin coherence effects in neutron beams and scattering, which are the basis of vectorial beam polarization work. As an example of practical importance, paramagnetic scattering will be discussed. The paper concludes with some examples of applications of the vector polarization techniques, such as the study of ferromagnetic domains by neutron beam depolarization and Neutron Spin Echo high resolution inelastic spectroscopy. The sample results presented demonstrate the new opportunities this novel approach opened up in neutron scattering research. (orig.)

Neutron scattering in Indonesia. Country report
Ikram, Abarrul (Neutron Scattering Laboratory, R and D Center for Materials Science and Technology, National Nuclear Energy Agency, Serpong, Indonesia)
Neutron scattering in Indonesia is still alienated due to some reasons and conditions which are discussed. The reactor and its latest operation mode are also described. The neutron beam facilities, which include one diffractometer for residual stress measurement, one diffractometer for single crystal structure determination and texture measurement, one high resolution powder diffractometer, one neutron radiography facility, one triple axis spectrometer, one small angle neutron scattering spectrometer and one high resolution small angle neutron scattering spectrometer, are presented briefly, together with improvements of neutron intensities at some spectrometers in connection with the setting of the main beam shutter position. Special attention is given to four instruments mostly related to this workshop. Their performance and the problems faced in the past 9 months are presented, as well as the future plan for refurbishment and development. (author)
Neutron scattering studies in the actinide region
During the report period the following areas were investigated: prompt fission neutron energy spectra measurements; neutron elastic and inelastic scattering from ²³⁹Pu; neutron scattering in ¹⁸¹Ta and ¹⁹⁷Au; response of a ²³⁵U fission chamber near reaction thresholds; a two-parameter data acquisition system; a 'black' neutron detector; investigation of neutron-induced defects in silicon dioxide; and multiple scattering corrections. Four Ph.D. dissertations and one M.S. thesis were completed during the report period. Publications consisted of three journal articles, four conference papers in proceedings, and eleven abstracts of presentations at scientific meetings. There are currently four Ph.D. and one M.S. candidates working on dissertations directly associated with the project. In addition, three other Ph.D. candidates are working on dissertations involving other aspects of neutron physics in this laboratory.

Scattering of fast neutrons from elemental molybdenum
Smith, A.B.; Guenther, P.T.
Differential broad-resolution neutron-scattering cross sections of elemental molybdenum were measured at 10 to 20 scattering angles distributed between 20 and 160 degrees and at incident-neutron energy intervals of approximately 50 to 200 keV from 1.5 to 4.0 MeV. Elastically-scattered neutrons were fully resolved from inelastic events. Lumped-level inelastic-neutron-scattering cross sections were determined corresponding to observed excitation energies of 789 ± 23, 195 ± 23, 1500 ± 34, 1617 ± 12, 1787, 1874, 1991, 2063 ± 24, 2296, 2569 and 2802 keV. An optical-statistical model was deduced from the measured elastic-scattering results. The experimental values were compared with the respective quantities given in ENDF/B-V.

Very High Energy Neutron Scattering from Hydrogen
Cowley, R.A.; Stock, C.; Bennington, S.M.; Taylor, J.; Gidopoulos, N.I.
The neutron scattering from hydrogen in polythene has been measured with the direct time-of-flight spectrometer MARI at the ISIS facility of the Rutherford Appleton Laboratory, with incident neutron energies between 0.5 eV and 600 eV. The results of experiments using the spectrometer VESUVIO have given intensities from hydrogen-containing materials that were about 60% of the intensity expected from hydrogen. Since VESUVIO is the only instrument in the world that routinely operates with incident neutron energies in the eV range, we have chosen to measure the scattering from hydrogen at high incident neutron energies with a different type of instrument. The MARI direct time-of-flight instrument was chosen for the experiment and we have studied the scattering for several different incident neutron energies. We have learnt how to subtract the gamma-ray background, how to calibrate the incident energy and how to convert the spectra to an energy plot. The intensity of the hydrogen scattering was independent of the scattering angle for scattering angles from about 5 degrees up to 70 degrees for at least 3 different incident neutron energies between 20 eV and 100 eV. When the data were put on an absolute scale, by measuring the scattering from 5 metal foils with known thicknesses under the same conditions, we found that the absolute intensity of the scattering from the hydrogen was in agreement with that expected to an accuracy of ±5.0% over a wide range of wave-vector transfers between 1 and 250 Å⁻¹. These measurements show that it is possible to measure neutron scattering with incident neutron energies up to at least 100 eV with a direct geometry time-of-flight spectrometer and that the results are in agreement with conventional scattering theory.
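The conversion of time-of-flight spectra to an energy scale mentioned above is the standard non-relativistic kinematic relation E = m(L/t)²/2. The sketch below illustrates that conversion with made-up flight-path and timing numbers; they are not MARI parameters.

    # Minimal sketch of the time-of-flight -> energy conversion used to put
    # direct-geometry spectra on an energy scale (illustrative numbers only).
    m_n = 1.674927e-27     # neutron mass, kg
    eV = 1.602177e-19      # joules per electron-volt

    def tof_to_energy_eV(flight_path_m, tof_s):
        v = flight_path_m / tof_s          # neutron speed, m/s
        return 0.5 * m_n * v**2 / eV

    # e.g. an assumed 10 m flight path and a 72 microsecond arrival time
    # correspond to roughly 100 eV neutrons.
    print(f"{tof_to_energy_eV(10.0, 72e-6):.1f} eV")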
Neutron scattering studies of solid electrolytes
Shapiro, S.M.
The role which neutron scattering can play in determining the nature of the disorder and the conducting mechanism in solid electrolytes is discussed. First, some of the general formalism for elastic and inelastic neutron scattering is reviewed, and the quantities which can be measured are pointed out. Then the application of neutron scattering to the study of three different problems is examined: the anion disorder in the fluorite system, the dynamical behavior in beta-alumina, and the cation diffusion in α-AgI are discussed. 8 figures.

Techniques in high pressure neutron scattering
Klotz, Stefan
Drawing on the author's practical work from the last 20 years, Techniques in High Pressure Neutron Scattering is one of the first books to gather recent methods that allow neutron scattering well beyond 10 GPa. The author shows how neutron scattering has to be adapted to the pressure range and type of measurement. Suitable for both newcomers and experienced high pressure scientists and engineers, the book describes various solutions spanning two to three orders of magnitude in pressure that have emerged in the past three decades. Many engineering concepts are illustrated through examples of rea...

Monte Carlo simulations of neutron scattering instruments
Aestrand, Per-Olof (Copenhagen Univ.); Lefmann, K.; Nielsen, K.
A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)
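Instrument-simulation packages of the kind discussed above follow neutrons ray by ray through the beamline components. The toy below is not McStas code and uses made-up geometry; it only sketches the idea: sample rays from a divergent source, discard those stopped by a slit, and count those reaching a detector.

    # Toy ray-tracing Monte Carlo in the spirit of instrument simulation
    # (illustrative only; geometry and numbers are invented).
    import random

    random.seed(1)

    SOURCE_WIDTH = 0.02                    # m
    DIVERGENCE = 0.01                      # rad, flat distribution +/- this value
    SLIT_POS, SLIT_WIDTH = 2.0, 0.005      # m
    DET_POS, DET_WIDTH = 4.0, 0.02         # m

    def trace_one():
        x = random.uniform(-SOURCE_WIDTH / 2, SOURCE_WIDTH / 2)
        theta = random.uniform(-DIVERGENCE, DIVERGENCE)
        if abs(x + SLIT_POS * theta) > SLIT_WIDTH / 2:
            return False                   # absorbed by the slit
        return abs(x + DET_POS * theta) < DET_WIDTH / 2   # reaches the detector?

    n = 200_000
    hits = sum(trace_one() for _ in range(n))
    print(f"transmission to detector: {hits / n:.4%}")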
Kornduangkaeo, Areeratt; Pongkasem, Somchai; Putchar, Suriya; Ampornrat, Pantip; Kajornrith, Varavuth; Chamchang, Jipawat
The current neutron powder diffractometer at the Thai Research Reactor-1/Modification 1 (TRR-1/M1) has been modified from the obsolete neutron diffractometer which had been used during 1968-1975. The upgraded diffractometer has medium resolution and is appropriate for studying samples with small unit cell dimensions and training university students in the field of neutron scattering. This paper describes the current activities of neutron scattering research in Thailand, the current status of a new research reactor project at Ongkarak for enlarging the perspectives of its utilization in the future, as well as the organizational reformation of the Office of Atomic Energy for Peace (OAEP). (author)

Kornduangkaeo, Areeratt; Pongkasem, Somchai; Putchar, Suriya; Ampornrat, Pantip; Kajornrith, Varavuth; Sangariyavanich, Archara
The current neutron powder diffractometer at the Thai Research Reactor-1/M1 (TRR-1/M1) has been modified from the obsolete neutron diffractometer which had been used during 1968-1975. The upgraded diffractometer has medium resolution and is appropriate for studying samples with small unit cell dimensions and training university students in the field of neutron scattering. This paper describes the current activities of neutron scattering research in Thailand as well as a new research reactor for enlarging the perspectives of its utilization in the future. (author)

Studies of the dynamic properties of materials using neutron scattering
Lovesey, S.W.; Windsor, C.G.
The dynamic properties of materials studied using the neutron scattering technique are reviewed. The basic properties of both nuclear scattering and magnetic scattering are summarized. The experimental methods used in neutron scattering are described, along with access to neutron sources and neutron inelastic instruments. Applied materials science using inelastic neutron scattering, rotational tunnelling of a methyl group, molecular diffusion from quasi-elastic scattering, and the diffusion of colloidal particles and poly-nuclear complexes are also briefly discussed. (U.K.)

A Long-Pulse Spallation Source at Los Alamos: Facility description and preliminary neutronic performance for cold neutrons
Russell, G.J.; Weinacht, D.J.; Pitcher, E.J.; Ferguson, P.D.
The Los Alamos National Laboratory has discussed installing a new 1-MW spallation neutron target station in an existing building at the end of its 800-MeV proton linear accelerator. Because the accelerator provides pulses of protons each about 1 msec in duration, the new source would be a Long Pulse Spallation Source (LPSS). The facility would employ vertical extraction of moderators and reflectors, and horizontal extraction of the spallation target. An LPSS uses coupled moderators rather than decoupled ones. There are potential gains of about a factor of 6 to 7 in the time-averaged neutron brightness for cold-neutron production from a coupled liquid H₂ moderator compared to a decoupled one. However, these gains come at the expense of putting 'tails' on the neutron pulses. The particulars of the neutron pulses from a moderator (e.g., energy-dependent rise times, peak intensities, pulse widths, and decay constant(s) of the tails) are crucial parameters for designing instruments and estimating their performance at an LPSS. Tungsten is the reference target material. Inconel 718 is the reference target canister and proton beam window material, with Al-6061 being the choice for the liquid H₂ moderator canister and vacuum container. A 1-MW LPSS would have world-class neutronic performance. The authors describe the proposed Los Alamos LPSS facility, and show that, for cold neutrons, the calculated time-averaged neutronic performance of a liquid H₂ moderator at the 1-MW LPSS is equivalent to about 1/4 of the calculated neutronic performance of the best liquid D₂ moderator at the Institut Laue-Langevin reactor. They show that the time-averaged moderator neutronic brightness increases as the size of the moderator gets smaller.
Basic and Applied Science Research at the Los Alamos Neutron Science Center
Lisowski, Paul W.
The Los Alamos Neutron Science Center, or LANSCE, is an accelerator-based national user facility for research in basic and applied science using four experimental areas. LANSCE has two areas that provide neutrons generated by the 800-MeV proton beam striking tungsten target systems. A third area uses the proton beam for radiography. The fourth area uses 100 MeV protons to produce medical radioisotopes. This paper describes the four LANSCE experimental areas, gives nuclear science highlights of the past operating period, and discusses plans for the future.

Advantages of neutron scattering for biological structure analysis
Schoenborn, B.P.
The advantages and disadvantages of neutron scattering for protein crystallography, scattering from oriented systems, and solution scattering are summarized. Techniques for minimizing the disadvantages are indicated.

Anomalous neutron scattering and ferroelectric modes
Viswanathan, K.S.
It is suggested that anomalous neutron scattering could prove a powerful experimental tool in studying ferroelectric phase transitions, the sublattice displacements of the soft modes, as well as their symmetry characteristics. (author)

Thermal-neutron multiple scattering: critical double scattering
Holm, W.A.
A quantum mechanical formulation for multiple scattering of thermal neutrons from macroscopic targets is presented and applied to single and double scattering. Critical nuclear scattering from liquids and critical magnetic scattering from ferromagnets are treated in detail in the quasielastic approximation for target systems slightly above their critical points. Numerical estimates are made of the double scattering contribution to the critical magnetic cross section using relevant parameters from actual experiments performed on various ferromagnets. The effect is to alter the usual Lorentzian line shape dependence on neutron wave vector transfer. Comparison with corresponding deviations in line shape resulting from the use of Fisher's modified form of the Ornstein-Zernike spin correlations within the framework of single scattering theory leads to values for the critical exponent eta of the modified correlations which reproduce the effect of double scattering. In addition, it is shown that by restricting the range of applicability of the multiple scattering theory from the outset to critical scattering, Glauber's high energy approximation can be used to provide a much simpler and more powerful description of multiple scattering effects. When sufficiently close to the critical point, it provides a closed form expression for the differential cross section which includes all orders of scattering and has the same form as the single scattering cross section with a modified exponent for the wave vector transfer.

Neutron scattering studies of magnetic molecular magnets
Chaboussant, G.
This work deals with inelastic neutron scattering studies of magnetic molecular magnets and focuses on their magnetic properties at low temperature and low energies. Several molecular magnets (Mn₁₂, V₁₅, Ni₁₂, Mn₄, etc.) are reviewed. Inelastic neutron scattering is shown to be a perfectly suited spectroscopy tool to (a) probe magnetic energy levels in such systems and (b) provide key information to understand the quantum tunnel effect of the magnetization in molecular spin clusters. (author)
Progress report on neutron scattering at JAERI
Morii, Yukio (Japan Atomic Energy Research Inst., Tokai Research Establishment, Tokai, Ibaraki, Japan)
In the first half of fiscal year 1997, JRR-3M was operated for 97 days, followed by a long-term shutdown for its annual maintenance. Three days were lost out of 100 scheduled operation days due to a problem in the irradiation facility. Neutron scattering research activities at the JRR-3M have been extended from those of fiscal year 1996. In the Research Group for Quantum Condensed Matter Systems, experimental studies under high pressures, low temperatures and high fields, as well as combinations of these conditions, were planned to find new quantum condensed matter systems, and the experimental results obtained were immediately provided to theorists for their investigations. In cooperation with a new group, the Research Group for Neutron Scattering of Strongly Correlated Electron Systems and the Research Group for Neutron Scattering at Ultralow Temperatures were carrying out neutron scattering experiments at JRR-3M. The Research Group for Neutron Crystallography in Biology had opened a way for investigating biomatter neutron diffraction with high experimental accuracy by growing a millimeter-class large single crystal. In fiscal year 1997, 39 research projects were conducted by these four groups and other staff in JAERI; 27 projects in collaboration with university researchers and 3 projects in collaboration with private enterprises were also conducted as complementary research. 2117 days of machine time were requested on the 8 neutron scattering instruments this year, which corresponded to 1.51 times more than planned at the beginning. (G.K.)

Scattering of Neutrons by an Anharmonic Crystal
Hoegberg, T.; Bohlin, L.; Ebbsjoe, I.
Numerical calculations have been performed for the anharmonic effects in neutron scattering. The phonon frequency widths and shifts have been calculated as a function of neutron frequency at different wave numbers and temperatures for a potential with central symmetry and for a face-centered cubic lattice.

Status and neutron scattering experiments at KENS
Watanabe, N.; Sasaki, H.; Ishikawa, Y.; Endoh, Y.; Inoue, K.
This paper reports the present status of the KENS facility, progress in neutron scattering experiments and instrumental developments, and the status of the KENS-I' program. A design study of a high intensity rapid-cycle 800 MeV proton synchrotron for a proposed new pulsed neutron (KENS-II) and meson source is also described.

Lectures on magnetism and neutron scattering
The paper contains six lectures given to the Neutron Division of the Rutherford Appleton Laboratory in 1983. The aim was to explain the fundamental physics of neutron scattering and basic magnetism to the non-specialist scientist. The text includes: the origin of the neutron's magnetic moment and spin-dependent interactions with electrons and nuclei, why solids are magnetic, magnetic anisotropy and domain structure, phenomenological spin waves, magnetic phase transitions, and electronic excitations in magnets. (U.K.)

Ramsauer effect in triplet neutron-neutron scattering
Pupyshev, V.V.; Solovtsova, O.P.
As we show, due to the interplay of pure nuclear and magnetic moment interactions, the total cross section of triplet neutron-neutron scattering should possess a non-zero limit at E_cm = 0 and a local minimum at ∼20 keV. 17 refs., 1 fig.
Introduction to neutron scattering. Lecture notes of the introductory course
These proceedings enclose ten papers presented at the 1st European Conference on Neutron Scattering (ECNS '96). The aim of the Introductory Course was fourfold: to learn the basic principles of neutron scattering; to get introduced to the most important classes of neutron scattering instruments; to learn concepts and their transformation into neutron scattering experiments in various fields of condensed matter research; and to recognize the limitations of the neutron scattering technique as well as the complementarity of other methods. figs., tabs., refs.

Neutron scattering by normal liquids
Gennes, P.G. de (Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires)
Neutron data on motions in normal liquids well below the critical point are reviewed and classified according to the order of magnitude of the momentum transfers ℏq and energy transfers ℏω. For large momentum transfers a perfect gas model is valid. For smaller q and incoherent scattering, the major effects are related to the existence of two characteristic times: the period of oscillation of an atom in its cell, and the average lifetime of the atom in a definite cell. Various interpolation schemes covering both time scales are discussed. For coherent scattering and intermediate q, the energy spread is expected to show a minimum whenever q corresponds to a diffraction peak. For very small q the standard macroscopic description of density fluctuations is applicable. The limits of the various q and ω domains and the validity of various approximations are discussed by a method of moments. The possibility of observing discrete transitions due to internal degrees of freedom in polyatomic molecules, in spite of the 'Doppler width' caused by translational motions, is also examined. (author)
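The narrowing of the energy spread at a diffraction peak mentioned above is usually made quantitative through the normalized second frequency moment of the coherent scattering law. The relation below is the standard classical result (quoted here as background, not taken from the abstract itself):

    % Normalized second frequency moment of S(q,\omega) for a classical
    % monatomic liquid of particle mass M at temperature T:
    \langle \omega^2 \rangle_q
      = \frac{\int \omega^2 S(q,\omega)\,d\omega}{\int S(q,\omega)\,d\omega}
      = \frac{k_B T\, q^2}{M\, S(q)}
    % The quasi-elastic line is therefore narrowest where the structure
    % factor S(q) peaks ("de Gennes narrowing").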
Neutron Scattering from ³⁶Ar and ⁴He Films
Carneiro, K.
Scale factors for neutron diffraction and neutron inelastic scattering are presented for common adsorbates, and the feasibility of experiments is discussed together with the information gained by each type of experiment. Diffraction, coherent inelastic scattering, and incoherent scattering are tr...

The WNR facility - a pulsed spallation neutron source at the Los Alamos Scientific Laboratory
Russell, G.J.; Lisowski, P.W.; King, N.S.P.
The Weapons Neutron Research facility (WNR) at the Los Alamos Scientific Laboratory is the first operating example of a new class of pulsed neutron sources using the X(p,n)Y spallation reaction. At present, up to 10 microamperes of 800-MeV protons from the Clinton P. Anderson Meson Physics Facility (LAMPF) linear accelerator bombard a Ta target to produce an intense white-neutron spectrum from about 800 MeV to 100 keV. The Ta target can be coupled with CH₂ and H₂O moderators to produce neutrons of lower energy. The time structure of the WNR proton beam may be varied to optimize neutron time-of-flight (TOF) measurements covering the energy range from several hundred MeV to a few meV. The neutronics of the WNR target and target/moderator configurations have been calculated from 800 MeV to 0.5 eV. About 11 neutrons per proton are predicted for the existing Ta target. Some initial neutron TOF data are presented and compared with calculations.

Neutron Scattering in Biology: Techniques and Applications
Fitter, Jörg; Katsaras, John
The advent of new neutron facilities and the improvement of existing sources and instruments worldwide supply the biological community with many new opportunities in the areas of structural biology and biological physics. The present volume offers a clear description of the various neutron-scattering techniques currently being used to answer biologically relevant questions. Their utility is illustrated through examples by some of the leading researchers in the field of neutron scattering. This volume will be a reference for researchers and a step-by-step guide for young scientists entering the field and the advanced graduate student.

Diffuse neutron scattering signatures of rough films
Pynn, R.; Lujan, M. Jr.
Patterns of diffuse neutron scattering from thin films are calculated from a perturbation expansion based on the distorted-wave Born approximation. Diffuse fringes can be categorised into three types: those that occur at constant values of the incident or scattered neutron wavevectors, and those for which the neutron wavevector transfer perpendicular to the film is constant. The variation of intensity along these fringes can be used to deduce the spectrum of surface roughness for the film and the degree of correlation between the film's rough surfaces.

Fast-neutron scattering from elemental cadmium
Neutron differential elastic-scattering cross sections of elemental cadmium were measured from approximately 1.5 to 4.0 MeV at incident-neutron energy intervals of 50 to 200 keV and at 10 to 20 scattering angles distributed between approximately 20 and 160 degrees. Concurrently, lumped-level neutron inelastic-excitation cross sections were measured. The experimental results are used to deduce parameters of an optical-statistical model that is descriptive of the observables and are compared with corresponding quantities given in ENDF/B-V.

Neutron scattering instruments for the Spallation Neutron Source (SNS)
Crawford, R.K.; Fornek, T.; Herwig, K.W.
The Spallation Neutron Source (SNS) is a 1 MW pulsed spallation source for neutron scattering planned for construction at Oak Ridge National Laboratory. This facility is being designed as a 5-laboratory collaboration project. This paper addresses the proposed facility layout, the process for selection and construction of neutron scattering instruments at the SNS, the initial planning done on the basis of a reference set of ten instruments, and the plans for research and development (R and D) to support construction of the first ten instruments and to establish the infrastructure to support later development and construction of additional instruments.
Small angle neutron scattering by polymer solutions
Farnoux, B.; Jannink, G.
Small angle neutron scattering is an experimental technique introduced about 10 years ago for the observation of polymer conformation over the whole concentration range from dilute solution to the melt. After a brief recall of the elementary relations between scattering amplitude, index of refraction and scattered intensity, two concepts related to this last quantity (the contrast and the pair correlation function) are discussed in detail.

Inelastic neutron scattering from cerium under pressure
Rainford, B.D.; Buras, B.; Lebech, B.
Inelastic neutron scattering from Ce metal at 300 K was studied both below and above the first order γ-α phase transition, using a triple axis spectrometer. It was found that (a) there is no indication of any residual magnetic scattering in the collapsed α phase and (b) the energy width of the paramagnetic scattering in the γ phase increases with pressure. (Auth.)

Zeeman splitting of surface-scattered neutrons
Felcher, G.P.; Adenwalla, S.; De Haan, V.O.; Van Well, A.A.
If a beam of slow neutrons impinges on a solid at grazing incidence, the reflected neutrons can be used to probe the composition and magnetization of the solid near its surface. In this process, the incident and reflected neutrons generally have identical kinetic energies. Here we report the results of an experiment in which subtle inelastic scattering processes are revealed as relatively large deviations in scattering angle. The neutrons are scattered from a ferromagnetic surface in the presence of a strong ambient magnetic field, and exhibit a small but significant variation in kinetic energy as a function of the reflection angle. This effect is attributable to the Zeeman splitting of the energies of the neutron spin states due to the ambient magnetic field: some neutrons flip their spins upon reflection from the magnetized surface, thereby exchanging kinetic energy for magnetic potential energy. The subtle effects of Zeeman splitting are amplified by the extreme sensitivity of grazing-angle neutron scattering, and might also provide a useful spectroscopic tool if significant practical obstacles (such as low interaction cross-sections) can be overcome. (author)
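A quick order-of-magnitude check makes clear why the energy exchanged in a spin flip shows up only as a small angular deviation: the Zeeman energy 2μ_nB is tiny compared with a cold neutron's kinetic energy. The field and wavelength below are assumed round numbers, not values from the paper.

    # Order-of-magnitude check (illustrative numbers, not from the paper):
    # Zeeman energy exchanged in a spin flip versus cold-neutron kinetic energy.
    mu_n = 9.6623651e-27     # neutron magnetic moment magnitude, J/T
    eV = 1.602177e-19        # joules per electron-volt

    B = 1.0                                       # assumed ambient field, tesla
    delta_E_neV = 2 * mu_n * B / eV * 1e9
    print(f"spin-flip energy at {B} T: {delta_E_neV:.0f} neV")      # ~120 neV

    wavelength_A = 4.0                            # assumed cold-neutron wavelength
    E_meV = 81.804 / wavelength_A**2              # standard E[meV] = 81.8 / lambda[A]^2
    print(f"kinetic energy at {wavelength_A} A: {E_meV:.2f} meV")
    print(f"relative energy change: {delta_E_neV * 1e-6 / E_meV:.1e}")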
Neutron Brillouin scattering in dense fluids
Verkerk, P. (Technische Univ. Delft, Netherlands); FINGO Collaboration
Thermal neutron scattering is a typical microscopic probe for investigating dynamics and structure in condensed matter. In contrast, light (Brillouin) scattering, with its three orders of magnitude larger wavelength, is a typical macroscopic probe. In a series of experiments using the improved small-angle facility of IN5, a significant step forward is made towards reducing the gap between the two. For the first time the transition from the conventional single line in the neutron spectrum scattered by a fluid to the Rayleigh-Brillouin triplet known from light-scattering experiments is clearly and unambiguously observed in the raw neutron data without applying any corrections. Results of these experiments are presented. (author)

Polarized neutron scattering research: the beginning
The visionary idea of using neutron scattering for the study of magnetic phenomena in condensed matter was proposed by Bloch in 1936, a mere 4 years after the neutron was discovered. It was based on one of the surprises the neutron presented the scientific community with: it is neutral, yet it has a magnetic moment, although the latter had not yet been directly observed at the time. Although the first results proved to be mathematically wrong, due to a non-trivial ambiguity of classical electromagnetism theory which could only be settled by neutron beam experiments 15 years later, the recognition led to the advent of a most productive area of modern research, which culminated in the development of the powerful and sophisticated techniques of polarized neutron scattering. This recollection traces the early milestones of the development of the field in strong interaction between theory and experiment.

Building a Network for Neutron Scattering Education
Pynn, Roger; Baker, Shenda Mary; Louca, Despo A.; McGreevy, Robert L.; Ekkebus, Allen E.; Kszos, Lynn A.; Anderson, Ian S. (ORNL)
In a concerted effort supported by the National Science Foundation, the Department of Commerce, and the Department of Energy, the United States is rebuilding its leadership in neutron scattering capability through a significant investment in U.S. neutron scattering user facilities and related instrumentation. These unique facilities provide opportunities in neutron scattering to a broad community of researchers from academic institutions, federal laboratories, and industry. However, neutron scattering is often considered to be a tool for 'experts only', and in order for the U.S. research community to take full advantage of these new and powerful tools, a comprehensive education and outreach program must be developed. The workshop described below is the first step in developing a national program that takes full advantage of modern education methods and leverages the existing educational capacity at universities and national facilities. During March 27-28, 2008, a workshop entitled 'Building a Network for Neutron Scattering Education' was held in Washington, D.C. The goal of the workshop was to define and design a roadmap for a comprehensive neutron scattering education program in the United States. Successful implementation of the roadmap will maximize the national intellectual capital in neutron sciences and will increase the sophistication of research questions addressed by neutron scattering at the nation's forefront facilities. (See Appendix A for the list of attendees, Appendix B for the workshop agenda, Appendix C for a list of references. Appendix D contains the results of a survey given at the workshop; Appendix E contains summaries of the contributed talks.) The workshop brought together U.S. academicians, representatives from neutron sources, scientists who have developed nontraditional educational programs, educational specialists, and managers from government agencies to create a national structure for providing ongoing neutron scattering education.
The neutron scattering experiments in Thailand have been done continuously since the start-up of the reactor. In 1977, the Thai research reactor was modified to a TRIGA MARK III core. After that, the neutron spectrometer was installed again under a development program. Installation of the upgraded spectrometer was delayed because of some problems involving the neutron intensity and instruments. However, these problems were solved and the setup is almost completed. The paper reports the current status of the neutron spectrometer, the problems, and plans for the experiments. (author)

Neutron scattering studies of modulated magnetic structures
Aagaard Soerensen, Steen
This report describes investigations of the magnetic systems DyFe₄Al₈ and MnSi by neutron scattering and, in the former case, also by X-ray magnetic resonant scattering. The report is divided into three parts. The first is an introduction to the technique of neutron scattering with special emphasis on the relation between the scattering cross section and the correlations between the scattering entities of the sample; the theoretical framework of neutron scattering experiments using the polarized beam technique is outlined. The second part describes neutron and X-ray scattering investigations of the magnetic structures of DyFe₄Al₈. The Fe sublattice of the compound orders at 180 K in a cycloidal structure in the basal plane of the bct crystal structure. At 25 K the ordering of the Dy sublattice shows up. By the element-specific technique of X-ray resonant magnetic scattering, the basal plane cycloidal structure was also found for the Dy sublattice. The work also includes neutron scattering studies of DyFe₄Al₈ in magnetic fields up to 5 T applied along a <110> direction. The modulated structure on the Dy sublattice is quenched by a field lower than 1 T, whereas modulation is present on the Fe sublattice even when the 5 T field is applied. In the third part of the report, results from three small angle neutron experiments on MnSi are presented. At ambient pressure, MnSi is known to form a helical spin density wave at temperatures below 29 K. The application of 4.5 kbar pressure, intended as hydrostatic, decreased the Neel temperature to 25 K and changed the orientation of the modulation vector. To understand this reorientation within the current theoretical framework, anisotropic deformation of the sample crystal must be present. The development of magnetic critical scattering with an isotropic distribution of intensity has been studied at a level of detail higher than that of work found in the literature. Finally the potential of a novel polarization...
Neutron Inelastic Scattering Study of Liquid Argon
Skoeld, K.; Rowe, J.M.; Ostrowski, G. (Solid State Science Div., Argonne National Laboratory, Argonne, Illinois, US); Randolph, P.D. (Nuclear Technology Div., Idaho Nuclear Corporation, Idaho Falls, Idaho, US)
The inelastic scattering functions for liquid argon have been measured at 85.2 K. The coherent scattering function was obtained from a measurement on pure ³⁶Ar and the incoherent function was derived from the result obtained from the ³⁶Ar sample and the result obtained from a mixture of ³⁶Ar and ⁴⁰Ar for which the scattering is predominantly incoherent. The data, which are presented as smooth scattering functions at constant values of the wave vector transfer in the range 10-44 nm⁻¹, are corrected for multiple scattering contributions and for resolution effects. Such corrections are shown to be essential in the derivation of reliable scattering functions from neutron scattering data. The incoherent data are compared to recent molecular dynamics results and the mean square displacement as a function of time is derived. The coherent data are compared to molecular dynamics results and also, briefly, to some recent theoretical models.
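A mean-square displacement of the kind quoted above is commonly extracted from the incoherent intermediate scattering function through the Gaussian approximation, F_s(Q,t) ≈ exp(-Q²⟨r²(t)⟩/6). The inversion is sketched below with made-up values; these are not the liquid-argon data of the paper, and the paper's exact procedure may differ.

    # Minimal sketch: mean-square displacement from the incoherent
    # intermediate scattering function via the Gaussian approximation
    # F_s(Q,t) ~ exp(-Q^2 <r^2(t)> / 6).  Values are hypothetical.
    import math

    Q = 20.0                                   # wavevector transfer, 1/nm
    F_s = {0.1: 0.92, 0.5: 0.67, 1.0: 0.45}    # time (ps) -> F_s(Q,t), made up

    for t_ps, f in F_s.items():
        msd_nm2 = -6.0 * math.log(f) / Q**2
        print(f"t = {t_ps} ps : <r^2> ~ {msd_nm2:.5f} nm^2")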
The workshop was held in response to a recent report by the National Research Council of the National Academy of Sciences entitled "High Magnetic Field Science and Its Application in the United States: Current Status and Future Directions." This report highlights the fact that neutron scattering measurements carried out in high magnetic fields provide important opportunities for new science. The workshop explored the range of the scientific discoveries that could be enabled with neutron scattering measurements at high fields (25 Tesla or larger), the various technologies that might be utilized to build specialized instruments and sample environment equipment to enable this research at ORNL, and possible routes to funding and constructing these facilities and portable high field sample environments. Material classification by fast neutron scattering Buffler, A.; Brooks, F.D.; Allie, M.S.; Bharuth-Ram, K.; Nchodu, M.R. The scattering of a beam of fast monoenergetic neutrons is used to determine elemental compositions of bulk samples (0.2-0.8 kg) of materials composed of one or more of the elements H, C, N, O, Al, S, Fe and Pb. Scattered neutrons are detected by liquid scintillators placed at forward and at backward angles. Different elements are identified by their characteristic scattering signatures derived either from a combination of time-of-flight and pulse height measurements, or from pulse height measurements alone. Scattering signatures measured for multi-element samples are analysed to determine atom fractions for H, C, N, O and other elements in the sample. Atom fractions determined from scattering signatures are insensitive to neutron interactions in material surrounding the scattering sample, provided the amount of material is not excessive. The atom fraction data are used to classify scattering material into categories including 'explosives', 'illicit drugs' and 'other materials' for the purpose of contraband detection. Inelastic neutron scattering from glass formers Buchenau, U. Neutron spectra below and above the glass transition temperature show a pronounced difference between strong and fragile glass formers in Angell's fragility scheme. The strong anharmonic increase of the inelastic scattering with increasing temperature in fragile substances is absent in the strongest glass former SiO2. That difference is reflected in the temperature dependence of Brillouin sound velocities above the glass transition. Coherent inelastic neutron scattering data indicate a mixture of sound waves and local modes at the low frequency boson peak. A relation between the fragility and the temperature dependence of the transverse hypersound velocity at the glass temperature is derived. (author) Monte Carlo simulation of neutron scattering instruments Seeger, P.A. A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width Analytic scattering kernels for neutron thermalization studies Sears, V.F.
Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H 2 /D 2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results Larmor-precession based neutron scattering instrumentation Ioffe, Alexander The Larmor precession of the neutron spin in a magnetic field allows the attachment of a Larmor clock to every neutron. Such Larmor labelling opens the possibility for the development of unusual neutron scattering techniques, where the energy (momentum) resolution does not require the initial and final states to be well selected. This principally allows for achievement of very high energy (momentum) resolution that is not feasible at all with conventional neutron scattering techniques, because the required neutron beam monochromatization (collimation) will result in intolerable intensity losses. Such decoupling of resolution and collimation allows, for example, for a significant increase in the luminosity of small-angle scattering or high-resolution diffractometers; the fact that opens new perspectives for their implementation at middle flux neutron sources. Different kinds of Larmor clock-based instrumentation, particularly two alternative NSE techniques using rotating and time-gradient magnetic field arrangements, which can be considered as inexpensive and affordable alternatives to present day NSE techniques, will be discussed and results of simulations and first experiments will be presented. (author) Small-angle neutron-scattering experiments Hardy, A.D.; Thomas, M.W.; Rouse, K.D. A brief introduction to the technique of small-angle neutron scattering is given. The layout and operation of the small-angle scattering spectrometer, mounted on the AERE PLUTO reactor, is also described. Results obtained using the spectrometer are presented for three materials (doped uranium dioxide, Magnox cladding and nitrided steel) of interest to Springfields Nuclear Power Development Laboratories. The results obtained are discussed in relation to other known data for these materials. (author) Defense, basic, and industrial research at the Los Alamos Neutron Science Center: Proceedings Longshore, A.; Salgado, K. [comps. The Workshop on Defense, Basic, and Industrial Research at the Los Alamos Neutron Science Center gathered scientists from Department of Energy national laboratories, other federal institutions, universities, and industry to discuss the use of neutrons in science-based stockpile stewardship, The workshop began with presentations by government officials, senior representatives from the three weapons laboratories, and scientific opinion leaders. 
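As a quantitative aside to the Larmor-labelling description above: the precession phase a neutron accumulates in a field region scales with the field integral and inversely with the neutron velocity. The sketch below is purely illustrative; the field strength, path length and wavelength are assumed values, not parameters of any of the instruments cited.

```python
# Illustrative Larmor precession estimate; all input values are assumed.
GAMMA_N = 1.832e8     # neutron gyromagnetic ratio, rad s^-1 T^-1
H_OVER_M = 3.956e-7   # h / m_n in m^2 s^-1, so v [m/s] = H_OVER_M / lambda [m]

def larmor_phase(field_tesla, path_m, wavelength_angstrom):
    """Precession phase (rad) accumulated over a field region of length path_m."""
    v = H_OVER_M / (wavelength_angstrom * 1e-10)  # neutron speed, m/s
    return GAMMA_N * field_tesla * path_m / v

phi = larmor_phase(0.01, 1.0, 6.0)  # 0.01 T over 1 m, 6 Angstrom neutrons
print(f"phase ~ {phi:.0f} rad, ~ {phi / 6.283:.0f} turns")
```

The strong velocity dependence of this phase is what allows Larmor labelling to resolve small energy transfers without tight monochromatization of the incident beam.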
Workshop participants then met in breakout sessions on the following topics: materials science and engineering; polymers, complex fluids, and biomaterials; fundamental neutron physics; applied nuclear physics; condensed matter physics and chemistry; and nuclear weapons research. They concluded that neutrons can play an essential role in science-based stockpile stewardship and that there is overlap and synergy between defense and other uses of neutrons in basic, applied, and industrial research from which defense and civilian research can benefit. This proceedings is a collection of talks and papers from the plenary, technical, and breakout session presentations. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database. Magnetism and magnetic materials probed with neutron scattering Velthuis, S.G.E. te; Pappas, C. Neutron scattering techniques are becoming increasingly accessible to a broader range of scientific communities, in part due to the onset of next-generation, high-power spallation sources, high-performance, sophisticated instruments and data analysis tools. These technical advances also advantageously impact research into magnetism and magnetic materials, where neutrons play a major role. In this Current Perspective series, the achievements and future prospects of elastic and inelastic neutron scattering, polarized neutron reflectometry, small angle neutron scattering, and neutron imaging, are highlighted as they apply to research into magnetic frustration, superconductivity and magnetism at the nanoscale. - Highlights: • Introduction to Current Perspective series titled Magnetism and Magnetic Materials probed with Neutron Scattering. • Elastic and inelastic neutron scattering in systems with magnetic frustration and superconductivity. • Small angle neutron scattering and polarized neutron reflectometry in studying magnetism at the nanoscale. • Imaging of magnetic fields and domains Velthuis, S.G.E. te, E-mail: [email protected] [Materials Science Division, Argonne National Laboratory, 9700 S Cass Ave, Argonne, IL 60439 (United States); Pappas, C. [Faculty of Applied Sciences, Delft University of Technology, Mekelweg 15, NL-2629JB Delft (Netherlands) Neutron scattering techniques are becoming increasingly accessible to a broader range of scientific communities, in part due to the onset of next-generation, high-power spallation sources, high-performance, sophisticated instruments and data analysis tools. These technical advances also advantageously impact research into magnetism and magnetic materials, where neutrons play a major role. In this Current Perspective series, the achievements and future prospects of elastic and inelastic neutron scattering, polarized neutron reflectometry, small angle neutron scattering, and neutron imaging, are highlighted as they apply to research into magnetic frustration, superconductivity and magnetism at the nanoscale. - Highlights: • Introduction to Current Perspective series titled Magnetism and Magnetic Materials probed with Neutron Scattering. • Elastic and inelastic neutron scattering in systems with magnetic frustration and superconductivity. • Small angle neutron scattering and polarized neutron reflectometry in studying magnetism at the nanoscale. • Imaging of magnetic fields and domains. Interesting results from neutron-scattering Woods, A.D.B. 
Neutron scattering has been a useful tool in the determination of the lattice dynamics of metals, studies of the physics of magnetism in rare-earth systems, observing changes in the structure of DNA bases after ultraviolet irradiation, looking at plastic crystals, following structural phase changes in ferroelectric materials, and studying liquid He. Both low- and high-flux facilities are useful. (LL) Some applications of polarized inelastic neutron scattering A brief account of applications of polarized inelastic neutron scattering in condensed matter research is given. ... the itinerant antiferromagnet chromium we demonstrate that the dynamics of the longitudinal and transverse excitations are very different, resolving a long-standing puzzle concerning the slope of their dispersion. Critical scattering of neutrons from terbium Als-Nielsen, Jens Aage; Dietrich, O.W.; Marshall, W. The inelasticity of the critical scattering of neutrons in terbium has been measured above the Néel temperature at the (0, 0, 2−Q) satellite position. The results show that dynamic slowing down of the fluctuations does occur in a second-order phase transition in agreement with the general theory... NEUTRON-SCATTERING STUDY OF DCN Mackenzie, Gordon A.; Pawley, G. S. Phonons in deuterium cyanide have been measured by neutron coherent inelastic scattering. The main subject of study was the transverse acoustic mode in the (110) direction polarised along (110) which is associated with the first-order structural phase transition at 160 K. Measurements have shown... Neutron scattering (progress report) January - December 1991 Buehrer, W.; Fischer, P.; Furrer, A. Progress made by the Laboratory for Neutron Scattering of the Swiss Federal Institute of Technology during the year 1991 in the fields of high-Tc superconductors, materials science, magnetism, structural research, lattice dynamics, phase transitions, instrumental and support activities is reported. figs., tabs., refs Experimental technique of small angle neutron scattering Xia Qingzhong; Chen Bo The main parts of a Small Angle Neutron Scattering (SANS) spectrometer, their functions and the relevant parameters are introduced from an experimental point of view. Detailed information is also given for the SANS spectrometer 'Membrana-2'. Based on practical experiments, the fundamental requirements and working conditions for SANS experiments, including sample preparation, detector calibration, standard sample selection and preliminary data processing, are described. (authors) Neutron elastic scattering at very small angles This experiment will measure neutron-proton elastic scattering at very small angles and hence very small four-momentum transfer, |t|. The range of |t| depends on the incident neutron momentum of the events but the geometrical acceptance will cover the angular range 0.025 < Θ_lab < 1.9 mrad. The higher figure could be extended to 8.4 mrad by changing the geometry of the experiment in a later phase. The neutron beam will be highly collimated and will be derived from a 400 GeV external proton beam of up to 4 × 10^10 protons per pulse in the SPS North Area Hall 1. The hydrogen target will be gaseous, operating at 40 atm pressure, and acts as a multiwire proportional chamber to detect the recoil protons. The forward neutron will be detected and located by interaction in a neutron vertex detector and its energy measured by a conventional steel plate calorimeter.
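For the small-angle neutron-proton elastic scattering proposal above, laboratory angle translates into four-momentum transfer via the small-angle approximation |t| ≈ (pθ)². A minimal sketch follows; the beam momentum used is an assumed round number, since the abstract only specifies the parent 400 GeV proton beam, not the neutron momentum spectrum.

```python
# Small-angle elastic kinematics: |t| ~ (p * theta)^2, theta in radians.
def mandelstam_t_gev2(p_gev, theta_mrad):
    """Approximate |t| (GeV^2) for a small laboratory scattering angle."""
    return (p_gev * theta_mrad * 1e-3) ** 2

for theta in (0.025, 1.9, 8.4):
    print(f"theta = {theta:5.3f} mrad -> |t| ~ {mandelstam_t_gev2(200.0, theta):.2e} GeV^2")
```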
The experiment will cover the angular region of nucleon-nucleon scattering which is dominated by Coulomb scattering ... Resonance effects in neutron scattering lengths Lynn, J.E. The nature of neutron scattering lengths is described and the nuclear effects giving rise to their variation are discussed. Some examples of the shortcomings of the available nuclear data base, particularly for heavy nuclei, are given. Methods are presented for improving this data base, in particular for obtaining the energy variation of the complex coherent scattering length from long to sub-ångström wavelengths from the available sources of slow neutron cross section data. Examples of this information are given for several of the rare earth nuclides. Some examples of the effect of resonances in neutron reflection and diffraction are discussed. This report documents a seminar given at Argonne National Laboratory in March 1989. 18 refs., 18 figs. 2016 American Conference on Neutron Scattering (ACNS) Woodward, Patrick The 8th American Conference on Neutron Scattering (ACNS) was held July 10-14, 2016 in Long Beach, California, marking the first time the meeting has been held on the west coast. The meeting was coordinated by the Neutron Scattering Society of America (NSSA), and attracted 285 attendees. The meeting was chaired by NSSA vice president Patrick Woodward (the Ohio State University) assisted by NSSA president Stephan Rosenkranz (Argonne National Laboratory) together with the local organizing chair, Brent Fultz (California Institute of Technology). As in past years the Materials Research Society assisted with planning, logistics and operation of the conference. The science program was divided into the following research areas: (a) Sources, Instrumentation, and Software; (b) Hard Condensed Matter; (c) Soft Matter; (d) Biology; (e) Materials Chemistry and Materials for Energy; (f) Engineering and Industrial Applications; and (g) Neutron Physics. Fast neutron scattering near shell closures: Scandium Neutron differential elastic- and inelastic-scattering cross sections are measured from ∼ 1.5 to 10 MeV with sufficient detail to define the energy-averaged behavior of the scattering processes. Neutrons corresponding to excitations of 465 ± 23, 737 ± 20, 1017 ± 34, 1251 ± 20, 1432 ± 23 and 1692 ± 25 keV are observed. It is shown that the observables, including the absorption cross section, are reasonably described with a conventional optical-statistical model having energy-dependent geometric parameters. These energy dependencies are alleviated when the model is extended to include the contributions of the dispersion relationship. The model parameters are conventional, with no indication of anomalous behavior of the neutron interaction with 45Sc, five nucleons from the doubly closed shell at 40Ca.
Quantum entanglement and neutron scattering experiments Cowley, R.A. It is shown that quantum entanglement in condensed matter can be observed with scattering experiments if the energy resolution of the experiments enables a clear separation between the elastic scattering and the scattering from the excitations in the system. These conditions are not satisfied in recent deep inelastic neutron scattering experiments from hydrogen-containing systems that have been interpreted as showing the existence of quantum entanglement for short times in, for example, water at room temperature. It is shown that the theory put forward to explain these experiments is inconsistent with the first-moment sum rule for the Van Hove scattering function and we suggest that the theory is incorrect. The experiments were performed using the unique EVS spectrometer at ISIS and suggestions are made about how the data and their interpretation should be re-examined Neutron scattering study of dilute supercritical solutions Cochran, H.D.; Wignall, G.D.; Shah, V.M.; Londono, J.D.; Bienkowski, P.R. Dilute solutions in supercritical solvents exhibit interesting microstructures that are related to their dramatic macroscopic behavior. In typical attractive solutions, solutes are believed to be surrounded by clusters of solvent molecules, and solute molecules are believed to congregate in the vicinity of one another. Repulsive solutions, on the other hand, exhibit a local region of reduced solvent density around the solute with solute-solute congregation. Such microstructures influence solubility, partial molar volume, reaction kinetics, and many other properties. We have undertaken to observe these interesting microstructures directly by neutron scattering experiments on dilute noble gas systems including Ar. The three partial structure factors for such systems and the corresponding pair correlation functions can be determined by using the isotope substitution technique. The systems studied are uniquely suited for our objectives because of the large coherent neutron scattering length of the isotope 36Ar and because of the accurate potential energy functions that are available for use in molecular simulations and theoretical calculations to be compared with the scattering results. We will describe our experiment, the unique apparatus we have built for it, and the neutron scattering results from our initial allocations of beam time. We will also describe planned scattering experiments to follow those with noble gases, including study of long-chain molecules in supercritical solvents. Such studies will involve hydrocarbon mixtures with and without deuteration to provide contrast Neutron scattering on deformed nuclei Hansen, L.F.; Haight, R.C.; Pohl, B.A.; Wong, C.; Lagrange, C. Measurements of neutron elastic and inelastic differential cross sections around 14 MeV for 9Be, C, 181Ta, 232Th, 238U and 239Pu have been analyzed using a coupled channel (CC) formalism for deformed nuclei and phenomenological global optical model potentials (OMP). For the actinide targets these results are compared with the predictions of a semi-microscopic calculation using the Jeukenne, Lejeune and Mahaux (JLM) microscopic OMP and a deformed ground state nuclear density.
The overall agreement between calculations and the measurements is reasonable good even for the very light nuclei, where the quality of the fits is better than those obtained with spherical OMP Small-angle neutron scattering at pulsed spallation sources Seeger, P.A.; Hjelm, R.P. Jr. The importance of small-angle neutron scattering (SANS) in biological, chemical, physical, and engineering research mandates that all intense neutron sources be equipped with SANS instruments. Four existing instruments are described, and the general differences between pulsed-source and reactor-based instrument designs are discussed. The basic geometries are identical, but dynamic range is achieved by using a broad band of wavelengths (with time-of-flight analysis) rather than by moving the detector. This allows a more optimized collimation system. Data acquisition requirements at a pulsed source are more severe, requiring large, fast histogramming memories. Data reduction is also more complex, as all wave length-dependent and angle-dependent backgrounds and non-linearities must be accounted for before data can be transformed to intensity vs Q. A comparison is shown between the Los Alamos pulsed instrument and D-11 (Institute Laue-Langevin), and examples from the four major topics of the conference are shown. The general conclusion is that reactor-based instruments remain superior at very low Q or if only a narrow range of Q is required, but that the current generation of pulsed-source instruments is competitive at moderate Q and may be faster when a wide range of Q is required. In principle, a user should choose which facility to use on the basis of optimizing the experiment; in practice the tradeoffs are not severe and the choice is usually made on the basis of availability The importance of small-angle neutron scattering (SANS) in biological, chemical, physical and engineering research mandates that all intense neutron sources be equipped with SANS instruments. Four existing instruments at pulsed sources are described and the general differences between pulsed-source and reactor-based instrument designs are discussed. The basic geometries are identical, but dynamic range is generally achieved by using a broad band of wavelengths (with time-of-flight analysis) rather than by moving the detector. This allows optimization for maximum beam intensity at a given beam size over the full dynamic range with fixed collimation. Data-acquisition requirements at a pulsed source are more severe, requiring large fast histrograming memories. Data reduction is also more complex, as all wavelength-dependent and angle-dependent backgrounds and nonlinearities must be accounted for before data can be transformed to intensity vs momentum transfer (Q). A comparison is shown between the Los Alamos pulsed instrument and D11 (Institut Laue-Langevin) and examples from the four major topics of the conference are shown. The general conclusion is that reactor-based instruments remain superior at very low Q or if only a narrow range of Q is required, but that the current generation of pulsed-source instruments is competitive of moderate Q and may be faster when a wide range of Q is required. (orig.) Neutron transfer with anisotropic scattering El Wakil, S.A.; Haggag, M.H.; Saad, E.A. The finite slab problem is reduced to a semi-infinite one by adding an infinitesimally thick layer such that both the added layer and the total layer are semi-infinite. 
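The pulsed-source SANS comparison above rests on the fact that, at fixed geometry, the accessible momentum transfer Q = (4π/λ) sin θ (with θ half the scattering angle) is swept by the wavelength band recorded in time-of-flight. A hedged illustration, with an invented detector geometry and wavelength band:

```python
import math

def q_inv_angstrom(wavelength_angstrom, half_angle_rad):
    """Momentum transfer Q = 4*pi*sin(theta)/lambda in inverse Angstroms."""
    return 4.0 * math.pi * math.sin(half_angle_rad) / wavelength_angstrom

DISTANCE_M = 4.0        # sample-detector distance (assumed)
RADII_M = (0.03, 0.30)  # inner and outer detector radii (assumed)
half_angles = [0.5 * math.atan(r / DISTANCE_M) for r in RADII_M]

for lam in (2.0, 14.0):  # assumed time-of-flight wavelength band
    q_lo, q_hi = (q_inv_angstrom(lam, th) for th in half_angles)
    print(f"lambda = {lam:4.1f} A: Q = {q_lo:.4f} - {q_hi:.4f} A^-1")
```

The same Q range on a fixed-wavelength reactor instrument would instead require moving the detector or changing the collimation.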
The relation between the reflection and transmission functions for a finite slab and those for an infinite one are obtained in terms of an operator which satisfies a semigroup equation. The method is applied to anisotropic scattering with azimuthal dependence. Numerical calculations are made and the results compared with those of other workers. (author) Proceedings of the workshop on neutron scattering instrumentation for SNQ Scherm, R.; Stiller, H. These proceedings contain the articles presented at the named workshop. These concern instrumentation for neutron diffraction with special regards to small angle scattering, diffuse scattering, inelastic scattering, high resolution spectroscopy, and special techniques. (HSI) Applications of thermal neutron scattering in biology, biochemistry and biophysics Worcester, D.L. Biological applications of thermal neutron scattering have increased rapidly in recent years. The following categories of biological research with thermal neutron scattering are presently identified: crystallography of biological molecules; neutron small-angle scattering of biological molecules in solution (these studies have already included numerous measurements of proteins, lippoproteins, viruses, ribosomal subunits and chromatin subunit particles); neutron small-angle diffraction and scattering from biological membranes and membrane components; and neutron quasielastic and inelastic scattering studies of the dynamic properties of biological molecules and materials. (author) Theory of neutron scattering in disordered alloys Yussouff, M.; Mookerjee, A. A comprehensive theory of thermal neutron scattering in disordered alloys is presented here. We consider in detail the case of substitutional random binary alloy with random changes in mass and force constants; and for all values of the concentration. The cluster CPA formalism in argumented space developed here is free from analytical difficulties for the Green function, performs correct averaging over random atomic scattering lengths and employs a self-consistent medium for the calculations. For easy computation, we describe the graphical representation of the resolvent where the approximation steps can be depicted as closed paths in augmented space. Our results for scattering cross sections, both coherent and incoherent, include new types of terms and these lead to asymmetric line shapes for the coherent scattering. (author) Scattering of fast neutrons from 103Rh Barnard, E.; Reitmann, D. The scattering of fast neutrons from 103 Rh was studied by means of (n, n), (n, n') and (n, n'γ) measurements at neutron energies up to 2 MeV. More than fifty unknown γ-transitions were identified and a level scheme established which includes fifteen unreported excited states. Branching ratios, spins and parities for these levels were deduced, as well as the effective activation cross sections for the 103 Rh(n, n')sup(103m)Rh reaction. The results are compared with existing data and with calculations based on the optical and statistical models. (Auth.) Quasielastic neutron scattering facility at Dhruva reactor Mukhopadhyay, R.; Mitra, S.; Paranjpe, S.K.; Dasannacharya, B.A. Quasi-elastic neutron scattering is a powerful experimental tool for studying the various dynamical motions in solids and liquids. In this paper, we have described the salient features of the quasi-elastic neutron spectrometer in operation at Dhruva reactor at Trombay, India. 
The design criteria have been such as to maximise the throughput by various means, such as a closer approach to the source, focusing a larger beam onto the sample, and a Multi-Angle Reflecting X-tal mode of energy analysis. Some results on molecular motions from recently studied systems using this spectrometer are also reported A mechanical rotator for neutron scattering measurements Thaler, A.; Northen, E.; Aczel, A. A.; MacDougall, G. J. We have designed and built a mechanical rotation system for use in single crystal neutron scattering experiments at low temperatures. The main motivation for this device is to facilitate the application of magnetic fields transverse to a primary training axis, using only a vertical cryomagnet. Development was done in the context of a triple-axis neutron spectrometer, but the design is such that it can be generalized to a number of different instruments or measurement techniques. Here, we discuss some of the experimental constraints motivating the design, followed by design specifics, preliminary experimental results, and a discussion of potential uses and future extension possibilities. Schlieren diagnostics of the Los Alamos hypersonic gas target neutron generator Haasz, A.A.; Lever, J.H. The gasdynamic behaviour of a planar model of the Los Alamos geometry hypersonic gas target neutron generator (GTNG) was investigated using Schlieren flow visualization photographs, static and total pressure and spill flow measurements. The model consisted of two symmetrical expansion nozzles with 220 μm throats producing a combined flow of about Mach 4 in the GTNG channel. Stagnation pressures of 100-800 kPa were used. Two basic flow configurations, spill line closed and spill line open, were studied in order to gain insight into the complex boundary layer development near the nozzle exit planes. Both flow configurations are discussed qualitatively, making use of the pressure measurements and theoretical analysis. (orig.) Los Alamos neutron science center nuclear weapons stewardship and unique national scientific capabilities Schoenberg, Kurt F. [Los Alamos National Laboratory] This presentation gives an overview of the Los Alamos Neutron Science Center (LANSCE) and its contributions to science and the nuclear weapons program. LANSCE is made up of multiple experimental facilities (the Lujan Center, the Weapons Neutron Research facility (WNR), the Ultra-Cold Neutron facility (UCN), the proton Radiography facility (pRad) and the Isotope Production Facility (IPF)) served by its kilometer-long linear accelerator. Several research areas are supported, including materials and bioscience, nuclear science, materials dynamics, irradiation response and medical isotope production. LANSCE is a national user facility that supports researchers worldwide. The LANSCE Risk Mitigation program is currently in progress to update critical accelerator equipment to help extend the lifetime of LANSCE as a key user facility. The Associate Directorate of Business Sciences (ADBS) plays an important role in the continued success of LANSCE. This includes key procurement support, human resource support, technical writing support, and training support. LANSCE is also the foundation of the future signature facility MARIE (Matter-Radiation Interactions in Extremes).
Neutron scattering from polarised proton domains Van den Brandt, B; Kohbrecher, J; Konter, J A; Mango, S; Glattli, H; Leymarie, E; Grillo, I; May, R P; Jouve, H; Stuhrmann, H B; Stuhrmann, H B; Zimmer, O Time-dependent small-angle polarised neutron scattering from domains of polarised protons has been observed at the onset of dynamic nuclear polarisation in a frozen solution of 98% deuterated glycerol-water at 1 K containing a small concentration of paramagnetic centres (EHBA-Cr sup V). Simultaneous NMR measurements show that the observed scattering arises from protons around the Cr sup V -ions which are polarised to approx 10% in a few seconds, much faster than the protons in the bulk. (authors) Neutron scattering. Annual progress report 1997 Allenspach, P.; Boeni, B.; Fischer, P.; Furrer, A. The present progress report describes the scientific and technical activities obtained by LNS staff members in 1997. It also includes the work performed by external groups at our CRG instruments D1A and IN3 at the ILL Grenoble. Due to the outstanding properties of neutrons and x-rays the research work covered many areas of science and materials research. The highlight of the year 1997 was certainly the production of neutrons at the new spallation neutron source SINQ. From July to November, SINQ was operating for typically two days/week and allowed the commissioning of four instruments at the neutron guide system: - the triple-axis spectrometer Druechal, - the powder diffractometer DMC, - the double-axis diffractometer TOPSI, the polarised triple-axis spectrometer TASP. These instruments are now fully operational and have already been used for condensed matter studies, partly in cooperation with external groups. Five further instruments are in an advanced state, and their commissioning is expected to occur between June and October 1998: - the high-resolution powder diffractometer HRPT, - the single-crystal diffractometer TriCS, - the time-of-flight spectrometer FOCUS, - the reflectometer AMOR, - the neutron optical bench NOB. Together with the small angle neutron scattering facility SANS operated by the spallation source department, all these instruments will be made available to external user groups in the future. (author) figs., tabs., refs Specimen environments in thermal neutron scattering experiments Cebula, D.J. This report is an attempt to collect into one place outline information concerning the techniques used and basic design of sample environment apparatus employed in neutron scattering experiments. Preliminary recommendations for the specimen environment programme of the SNS are presented. The general conclusion reached is that effort should be devoted towards improving reliability and efficiency of operation of specimen environment apparatus and developing systems which are robust and easy to use, rather than achieving performance at the limits of technology. (author) Summary of coherent neutron scattering length Rauch, H. Experimental values of neutron-nuclei bound scattering lengths for some 354 isotopes and elements and the various spin-states are compiled in a uniform way together with their error bars as quoted in the original literature. Recommended values are also given. The definitions of the relevant quantities presented in the data tables and the basic principles of measurements are explained in the introductory chapters. The data is also available on a magnetic tape Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr. 
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC RUN, the web application MC Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown Neutron scattering studies of Mn12-acetate Robinson, R.A. Full text: The S=10 magnetic molecule Mn 12 -acetate, which crystallises into a tetragonal crystal structure, has attracted substantial recent attention by virtue of its low temperature bulk magnetic properties, which give evidence for resonant quantum tunnelling of the magnetisation. We report a full neutron crystal structure including positions of all protons/deuterons, including the solvated water and acetic acid, a polarised-neutron study of the real space magnetisation, which confirms a simple magnetic-structure model for the molecule, albeit with reduced Mn moments, and inelastic neutron scattering data containing both the excitations within the 21-fold degenerate S=10 manifold, and those from S=10 to the S=9 manifolds. Both manifolds are split by uniaxial magnetic anisotropy, and we report coefficients for 2nd and 4th-order terms in the magnetic Hamiltonian Neutron scattering studies on frustrated magnets Arima, Taka-hisa A lot of frustrated magnetic systems exhibit a nontrivial magnetic order, such as long-wavelength modulation, noncollinear, or noncoplanar order. The nontrivial order may pave the way for the novel magnetic function of matter. Neutron studies are necessary to determine the magnetic structures in the frustrated magnetic systems. In particular, spin-polarized neutron scattering is a useful technique for the investigation of the novel physical properties relevant to the nontrivial spin arrangement. Here some neutron studies on a multiferroic perovskite manganese oxide system are demonstrated as a typical case. The frustrated magnetic systems may also a playground of novel types of local magnetic excitations, which behave like particles in contrast to the magnetic waves. It is becoming a good challenge to study such particle-type magnetic excitations relevant to the magnetic frustration. (author) Neutron scattering cross sections of uranium-238 Beghian, L.E.; Kegel, G.H.R.; Marcella, T.V.; Barnes, B.K.; Couchell, G.P.; Egan, J.J.; Mittler, A.; Pullen, D.J.; Schier, W.A. The University of Lowell high-resolution time-of-flight spectrometer was used to measure angular distributions and 90-deg excitation functions for neutrons scattered from 238 U in the energy range from 0.9 to 3.1 MeV. This study was limited to the elastic and the first two inelastic groups, corresponding to states of 238 U at 45 keV (2 + ) and 148 keV (4 + ). Angular distributions were measured at primary neutron energies of 1.1, 1.9, 2.5, and 3.1 MeV for the same three neutron groups. 
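The MCLIB description above specifies a particle by its position and velocity vectors, time of flight, mass, charge and polarization, with the executive code handling transport between regions. The toy sketch below merely restates that data layout in Python with invented names; it is not the MCLIB interface itself.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    # Fields mirror the quantities listed in the MCLIB description; the class
    # and its names are hypothetical illustrations only.
    position: tuple           # (x, y, z), metres
    velocity: tuple           # (vx, vy, vz), m/s
    tof: float = 0.0          # accumulated time of flight, s
    weight: float = 1.0       # statistical weight
    polarization: tuple = (0.0, 0.0, 1.0)

def drift(p: Particle, dt: float) -> Particle:
    """Free flight for time dt: advance the position and the time of flight."""
    p.position = tuple(x + v * dt for x, v in zip(p.position, p.velocity))
    p.tof += dt
    return p

n = Particle(position=(0.0, 0.0, 0.0), velocity=(0.0, 0.0, 1000.0))
drift(n, 10.0 / 1000.0)   # 10 m of flight at 1000 m/s (illustrative numbers)
print(n.position, n.tof)
```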
Whereas the elastic data are in fair agreement with the evaluation in the ENDF/B-IV file, there is substantial disagreement between the inelastic measurements and the evaluated cross sections. 12 figures Study of material science by neutron scattering Kim, H.J.; Yoon, B.K.; Cheon, B.C.; Lee, C.Y.; Kim, C.S. To develop accurate methods of texture measurement in metallic materials by neutron diffraction, (100),(200),(111) and (310) pole figures have been measured for the oriented silicon steel sheet, and currently study of correction methods for neutron absorption and extinction effects are in progress. For quantitative analysis of texture of polycrystalline material with a cubic structure, a software has been developed to calculate inverse pole figures for arbitrary direction specified in the speciman as well as pole figures for arbitrary chosen crystallographic planes from three experimental pole figures. This work is to be extended for the calculation of three dimensional orientation distribution function and for the evaluation of errors in the quantitative analysis of texture. Work is also for the study of N-H...O hydrogen bond in amino acid by observing molecular motions using neutron inelastic scattering. Measurement of neutron inelastic scattering spectrum of L-Serine is completed at 100 0 K and over the energy transfer range of 20-150 meV. (KAERI INIS Section) Spectral distortion due to scattered cold neutrons in beryllium filter Sakamoto, Yukio; Inoue, Kazuhiko Polycrystalline beryllium filters are used to discriminate the cold neutrons from the thermal neutrons with energies above Bragg cut-off energy. The cold neutron scattering cross section is very small, but the remaining cross section is not zero. Then the neutrons scattered once from the filter in the cold neutron energy region have chance of impinging on the outlet of filter. Those neutrons are almost upscattered and develop into thermal neutrons; thus the discriminated cold neutrons include a small spectral distortion due to the thermal neutrons. In the present work we have evaluated the effect on the cold neutron spectrum due to the repeatedly scattered and transmitted neutrons by using a Monte Carlo calculation method. (author) A neutron scattering study of DCN Mackenzie, G.A.; Pawley, G.S. Phonons in deuterium cyanide have been measured by neutron coherent inelastic scattering. The main subject of study was the transverse acoustic mode in the (110) direction polarised along (110) which is associated with the first-order structural phase transition at 160 K. Measurements have shown that the frequency decreases by about 25% between about 225 and 160 K as the transition temperature is approached. The other acoustic modes observable in the a*b* scattering plane have been measured and show no anomalous temperature dependence. Optic modes were unobservable because of the small size of the single-crystal sample which gave insufficient scattered intensity. Apart from the 'soft' mode, the measured frequencies are in good agreement with lattice dynamics calculations. (author) Development of temperature related thermal neutron scattering database for MCNP Mei Longwei; Cai Xiangzhou; Jiang Dazhen; Chen Jingen; Guo Wei Based on ENDF/B-Ⅶ neutron library, the thermal neutron scattering library S(α, β) for molten salt reactor moderators was developed. The temperatures of this library were chose as the characteristic temperature of the molten salt reactor. 
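For orientation, the dimensionless arguments of a thermal scattering law S(α, β) of the kind discussed in the MCNP library abstract above are conventionally defined (ENDF convention) from the incident energy E, secondary energy E', scattering cosine μ, moderator-to-neutron mass ratio A and temperature T; in the common symmetric convention the double-differential cross section then takes the form shown below.

```latex
\alpha = \frac{E' + E - 2\mu\sqrt{E E'}}{A\,k_B T}, \qquad
\beta = \frac{E' - E}{k_B T}, \qquad
\frac{d^2\sigma}{d\Omega\,dE'} = \frac{\sigma_b}{4\pi k_B T}\,
\sqrt{\frac{E'}{E}}\; e^{-\beta/2}\, S(\alpha,\beta)
```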
The cross section of the thermal neutron scattering of ACE format was investigated, and this library was also validated by the benchmarks of ICSBEP. The uncertainties shown in the validation were in reasonable range when compared with the thermal neutron scattering library tmccs which included in the MCNP data library. It was proved that the thermal neutron scattering library processed in this study could be used in the molten salt reactor design. (authors) Neutron scattering applications in structural biology: now and the future Trewhella, J [Los Alamos National Lab., NM (United States) Neutrons have an important role to play in structural biology. Neutron crystallography, small-angle neutron scattering and inelastic neutron scattering techniques all contribute unique information on biomolecular structures. In particular, solution scattering techniques give critical information on the conformations and dispositions of the components of complex assemblies under a wide variety of relevant conditions. The power of these methods is demonstrated here by studies of protein/DNA complexes, and Ca{sup 2+}-binding proteins complexed with their regulatory targets. In addition, we demonstrate the utility of a new structural approach using neutron resonance scattering. The impact of biological neutron scattering to date has been constrained principally by the available fluxes at neutron sources and the true potential of these approaches will only be realized with the development of new more powerful neutron sources. (author) Neutron scattering from adsorbed species Shuwang An Neutron reflection has been used to investigate the structure of layers of water-soluble diblock copolymers poly(2-(dimethylamino)ethyl methacrylate-block-methyl methacrylate (poly(DMAEMA-b-MMA)) (70 mol% DMAEMA, M n = 10k, 80 mol% DMAEMA, M n = 10k, and 70 mol% DMAEMA, M n = 20k) adsorbed at the air-liquid and solid-liquid interfaces. The surface tension behaviour of these copolymers at the air-liquid interface has also been investigated. The study of the structure of layers of poly(DMAEMA-b-MMA) adsorbed at the air-water interface forms the main part of the thesis. The surface structure, the effects of pH and ionic strength, and the effects of composition and molecular weight of the copolymers have been studied systematically. For the 70%-10k copolymer at pH 7.5, the adsorption isotherm shows that there is a surface phase transition. The concentration of copolymer at which the phase transition occurs is close to that at which micellar aggregation in the bulk solution also occurs. At low concentrations (below the CMC), the two blocks of the copolymer are approximately uniformly distributed in the direction normal to the interface and the layer is partially immersed in water. At high concentrations (above the CMC), the adsorbed layer has a cross-sectional structure resembling that expected for a micelle with the majority of the MMA blocks forming the core. The outer layers, comprising predominantly DMAEMA blocks, are not equivalent, being more highly extended on the aqueous side of the interface. The effects of pH and added electrolyte on the structure of layers of the 70%-10k copolymer show that the layered structure is promoted by any changes in the bulk solution that enhance the surface coverage but is inhibited by an increase in the fractional charge on the polyelectrolyte part of the copolymer. The effect of lowering the pH is to increase the positive charge on the weak polyelectrolyte block. 
Addition of electrolyte generally enhances the amount adsorbed and Neutron scattering on equilibrium and nonequilibrium phonons, excitons and polaritons Broude, V.L.; Sheka, E.F. A number of problems of solid-state physics representing interest for neutron spectroscopy of future is considered. The development of the neutron inelastic scattering spectroscopy (neutron spectroscopy of equilibrium phonons) is discussed with application to nuclear dynamics of crystals in the thermodynamic equilibrium. The results of high-flux neutron source experiments on molecular crystals are presented. The advantages of neutron inelastic scattering over optical spectroscopy are discussed. The spectroscopy of quasi-equilibrium and non-equilibrium quasi-particles is discussed. In particular, the neutron scattering on polaritons, excitons in thermal equilibrium and production of light-excitons are considered. The problem of the possibility of such experiments is elucidated Design of the Next Generation Target at the Lujan Neutron Scattering Center, LANSCE Ferres, Laurent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); National Graduate School of Engineering and Research Center (ENSICAEN), Caen (France) Los Alamos National Laboratory (LANL) supports scientific research in many diverse fields such as biology, chemistry, and nuclear science. The Laboratory was established in 1943 during the Second World War to develop nuclear weapons. Today, LANL is one of the largest laboratories dedicated to nuclear defense and operates an 800 MeV proton linear accelerator for basic and applied research including: production of high- and low-energy neutrons beams, isotope production for medical applications and proton radiography. This accelerator is located at the Los Alamos Neutron Science Center (LANSCE). The work performed involved the redesign of the target for the low-energy neutron source at the Lujan Neutron Scattering Center, which is one of the facilities built around the accelerator. The redesign of the target involves modeling various arrangements of the moderator-reflector-shield for the next generation neutron production target. This is done using Monte Carlo N-Particle eXtended (MCNPX), and ROOT analysis framework, a C++ based-software, to analyze the results. Neutron scattering treatise on materials science and technology Kostorz, G Treatise on Materials Science and Technology, Volume 15: Neutron Scattering shows how neutron scattering methods can be used to obtain important information on materials. The book discusses the general principles of neutron scattering; the techniques used in neutron crystallography; and the applications of nuclear and magnetic scattering. The text also describes the measurement of phonons, their role in phase transformations, and their behavior in the presence of crystal defects; and quasi-elastic scattering, with its special merits in the study of microscopic dynamical phenomena in solids and Polarized neutron inelastic scattering experiments on spin dynamics Kakurai, Kazuhisa The principles of polarized neutron scattering are introduced and examples of polarized neutron inelastic scattering experiments on spin dynamics investigation are presented. These examples should demonstrate the importance of the polarized neutron utilization for the investigation of non-trivial magnetic ground and excited states in frustrated and low dimensional quantum spin systems. 
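Several of the surrounding abstracts (polarized inelastic scattering, polarized reflectometry, polarization analysis) quote beam polarizations and flipping ratios. The elementary bookkeeping, shown here with made-up counts, is simply:

```python
def polarization(n_up, n_down):
    """P = (N+ - N-) / (N+ + N-) from counts in the two spin states."""
    return (n_up - n_down) / (n_up + n_down)

def flipping_ratio(n_up, n_down):
    """R = N+ / N-, related to the polarization by P = (R - 1) / (R + 1)."""
    return n_up / n_down

print(polarization(9500, 500))    # 0.9
print(flipping_ratio(9500, 500))  # 19.0
```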
(author) Neutron-scattering studies of chromatin Bradbury, E.M.; Baldwin, J.P.; Carpenter, B.G.; Hjelm, R.P.; Hancock, R.; Ibel, K. It is clear that a knowledge of the basic molecular structure of chromatin is a prerequisite for any progress toward an understanding of chromosome organization. With a two-component system, protein and nucleic acid, neutrons have a particularly powerful application to studies of the spatial arrangements of these components because of the ability, by contrast matching with H 2 O-D 2 O mixtures, to obtain neutron-scattering data on the individual components. With this approach it has been shown that the neutron diffraction of chromatin is consistent with a ''beads on a string'' model in which the bead consists of a protein core with DNA coiled on the outside. However, because chromatin is a gel and gives limited structural data, confirmation of such a model requires extension of the neutron studies by deuteration of specific chromatin components and the isolation of chromatin subunits. Although these studies are not complete, the neutron results so far obtained support the subunit model described above Scattering of neutrons and critical phenomena in antiferromagnetic fermi liquid Akhiezer, I.A.; Barannik, E.A. The scattering of slow neutrons in an antiferromagnetic with collectivized magnetic electrons is considered and it is shown to significantly differ from the neutron scattering in an antiferromagnetic with localized magnetic electrons. The behaviour of scattering cross sections and fluctuation correlators near the Neel point is studied. These magnitudes are shown to increase with the critical index r=-1 [ru Quantum effects in deep inelastic neutron scattering In the Impulse Approximation (IA), which is used to interpret deep inelastic neutron scattering (DINS) measurements, it is assumed both that the target system can be treated as a gas of free atoms and that the struck atom recoils freely after the collision with the neutron. Departures from the IA are generally attributed to final state effects (FSE), which are due to the inaccuracy of the latter assumption. However it is shown that even when FSE are neglected, significant departures from the IA occur at low temperatures due to inaccuracies in the former assumption. These are referred to as initial state effects (ISE) and are due to the quantum nature of the initial state. Comparison with experimental data and exactly soluble models shows that ISE largely account for observed asymmetries and peak shifts in the neutron scattering function S(q,ω), compared with the IA prediction. It is shown that when FSE are neglected, ISE can also be neglected when either the momentum transfer or the temperature is high. Finally it is shown that FSE should be negligible at high momentum transfers in systems other than quantum fluids and that therefore in this regime the IA is reached in such systems. (author) Probing fine magnetic particles with neutron scattering Pynn, R. Because thermal neutrons are scattered both by nuclei and by unpaired electrons, they provide an ideal probe for studying the atomic and magnetic structures of fine-grained magnetic materials, including nanocrystalline solids, thin epitaxial layers, and colloidal suspensions of magnetic particles, known as ferrofluids. Diffraction, surface reflection, and small angle neutron scattering (SANS) are the techniques used. With the exception of surface reflection, these methods are described in this article. 
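The chromatin work above relies on contrast matching with H2O-D2O mixtures. The arithmetic behind it is a linear interpolation of the solvent scattering length density (SLD) with D2O fraction; the bound scattering lengths and the water molecular volume below are standard tabulated values quoted from memory, so the outputs should be read as approximate.

```python
B_H, B_D, B_O = -3.74, 6.67, 5.80   # bound coherent scattering lengths, fm (approximate)
V_WATER = 30.0                      # molecular volume of water, A^3 (approximate)

def solvent_sld(d2o_fraction):
    """SLD of an H2O/D2O mixture in units of 1e-6 A^-2 (ideal volume mixing)."""
    b = 2 * ((1 - d2o_fraction) * B_H + d2o_fraction * B_D) + B_O  # fm per molecule
    return b * 1e-5 / V_WATER * 1e6                                # fm/A^3 -> 1e-6 A^-2

def match_point(component_sld_e6):
    """D2O volume fraction at which the solvent SLD equals the component SLD."""
    lo, hi = solvent_sld(0.0), solvent_sld(1.0)
    return (component_sld_e6 - lo) / (hi - lo)

print(solvent_sld(0.0), solvent_sld(1.0))  # roughly -0.56 and 6.4 (1e-6 A^-2)
print(match_point(3.0))                    # e.g. for a component SLD of 3e-6 A^-2
```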
The combination of SANS with refractive-index matching and neutron polarisation analysis is particularly powerful because it allows the magnetic and atomic structures to be determined independently. This technique has been used to study both dilute and concentrated ferrofluid suspensions of relatively monodisperse cobalt particles, subjected to a series of applied magnetic fields. The size of the cobalt particle core and the surrounding surfactant layer were determined. The measured interparticle structure factor agrees well with a recent theory that allows correlations in binary mixtures of magnetic particles to be calculated in the case of complete magnetic alignment. When one of the species in such a binary mixture is a nonmagnetic, cyclindrical macromolecule, application of a magnetic field leads to some degree of alignment of the nonmagnetic species. This result has been demonstrated with tobacco mosaic virus suspended in a water-based ferrofluid Incoherent quasielastic neutron scattering from plastic crystals Bee, M.; Amoureux, J.P. The aim of this paper is to present some applications of a method indicated by Sears in order to correct for multiple scattering. The calculations were performed in the particular case of slow neutron incoherent quasielastic scattering from organic plastic crystals. First, an exact calculation (up to second scattering) is compared with the results of a Monte Carlo simulation technique. Then, an approximation is developed on the basis of a rotational jump model which allows a further analytical treatment. The multiple scattering is expressed in terms of generalized structure factors (which can be regarded as self convolutions of first order structure factors taking into account the instrumental geometry) and lorentzian functions the widths of which are linear combinations of the jump rates. Three examples are given. Two of them correspond to powder samples while in the third we are concerned with the case of a single crystalline slab. In every case, this approximation is shown to be a good approach to the multiple scattering evaluation, its main advantage being the possibility of applying it without any preliminary knowledge of the correlation times for rotational jumps. (author) Neutron Scattering Differential Cross Sections for 12C Byrd, Stephen T.; Hicks, S. F.; Nickel, M. T.; Block, S. G.; Peters, E. E.; Ramirez, A. P. D.; Mukhopadhyay, S.; McEllistrem, M. T.; Yates, S. W.; Vanhoy, J. R. Because of the prevalence of its use in the nuclear energy industry and for our overall understanding of the interactions of neutrons with matter, accurately determining the effects of fast neutrons scattering from 12C is important. Previously measured 12C inelastic neutron scattering differential cross sections found in the National Nuclear Data Center (NNDC) show significant discrepancies (>30%). Seeking to resolve these discrepancies, neutron inelastic and elastic scattering differential cross sections for 12C were measured at the University of Kentucky Acceleratory Laboratory for incident neutron energies of 5.58, 5.83, and 6.04 MeV. Quasi mono-energetic neutrons were scattered off an enriched 12C target (>99.99%) and detected by a C6D6 liquid scintillation detector. Time-of-flight (TOF) techniques were used to determine scattered neutron energies and allowed for elastic/inelastic scattering distinction. 
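The 12C measurement above extracts scattered-neutron energies from time of flight. The conversion is elementary non-relativistic kinematics; the 4 m flight path in the example is an assumed number, not the actual geometry of the experiment.

```python
M_N = 1.674927e-27       # neutron mass, kg
J_PER_EV = 1.602177e-19  # joules per electron-volt

def tof_to_energy_mev(flight_path_m, tof_ns):
    """Neutron kinetic energy (MeV) from flight path and time of flight."""
    v = flight_path_m / (tof_ns * 1e-9)      # m/s
    return 0.5 * M_N * v**2 / J_PER_EV / 1e6

# A 6 MeV neutron covers an assumed 4 m flight path in about 118 ns:
print(f"{tof_to_energy_mev(4.0, 118.0):.2f} MeV")
```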
Relative detector efficiencies were determined through direct measurements of neutrons produced by the 2H(d,n) and 3H(p,n) source reactions, and absolute normalization factors were found by comparing 1H scattering measurements to accepted NNDC values. This experimental procedure has been successfully used for prior neutron scattering measurements and seems well-suited to our current objective. Significant challenges were encountered, however, with measuring the neutron detector efficiency over the broad incident neutron energy range required for these measurements. Funding for this research was provided by the National Nuclear Security Administration (NNSA). Polarized neutron scattering on HYSPEC: the HYbrid SPECtrometer at SNS Zaliznyak, Igor [Brookhaven National Laboratory (BNL); Savici, Andrei T [ORNL; Garlea, Vasile O [ORNL; Winn, Barry L [ORNL; Schneelock, John [Brookhaven National Laboratory (BNL); Tranquada, John M. [Brookhaven National Laboratory (BNL); Gu, G. D. [Brookhaven National Laboratory (BNL); Wang, Aifeng [Brookhaven National Laboratory (BNL); Petrovic, C [Brookhaven National Laboratory (BNL) We describe some of the first polarized neutron scattering measurements performed at HYSPEC spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory. We discuss details of the instrument setup and the experimental procedures in the mode with the full polarization analysis. Examples of the polarized neutron diffraction and the polarized inelastic neutron data obtained on single crystal samples are presented. Scatterings and reactions by means of polarized neutron beam Koori, N. A high resolution polarized neutron beam should be prepared for nuclear physics, which will be planned with the new ring cyclotron at RCNP. Studies on scatterings and reactions by means of polarized neutron beams are reviewed briefly. Beam lines for polarized neutrons are summarized. An example of high resolution measurements of neutron induced reactions is described. (author) Zaliznyak, Igor A; Savici, Andrei T.; Garlea, V. Ovidiu; Winn, Barry; Filges, Uwe; Schneeloch, John; Tranquada, John M.; Gu, Genda; Wang, Aifeng; Petrovic, Cedomir We describe some of the first polarized neutron scattering measurements performed at HYSPEC spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory. We discuss details of the instrument setup and the experimental procedures in the mode with full polarization analysis. Examples of polarized neutron diffraction and polarized inelastic neutron data obtained on single crystal samples are presented. Annual report on neutron scattering studies in JAERI Sato, Masatoshi; Nishi, Masakazu; Fujishita, Hideshi; Iizumi, Masashi Neutron scattering studies carried out from September 1979 to August 1981 by Division of Physics, JAERI, and universities with JRR-2 and -3 neutron beam facilities are described: 61 summary reports, and a list of publications. (author) Study of scattering in bi-dimensional neutron radiographic images Oliveira, K.A.M. de; Crispim, V.R.; Silva, F.C. The effect of neutron scattering frequently causes distortions in neutron radiographic images and, thus, reduces the quality. In this project, a type of filter, comprised of cadmium (a neutron absorber), was used in the form of a grid to correct this effect. This device generated image data in the discrete shadow bands of the absorber, components relative to neutron scattering on the test object and surroundings. 
Study of scattering in bi-dimensional neutron radiographic images
Oliveira, K.A.M. de; Crispim, V.R.; Silva, F.C.
The effect of neutron scattering frequently causes distortions in neutron radiographic images and, thus, reduces their quality. In this project, a type of filter, comprised of cadmium (a neutron absorber), was used in the form of a grid to correct this effect. This device generated image data in the discrete shadow bands of the absorber, i.e., components relative to neutron scattering from the test object and its surroundings. Scattering image data processing, together with the original neutron radiographic image, resulted in a corrected image with improved edge delineation and, thus, greater definition in the neutron radiographic image of the test object. The objective of this study is to propose a theoretical/experimental methodology that is capable of eliminating the components relative to neutron scattering in neutron radiographic images, coming from the material that composes the test object and the materials that compose the surrounding area. (author)

Neutron scattering studies of the actinides
Lander, G.H.
The electronic structure of actinide materials presents a unique example of the interplay between localized and band electrons. Together with a variety of other techniques, especially magnetization and the Mossbauer effect, neutron studies have helped us to understand the systematics of many actinide compounds that order magnetically. A direct consequence of the localization of 5f electrons is the spin-orbit coupling and subsequent spin-lattice interaction that often leads to strongly anisotropic behavior. The unusual phase transition in UO2, for example, arises from interactions between quadrupole moments. On the other hand, in the monopnictides and monochalcogenides, the anisotropy is more difficult to understand, but probably involves an interaction between actinide and anion wave functions. A variety of neutron experiments, including form-factor studies, critical scattering and measurements of the elementary excitations, have now been performed, and the conceptual picture emerging from these studies will be discussed.

Los Alamos pulsed spallation neutron source target systems - present and future
Russell, G.J.; Daemen, L.L.; Pitcher, E.J.; Brun, T.O.; Hjelm, R.P. Jr.
For the past 16 yr, spallation target-system designers have devoted much time and effort to the design and optimization of pulsed spallation neutron sources. Many concepts have been proposed but, in practice, only one had been implemented, namely horizontal beam insertion with moderators in wing geometry, until we introduced the innovative split-target/flux-trap-moderator design with a composite reflector shield at the Manuel Lujan, Jr., Neutron Scattering Center (LANSCE). The LANSCE target system design is now considered a classic by spallation target system designers worldwide. LANSCE, a state-of-the-art pulsed spallation neutron source for materials science and nuclear physics research, uses 800-MeV protons from the Clinton P. Anderson Meson Physics Facility. These protons are fed into the proton storage ring to be compressed to 250-ns pulses before being delivered to LANSCE at 20 Hz. LANSCE produces the highest peak neutron flux of any pulsed spallation neutron source in the world.

Molecular dynamics using quasielastic neutron scattering
Mitra, S
The quasielastic neutron scattering (QENS) technique is well suited to studying molecular motions (rotations and translations) in solids or liquids. It offers a unique possibility of analysing the spatial dimensions of atomic or molecular processes in their development over time. We describe here some of the systems studied using the QENS spectrometer designed, developed and commissioned at the Dhruva reactor in Trombay. We have studied a variety of systems to investigate molecular motion, for example, simple molecular solids, molecules adsorbed in confined media such as porous systems or zeolites, monolayer-protected nano-sized metal clusters, water in Portland cement as it cures with time, etc. (author)
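Both quasielastic entries above (the multiple-scattering correction for plastic crystals and the QENS work at the Dhruva reactor) describe spectra in terms of an elastic component plus Lorentzian quasielastic broadening whose width is set by the rotational or translational jump rates. A generic, hypothetical model function of that kind (ignoring instrument resolution, which real analyses convolve in) might look like:

import numpy as np

def qens_model(omega, eisf, gamma, scale=1.0):
    """Toy quasielastic model: an idealized elastic line weighted by the EISF plus a
    Lorentzian of half-width gamma; omega and gamma share the same energy units."""
    lorentzian = (gamma / np.pi) / (omega ** 2 + gamma ** 2)
    elastic = np.where(np.isclose(omega, 0.0), 1.0, 0.0)   # delta-function stand-in
    return scale * (eisf * elastic + (1.0 - eisf) * lorentzian)

omega = np.linspace(-2.0, 2.0, 401)                # energy-transfer grid (meV), illustrative
spectrum = qens_model(omega, eisf=0.6, gamma=0.1)  # gamma would be a combination of jump rates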
Neutron scattering on partially deuterated polybutadiene
Kahle, S; Monkenbusch, M; Richter, D; Arbe, A; Colmenero, J; Frick, B
The molecular nature of the secondary relaxation (Johari-Goldstein relaxation) and its relationship with the alpha relaxation is in most cases still unknown. In order to access these processes on a molecular level, it is necessary to obtain spatial information on the relaxation. Through the momentum-transfer dependence of the dynamic structure factor S(Q,t), this information can be provided by quasielastic neutron scattering techniques. The large difference in scattering lengths between hydrogen and deuterium allows us to accentuate specific correlations between atoms in a polymer melt. Here, we report on recent results on a polybutadiene melt, where the double bond was hydrogenous, while the methylene groups carried deuterons (d4h2-PB). In this way the correlations between the double bonds are emphasised. We will show that the double bond/double bond correlation function, generated in this way, shows the same temperature dependence as the viscosity at higher temperatures at the structure factor peak maximum...

LOS ALAMOS NEUTRON SCIENCE CENTER CONTRIBUTIONS TO THE DEVELOPMENT OF FUTURE POWER REACTORS
GAVRON, VICTOR I. [Los Alamos National Laboratory]; HILL, TONY S. [Los Alamos National Laboratory]; PITCHER, ERIC J. [Los Alamos National Laboratory]; TOVESSON, FREDERIK K. [Los Alamos National Laboratory]
The Los Alamos Neutron Science Center (LANSCE) is a large spallation neutron complex centered around an 800 MeV high-current proton accelerator. Existing facilities include a highly moderated neutron facility (Lujan Center), where neutrons between thermal and keV energies are produced, and the Weapons Neutron Research Center (WNR), where a bare spallation target produces neutrons between 0.1 and several hundred MeV. The LANSCE facility offers a unique capability to provide high precision nuclear data over a large energy region, including that for fast reactor systems. In an ongoing experimental program the fission and capture cross sections are being measured for a number of minor actinides relevant for Generation-IV reactors and transmutation technology. Fission experiments make use of both the highly moderated spallation neutron spectrum at the Lujan Center and the unmoderated high energy spectrum at WNR. By combining measurements at these two facilities the differential fission cross section is measured relative to the 235U(n,f) standard from subthermal energies up to about 200 MeV. An elaborate data acquisition system is designed to deal with all the different types of background present when spanning 10 energy decades. The first isotope to be measured was 237Np, and the results were used to improve the current ENDF/B-VII evaluation. Partial results have also been obtained for 240Pu and 242Pu, and the final results are expected shortly. Capture cross sections are measured at LANSCE using the Detector for Advanced Neutron Capture Experiments (DANCE). This unique instrument is highly efficient in detecting radiative capture events, and can thus handle radioactive samples with half-lives as low as 100 years. A number of capture cross sections important to fast reactor applications have been measured with DANCE. The first measurement was on 237Np(n,γ), and the results have been submitted for publication. Other capture ...
Small-angle neutron scattering technique in liquid crystal studies
Shahidan Radiman
The following topics are discussed: general principles of SAS (small-angle neutron scattering), liquid crystals, nanoparticle templating on liquid crystals, examples of SAS results, and prospects of these studies.

Significance of collective motions in biopolymers and neutron scattering
Go, Nobuhiro [Kyoto Univ. (Japan)]
The importance of a collective-variable description of the conformational dynamics of biopolymers and the vital role that neutron inelastic scattering would play in its experimental determination are discussed. (author)

Dynamic properties of electrons in solids by neutron scattering
Illustrative cases of the use of neutron scattering in the study of the electronic properties of materials discussed here include scattering by localised electrons, narrow band materials and electron plasmas. (U.K.)

Magnetic scattering of neutrons by atoms
Stassis, C.; Deckman, H.W.
The magnetic scattering of neutrons by an atom or ion possessing both a spin and orbital magnetic moment is examined. For an atom in the l^n electronic configuration the magnetic scattering amplitude is determined by matrix elements of even-order electric and odd-order magnetic multipoles, whose order of multipolarity k is less than or equal to 2l + 1. The calculation of the matrix elements of these multipoles is separated into evaluating radial matrix elements and matrix elements of the Racah tensors W^(0,k) and W^(1,k'), where k is an even integer less than or equal to 2l. The calculation of the matrix elements of these tensors is considerably simplified by selection rules based on the groups Sp(4l + 2), R(2l + 1), R(3) and, in the case of f electrons, the special group G2. It is shown that, in the case of elastic scattering by an atom or an ion whose state is a single Russell-Saunders state, the magnetic scattering amplitude can be written in the conventional form p(q) q_m·σ. General expressions for the amplitude p(q) as well as the elastic magnetic form factor are obtained. The evaluation of the coherent magnetic scattering amplitude by an atom in a magnetic field is discussed, and the small-q approximation to the elastic magnetic scattering is considered. The formalism is illustrated for the important case of d and f electrons. The generalization of the formalism to the case of mixed atomic configurations is examined in some detail. (author)

Symmetry effects in neutron scattering from isotopically enriched Se isotopes
Lachkar, J.; Haouat, G.; McEllistrem, M. T.; Patin, Y.; Sigaud, J.; Cocu, F.
Differential cross sections for neutron elastic and inelastic scattering from 76Se, 78Se, 80Se and 82Se have been measured at 8-MeV incident neutron energy, and from 76Se and 82Se at 6- and 10-MeV incident energies. The differences observed in the elastic scattering cross sections are interpretable as the effect of the isospin term in the scattering potentials. A full analysis of the elastic scattering data is presented.
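For the small-angle scattering entries above (the ferrofluid study and the liquid-crystal SAS overview), the usual starting point for modelling is the form factor of a homogeneous sphere, P(Q) = [3(sin QR - QR cos QR)/(QR)^3]^2. A short sketch with an arbitrary, purely illustrative radius:

import numpy as np

def sphere_form_factor(q, radius):
    """Form factor P(Q) of a homogeneous sphere of the given radius."""
    x = np.where(q * radius == 0.0, 1e-12, q * radius)   # avoid 0/0 at Q = 0
    amplitude = 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3
    return amplitude ** 2

q = np.linspace(1e-3, 0.5, 500)              # Q range in inverse angstroms, illustrative
p_q = sphere_form_factor(q, radius=50.0)     # hypothetical 50-angstrom particle

Polydispersity, interparticle structure factors and, for the magnetic case, field-dependent magnetic contrast would all be layered on top of this in a real analysis.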
Magnetic Dynamics of Fine Particles Studied by Inelastic Neutron Scattering
Hansen, Mikkel Fougt; Bødker, Franz; Mørup, Steen
We give an introduction to inelastic neutron scattering and the dynamic scattering function for magnetic nanoparticles. Differences between ferromagnetic and antiferromagnetic nanoparticles are discussed, and we give a review of recent results on ferromagnetic Fe nanoparticles and canted antiferromagnetic nanoparticles.

Current status and future development of neutron scattering in CIAE
Chen, D.F.; Gou, C.; Ye, C.T.; Guo, L.P.; Sun, K.
Currently, the 15 MW Heavy Water Research Reactor (HWRR) at the China Institute of Atomic Energy (CIAE) in Beijing is the only neutron source available for neutron scattering experiments in China. A 60 MW tank-in-pool inverse neutron trap-type research reactor, the China Advanced Research Reactor (CARR), is now being built at CIAE to meet the increasing demand of neutron scattering research in China. According to the design, the maximum unperturbed thermal neutron flux is expected to be 8×10^14 n/cm^2·s in the reflector region. Seven out of nine tangential horizontal beam tubes will be dedicated to neutron scattering experiments. A cold source, a hot source and a 30×60 m^2 guide tube hall will also be constructed. In this paper, a brief introduction of the HWRR, the existing neutron scattering facilities and research activities at the HWRR, CARR, and the facilities to be built at CARR are presented. (author)

Soller collimators for small angle neutron scattering
Crawford, R.K.; Epperson, J.E.; Thiyagarajan, P.
The neutron beam transmitted through the Soller collimators on the SAD (Small Angle Diffractometer) instrument at IPNS (Intense Pulsed Neutron Source) showed wings about the main beam. These wings were quite weak, but were sufficient to interfere with the low-Q scattering data. General considerations of the theory of reflection from homogeneous absorbing media, combined with the results from a Monte Carlo simulation, suggested that these wings were due to specular reflection of neutrons from the absorbing material on the surfaces of the collimator blades. The simulations showed that roughness of the surface was extremely important, with wing background variations of three orders of magnitude being observed over the range of roughness values used in the simulations. Based on the results of these simulations, new collimators for SAD were produced with a much rougher 10B-binder surface coating on the blades. These new collimators were determined to be significantly better than the original SAD collimators. This work suggests that any Soller collimators designed for use with long wavelengths should be fabricated with such a rough surface coating, in order to eliminate (or at least minimize) the undesirable reflection effects which otherwise seem certain to occur. 4 refs., 6 figs

The TUNL neutron-neutron scattering length experiment
Trotter, D.E.G.; Tornow, W.; Howell, C.R.
Since an accurate value for the neutron-neutron (nn) scattering length a_nn is of fundamental interest, its determination should not rely on one source of experimental information only. Besides the πd capture reaction, the nd breakup reaction has been the classical reaction used for determining a_nn. However, none of the published values for a_nn obtained from kinematically complete nd → n+n+p breakup data are based on a rigorous treatment of the three-nucleon continuum.
In addition, the scale uncertainty associated with the existing nd breakup cross-section data in the region of the nn final-state interaction peak is too large to allow for a meaningful reanalysis. Therefore, a new kinematically complete nd breakup experiment is underway at TUNL at an incident neutron energy of 13 MeV. State-of-the-art three-nucleon continuum calculations will be used to analyze the data. In order to investigate the possible influence of three-nucleon force effects, a_nn will be determined from data taken at four production angles of the nn pair between 20.5 degrees and 43 degrees (lab).

A proposal for a long-pulse spallation source at Los Alamos National Laboratory
Pynn, R.; Weinacht, D.
Los Alamos National Laboratory is proposing a new spallation neutron source that will provide the US with an internationally competitive facility for neutron science and technology that can be built in approximately three years for less than $100 million. The establishment of a 1-MW, long-pulse spallation source (LPSS) at the Los Alamos Neutron Science Center (LANSCE) will meet many of the present needs of scientists in the neutron scattering community and provide a significant boost to neutron research in the US. The new facility will support the development of a future, more intense spallation neutron source that is planned by DOE's Office of Energy Research. Together with the existing short-pulse spallation source (SPSS) at the Manuel Lujan, Jr. Neutron Scattering Center (MLNSC) at Los Alamos, the new LPSS will provide US scientists with a complementary pair of high-performance neutron sources to rival the world's leading facilities in Europe. (author)

Non-destructive diagnostics of irradiated materials using neutron scattering from pulsed neutron sources
Korenev, Sergey; Sikolenko, Vadim
The advantage of neutron-scattering studies as compared to the standard X-ray technique is the high penetration of neutrons, which allows us to study volume effects. The high resolution of instrumentation based on neutron scattering allows measurement of the parameters of the lattice structure with high precision. We suggest the use of neutron scattering from pulsed neutron sources for analysis of materials irradiated with pulsed high-current electron and ion beams. The results of preliminary tests using this method for Ni foils that have been studied by neutron diffraction at the IBR-2 (Pulsed Fast Reactor at the Joint Institute for Nuclear Research) are presented.
Next generation neutron scattering at the Neutron Science Center project in JAERI
Yamada, Yasusada; Watanabe, Noboru; Niimura, Nobuo; Morii, Yukio; Katano, Susumu; Aizawa, Kazuya; Suzuki, Jun-ichi; Koizumi, Satoshi; Osakabe, Toyotaka
The Japan Atomic Energy Research Institute (JAERI) has promoted neutron scattering research by means of research reactors at the Tokai Research Establishment, and proposes a 'Neutron Science Research Center' to develop the future prospects of the Tokai Research Establishment. The scientific fields that are expected to progress through the neutron scattering experiments carried out at the proposed facility in the Center are surveyed. (author)

Fission Product Data Measured at Los Alamos for Fission Spectrum and Thermal Neutrons on 239Pu, 235U, 238U
Selby, H.D.; Mac Innes, M.R.; Barr, D.W.; Keksis, A.L.; Meade, R.A.; Burns, C.J.; Chadwick, M.B.; Wallstrom, T.C.
We describe measurements of fission product data at Los Alamos that are important for determining the number of fissions that have occurred when neutrons are incident on plutonium and uranium isotopes. The fission-spectrum measurements were made using a fission chamber designed by the National Institute of Standards and Technology (NIST) in the BIG TEN critical assembly, as part of the Inter-laboratory Liquid Metal Fast Breeder Reactor (LMFBR) Reaction Rate (ILRR) collaboration. The thermal measurements were made at Los Alamos' Omega West Reactor. A related set of measurements was made of fission-product ratios (so-called R-values) in neutron environments provided by a number of Los Alamos critical assemblies that range from average energies causing fission of 400-600 keV (BIG TEN and the outer regions of the Flattop-25 assembly) to higher energies (1.4-1.9 MeV) in the Jezebel critical assembly and in the central regions of the Flattop-25 and Flattop-Pu assemblies. From these data we determine ratios of fission product yields in different fuel and neutron environments (Q-values) and fission product yields in fission-spectrum neutron environments for 99Mo, 95Zr, 137Cs, 140Ba, 141,143Ce, and 147Nd. Modest incident-energy dependence exists for the 147Nd fission product yield; this is discussed in the context of models for fission that include thermal and dynamical effects. The fission product data agree with measurements by Maeck and other authors using mass-spectrometry methods, and with the ILRR collaboration results that used gamma spectroscopy for quantifying fission products. We note that the measurements also contradict earlier 1950s historical Los Alamos estimates by ∼5-7%, most likely owing to self-shielding corrections not made in the early thermal measurements. Our experimental results provide a confirmation of the England-Rider ENDF/B-VI evaluated fission-spectrum fission product yields that were carried over to the ENDF/B-VII.0 library, except ...

Time gating for energy selection and scatter rejection: High-energy pulsed neutron imaging at LANSCE
Swift, Alicia; Schirato, Richard; McKigney, Edward; Hunter, James; Temple, Brian
The Los Alamos Neutron Science Center (LANSCE) is a linear accelerator in Los Alamos, New Mexico that accelerates a proton beam to 800 MeV, which then produces spallation neutron beams. Flight path FP15R uses a tungsten target to generate neutrons of energy ranging from several hundred keV to ~600 MeV. The beam structure has micropulses of sub-ns width and period of 1.784 ns, and macropulses of 625 μs width and frequency of either 50 Hz or 100 Hz.
This corresponds to 347 micropulses per macropulse, or 1.74 × 10^4 micropulses per second when operating at 50 Hz. Using a very fast, cooled ICCD camera (Princeton Instruments PI-Max 4), gated images of various objects were obtained on FP15R in January 2015. Objects imaged included blocks of lead and borated polyethylene; a tungsten sphere; and a tungsten, polyethylene, and steel cylinder. Images were obtained in 36 min or less, with some in as little as 6 min. This is novel because the gate widths (some as narrow as 10 ns) were selected to reject scatter and other signal not of interest (e.g. the gamma flash that precedes the neutron pulse), which has not been demonstrated at energies above 14 MeV. This proof-of-principle experiment shows that time gating is possible above 14 MeV and is useful for selecting neutron energy and reducing scatter, thus forming clearer images. Future work (simulation and experimental) is being undertaken to improve camera shielding and system design and to precisely determine the optical properties of the imaging system.

WINES: water inelastic neutron scattering experimental study
Risch, P.; Ait Abderrahim, H.; D'hondt, P.; Malabu, E.
An intercomparison of the calculated fast neutron flux (E > 1 MeV) traverse through a very thick water zone, obtained using both S_N (DORT) and Monte Carlo (TRIPOLI and MCBEND) codes in combination with different cross-section libraries (based on ENDF/B-III, IV, V and VI), showed small discrepancies either between S_N and Monte Carlo results or between S_N or Monte Carlo results with different cross-section libraries, except for the S_N calculation when using P0 cross sections. In order to validate our calculations we looked for experimental data. Unfortunately no experiment dedicated to fast neutron transport through large thicknesses of water was found in the literature. Therefore SCK-CEN and EDF decided to launch the WINES experiment, which is dedicated to studying this phenomenon. WINES stands for Water Inelastic Neutron scattering Experimental Study. The aim of this experiment is to provide experimental data for validation of the neutron transport codes and nuclear cross-section libraries used for LWR surveillance dosimetry analysis. The experimental device is made of a 1 m^3 cubic plexiglass container filled with demineralized water. At one face of this cube, a 235U neutron fission source system is screwed. The source device is made of a 235U (93% enriched by weight) 18.55 × 16 cm^2 plate clad with aluminium, which is inserted in a neutron beam emerging from the graphite gas-cooled BR1 reactor. Fission chambers (238U(n,f), 232Th(n,f), 237Np(n,f) and 235U(n,f)) are used to measure the flux traverses on the central axis of the water cube perpendicular to the fission sources. In this paper we will compare the experimental data to the calculated results using the S_N transport code DORT with the P3 ELXSIR library, based on ENDF/B-V, and the P7 BUGLE-93 library, based on ENDF/B-VI, as well as the Monte Carlo transport code TRIPOLI with a cross-section library based on ENDF/B-IV and ENDF/B-VI. (authors)

Inelastic neutron scattering and lattice dynamics of minerals
We review current research on minerals using inelastic neutron scattering and lattice dynamics calculations. Inelastic neutron scattering studies in combination with first principles and atomistic calculations provide a detailed understanding of the phonon dispersion relations, density of states and their manifestations in ...
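The pulse-structure figures quoted in the LANSCE imaging entry above can be cross-checked directly: 347 micropulses per 625 μs macropulse implies a micropulse spacing of about 1.8 μs, and at 50 macropulses per second this gives approximately 1.74 × 10^4 micropulses per second, as stated. A trivial check in Python:

micropulses_per_macropulse = 347
macropulse_width_us = 625.0
macropulse_rate_hz = 50

spacing_us = macropulse_width_us / micropulses_per_macropulse
micropulses_per_second = micropulses_per_macropulse * macropulse_rate_hz

print(f"implied micropulse spacing: {spacing_us:.2f} us")          # about 1.80 us
print(f"micropulses per second:     {micropulses_per_second:.2e}")  # about 1.74e+04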
Ten years' activity in the field of neutron scattering workshop
Hamaguchi, Yoshikazu
'Neutron scattering' is within the framework of the 'Utilization of Research Reactors' activity of the FNCA (Forum for Nuclear Cooperation in Asia) project, which has held workshops since FY 1992. This report is a summary of the results and activities of the neutron scattering workshops and sub-workshops since the start in FY 1992. (author)

Progress in small angle neutron scattering activities in Malaysia
Mohamed, Abdul Aziz Bin [Industrial Technology Division, Malaysian Institute for Nuclear Technology Research (MINT), Malaysia]
Research activities using small-angle neutron scattering in Malaysia are briefly reported. Scattered neutron data are displayed in two- or three-dimensional isometric views by the data acquisition system. Visual Basic is utilized for data acquisition and MathCad for data processing and analyses. (Y. Kazumata)

Review of the Lujan neutron scattering center: basic energy sciences prereport February 2009
The Lujan Neutron Scattering Center (Lujan Center) at LANSCE is a designated National User Facility for neutron scattering and nuclear physics studies with pulsed beams of moderated neutrons (cold, thermal, and epithermal). As one of five experimental areas at the Los Alamos Neutron Science Center (LANSCE), the Lujan Center hosts engineers, scientists, and students from around the world. The Lujan Center consists of Experimental Room 1 (ER1), built by the Laboratory in 1977, ER2, built by the Office of Basic Energy Sciences (BES) in 1989, and the Office Building (622), also built by BES in 1989, along with a chem-bio lab, a shop, and other out-buildings. According to a 1996 Memorandum of Agreement (MOA) between the Defense Programs (DP) Office of the National Nuclear Security Administration (NNSA) and the Office of Science (SC, then the Office of Energy Research), the Lujan Center flight paths were transferred from DP to SC, including those in ER1. That MOA was updated in 2001. Under the MOA, NNSA-DP delivers neutron beam to the windows of the target crypt, outside of which BES becomes the 'landlord'. The Lujan Center leverages the LANSCE accelerator substantially: the $11 M annual BES operating fund is leveraged by the approximately $56 M operating cost of the linear accelerator (linac) in beam delivery.

The contribution of neutron scattering to molecular biology
Stuhrmann, H.B.
About half of the atoms of living cells are hydrogens, and nearly all biological applications of neutron scattering rely on the well-known difference in the scattering lengths of the proton and the deuteron. This introduces us to a wide variety of biological problems, which are related to hydrogen in water, proteins, nucleic acids and lipids. Neutron scattering gives an answer to both structural and dynamical aspects of the system in question. With deuterium-labelled samples unambiguous information about molecular structure and motion becomes accessible. The architecture of viruses, cell membranes and gene-expressing molecules has become a lot clearer with neutron scattering. (author)

Neutron scattering studies of the heavy Fermion superconductors
Goldman, A.I.
Recent neutron scattering measurements of the heavy Fermion superconductors are described. These materials offer an exciting opportunity for neutron scattering since the f electrons, which are probed directly in magnetic scattering measurements, seem to be the same electrons which form the superconducting state below T_c. In addition, studies of the magnetic fluctuations in these and other heavy Fermion systems by inelastic magnetic neutron scattering can provide information about the nature of the low-temperature Fermi liquid character of these novel compounds.

Debye-Waller Factor in Neutron Scattering by Ferromagnetic Metals
Paradezhenko, G. V.; Melnikov, N. B.; Reser, B. I.
We obtain an expression for the neutron scattering cross section in the case of an arbitrary interaction of the neutron with the crystal. We give a concise, simple derivation of the Debye-Waller factor as a function of the scattering vector and the temperature. For ferromagnetic metals above the Curie temperature, we estimate the Debye-Waller factor in the range of scattering vectors characteristic of polarized magnetic neutron scattering experiments. In the example of iron, we compare the results of harmonic and anharmonic approximations.

Optimising polarised neutron scattering measurements: XYZ and polarimetry analysis
Cussen, L.D.; Goossens, D.J.
The analytic optimisation of neutron scattering measurements made using XYZ polarisation analysis and neutron polarimetry techniques is discussed. Expressions for the 'quality factor' and the optimum division of counting time for the XYZ technique are presented. For neutron polarimetry the optimisation is identified as analogous to that for measuring the flipping ratio, and reference is made to the results already in the literature.

Slow neutron scattering by water molecules
Stancic, V [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)]
In this work some new, preliminary formulae for the calculation of slow neutron scattering cross sections by heavy and light water molecules have been derived. The idea was to find, starting from the sum which appears in the well-known Nelkin model, other cross sections in a simpler analytical form, so that further approximations become possible. In order to sum the series we start from the Euler-Maclaurin formula. Some new summation formulae have been derived and stated in two theorems. Extensive calculations, especially during the evaluation of residues, have been performed on the CDC 3600 computer. Validation of the derived formulae was done by comparison with the BNL-325 results. Good agreement is shown. (author)
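The Debye-Waller entry above concerns the factor exp(-2W) that attenuates scattered intensity, where for an isotropic harmonic crystal 2W = Q^2 <u^2>/3, with <u^2> the mean-square atomic displacement. A minimal sketch, with an invented <u^2> used purely for illustration:

import numpy as np

def debye_waller_factor(q, msd):
    """exp(-2W) with 2W = Q^2 * <u^2> / 3 (isotropic harmonic approximation).
    q in inverse angstroms, msd = <u^2> in square angstroms; both values illustrative."""
    return np.exp(-(q ** 2) * msd / 3.0)

q = np.linspace(0.0, 10.0, 101)
dwf = debye_waller_factor(q, msd=0.05)   # hypothetical mean-square displacement

In the anharmonic, high-temperature regime discussed in that entry, <u^2> itself acquires a more complicated temperature dependence, which is the point of the comparison made there.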
Lattice Waves, Spin Waves, and Neutron Scattering
Brockhouse, Bertram N.
Use of neutron inelastic scattering to study the forces between atoms in solids is treated. One-phonon processes and lattice vibrations are discussed, and experiments that verified the existence of the quantum of lattice vibrations, the phonon, are reviewed. Dispersion curves, phonon frequencies and absorption, and models for dispersion calculations are discussed. Experiments on the crystal dynamics of metals are examined. Dispersion curves are presented and analyzed; the theory of lattice dynamics is considered; and effects of Fermi surfaces on dispersion curves, electron-phonon interactions, the influence of electronic structure on lattice vibrations, and phonon lifetimes are explored. The dispersion relation of spin waves in crystals, and experiments in which dispersion curves for spin waves in a Co-Fe alloy and magnons in magnetite were obtained and the reality of the magnon was demonstrated, are discussed. (D.C.W)

Neutron scattering applied to environmental waste containment
Elcombe, M.M.; Studer, A.J.; Waring, C.L.
Full text: A major environmental problem in Australia occurs at mine sites, where rock dumps and tailings dams are still causing problems many years after the mines have ceased operation. ANSTO has developed a method of producing a neutral barrier in situ, which reduces water flow through the waste material. This in turn prevents water carrying waste products out into the wider environment. Both the loose-grained sand substrate and the neutral barrier produced are crystalline and therefore amenable to diffraction techniques. In recent laboratory experiments neutron scattering has been used to confirm the presence of the barrier and measure the amount of calcite forming the barrier, at centimetre depths below the surface. The results of these measurements will be presented.

Inelastic Neutron Scattering Study of Mn12-Acetate
Zhong, Y.; Sarachik, M.P.; Friedman, J.R.; Robinson, R.A.; Kelley, T.M.; Nakotte, H.; Christianson, A.C.; Trouw, F.; Aubin, S.M.J.; Hendrickson, D.N.
The authors report zero-field inelastic neutron scattering experiments on a 14-gram deuterated sample of Mn12-acetate consisting of a large number of identical spin-10 magnetic clusters. Their resolution enables them to see a series of peaks corresponding to transitions between the anisotropy levels within the spin-10 manifold. A fit to the spin Hamiltonian H = -D S_z^2 + μ_B B·g·S - B S_z^4 + C(S_+^4 + S_-^4) yields an anisotropy constant D = (0.54 ± 0.02) K and a fourth-order diagonal anisotropy coefficient B = (1.2 ± 0.1) × 10^-3 K. Unlike EPR measurements, their experiments do not require a magnetic field and yield parameters that do not require knowledge of the g-value.

Neutron Compton scattering from selectively deuterated acetanilide
Wanderlingh, U. N.; Fielding, A. L.; Middendorf, H. D.
With the aim of developing the application of neutron Compton scattering (NCS) to molecular systems of biophysical interest, we are using the Compton spectrometer EVS at ISIS to characterize the momentum distribution of protons in peptide groups. In this contribution we present NCS measurements of the recoil peak (Compton profile) due to the amide proton in otherwise fully deuterated acetanilide (ACN), a widely studied model system for H-bonding and energy transfer in biomolecules. We obtain values for the average width of the potential well of the amide proton and its mean kinetic energy. Deviations from the Gaussian form of the Compton profile, analyzed on the basis of an expansion due to Sears, provide data relating to the Laplacian of the proton potential.
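Using the parameters quoted in the Mn12-acetate entry above, the zero-field anisotropy levels are approximately E(m) = -D m^2 - B m^4 (neglecting the small transverse C term), so the energies of the neutron-observed transitions between adjacent |m| levels of the S = 10 manifold follow directly. A quick sketch (conversion factor k_B ≈ 0.0862 meV/K):

K_B_MEV_PER_K = 0.08617   # Boltzmann constant in meV per kelvin

D_K = 0.54     # anisotropy constant D from the entry above, in kelvin
B_K = 1.2e-3   # fourth-order diagonal coefficient B, in kelvin

def level_energy_k(m):
    """Zero-field level energy E(m) = -D*m^2 - B*m^4, transverse C term neglected."""
    return -D_K * m ** 2 - B_K * m ** 4

# Transition energies m -> m-1 near the top of the S = 10 manifold, in meV.
for m in (10, 9, 8):
    delta_k = level_energy_k(m - 1) - level_energy_k(m)
    print(f"{m} -> {m - 1}: {delta_k * K_B_MEV_PER_K:.2f} meV")
# With the published values above, the 10 -> 9 transition comes out near 1.24 meV.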
Neutron detection efficiency determinations for the TUNL neutron-neutron and neutron-proton scattering-length measurements
Trotter, D.E. Gonzalez; Meneses, F. Salinas; Tornow, W.; Crowell, A.S.; Howell, C.R.; Schmidt, D.; Walter, R.L.
The methods employed and the results obtained from measurements and calculations of the detection efficiency for the neutron detectors used at Triangle Universities Nuclear Laboratory (TUNL) in the simultaneous determination of the 1S0 neutron-neutron and neutron-proton scattering lengths a_nn and a_np, respectively, are described. Typical values for the detector efficiency were 0.3. Very good agreement between the different experimental methods and between data and calculation has been obtained in the neutron energy range below E_n = 13 MeV.

Neutron metrology for SBSS
Morris, C.L.; Anaya, J.M.; Armijo, V.
This is the final report of a two-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The goal of this work is to develop new detector technologies for Science-Based Stockpile Stewardship (SBSS) at the Los Alamos Neutron Scattering Center (LANSCE) using existing expertise and infrastructure from the nuclear and particle physics programs at LANL.

Solid State Power Amplifier for 805 MegaHertz at the Los Alamos Neutron Science Center
Davis, J.L.; Lyles, J.T.M.
Particle accelerators for protons, electrons, and other ion species often use high-power vacuum tubes for RF amplification, due to the high RF power requirements to accelerate these particles with high beam currents. The final power amplifier stages driving large accelerators cannot yet be converted to solid-state devices with present technology. In some instances, radiation levels preclude the use of transistors near beamlines. Work is being done worldwide to replace the RF power stages under about ten kilowatts CW with transistor amplifiers, due to the lower maintenance costs and the obsolescence of power tubes in these ranges. This is especially practical where the stages drive a fifty-ohm impedance and are not located in high radiation zones. The authors are doing this at the Los Alamos Neutron Science Center (LANSCE) proton linear accelerator (linac) in New Mexico.
They replaced a physically large air-cooled UHF power amplifier using a tetrode electron tube with a compact water-cooled unit based on modular amplifier pallets developed at LANSCE. Each module uses eight push-pull bipolar power transistor pairs operated in class AB. Four pallets can easily provide up to 2,800 watts of continuous RF at 805 MHz. A radial splitter and combiner parallels the modules. This amplifier has proven to be completely reliable after over 10,000 hours of operation without failure. A second unit was constructed and installed for redundancy, and the old tetrode system was removed in 1998. The compact packaging for cooling, DC power, impedance matching, RF interconnection, and power combining met the electrical and mechanical requirements. CRT display of individual collector currents and RF levels is made possible with built-in samplers and a VXI data acquisition unit.

Neutron scattering for the analysis of biological structures. Brookhaven symposia in biology. Number 27
Schoenborn, B P (ed.)
Sessions were included on neutron scattering and biological structure analysis, protein crystallography, neutron scattering from oriented systems, solution scattering, preparation of deuterated specimens, inelastic scattering, data analysis, experimental techniques, and instrumentation. Separate entries were made for the individual papers.

Introduction to the theory of thermal neutron scattering
Squires, G L
Since the advent of the nuclear reactor, thermal neutron scattering has proved a valuable tool for studying many properties of solids and liquids, and research workers are active in the field at reactor centres and universities throughout the world. This classic text provides the basic quantum theory of thermal neutron scattering and applies the concepts to scattering by crystals, liquids and magnetic systems. Other topics discussed are the relation of the scattering to correlation functions in the scattering system, the dynamical theory of scattering and polarisation analysis. No previous knowledge of the theory of thermal neutron scattering is assumed, but basic knowledge of quantum mechanics and solid state physics is required. The book is intended for experimenters rather than theoreticians, and the discussion is kept as informal as possible. A number of examples, with worked solutions, are included as an aid to the understanding of the text.

Neutron scattering investigations of frustrated magnets
Fennell, Tom
This thesis describes the experimental investigation of frustrated magnetic systems based on the pyrochlore lattice of corner-sharing tetrahedra. Ho2Ti2O7 and Dy2Ti2O7 are examples of spin ices, in which the manifold of disordered magnetic ground states maps onto that of the proton positions in ice. Using single crystal neutron scattering to measure Bragg and diffuse scattering, the effect of applying magnetic fields along different directions in the crystal was investigated. Different schemes of degeneracy removal were observed for different directions. Long and short range order, and the coexistence of both, could be observed by this technique. The field and temperature dependence of magnetic ordering was studied in Ho2Ti2O7 and Dy2Ti2O7. Ho2Ti2O7 has been more extensively investigated. The field was applied on [00l], [hh0], [hhh] and [hh2h].
Dy2Ti2O7 was studied with the field applied on [00l] and [hh0], but more detailed information about the evolution of the scattering pattern across a large area of reciprocal space was obtained. With the field applied on [00l] both materials showed complete degeneracy removal. A long range ordered structure was formed. Any magnetic diffuse scattering vanished and was entirely replaced by strong magnetic Bragg scattering. At T = 0.05 K both materials show unusual magnetization curves, with a prominent step and hysteresis. This was attributed to the extremely slow dynamics of spin ice materials at this temperature. Both materials were studied in greatest detail with the field applied on [hh0]. The coexistence of long and short range order was observed when the field was raised at T = 0.05 K. The application of a field in this direction separated the spin system into two populations. One could be ordered by the field, and one remained disordered. However, via spin-spin interactions, the field restricted the degeneracy of the disordered spin population. The neutron scattering pattern of Dy2Ti2O7 shows that the spin system was separated ...

Multiple small-angle neutron scattering studies of anisotropic materials
Allen, A J; Long, G G; Ilavsky, J
Building on previous work that considered spherical scatterers and randomly oriented spheroidal scatterers, we describe a multiple small-angle neutron scattering (MSANS) analysis for nonrandomly oriented spheroids. We illustrate this with studies of the multi-component void morphologies found in plasma-spray thermal barrier coatings. (orig.)

Dynamics of liquid N2 studied by neutron inelastic scattering
Pedersen, Karen Schou; Carneiro, Kim; Hansen, Flemming Yssing
Neutron inelastic-scattering data from liquid N2 at wave-vector transfers κ between 0.18 and 2.1 Å^-1 and temperatures ranging from T = 65-77 K are presented. The data are corrected for the contribution from multiple scattering and incoherent scattering. The resulting dynamic structure factor S(κ,ω) ...

DISCUS, Neutron Single to Double Scattering Ratio in Inelastic Scattering Experiment by Monte-Carlo
Johnson, M.W.
1 - Description of problem or function: DISCUS calculates the ratio of once-scattered to twice-scattered neutrons detected in an inelastic neutron scattering experiment. DISCUS also calculates the flux of once-scattered neutrons that would have been observed if there were no absorption in the sample and if, once scattered, the neutron would emerge without further re-scattering or absorption. Three types of sample geometry are used: an infinite flat plate, a finite flat plate or a finite-length cylinder. (The infinite flat plate is included for comparison with other multiple scattering programs.) The program may be used for any sample for which the scattering law is of the form S(|Q|, omega). 2 - Method of solution: Monte Carlo with importance sampling is used. Neutrons are 'forced' both into useful angular trajectories and into useful energy bins. Biasing of the collision point according to the point of entry of the neutron into the sample is also utilised. The first- and second-order scattered neutron fluxes are calculated in independent histories. For twice-scattered neutron histories a square distribution in Q-omega space is used to sample the neutron coming from the first scattering event, whilst biasing is used for the second scattering event. (A square distribution is used so as to obtain reasonable inelastic-inelastic statistics.)
3 - Restrictions on the complexity of the problem: Unlimited number of detectors. Max. size of the (Q, omega) matrix is 39*149. Max. number of points in momentum space for the scattering cross section is 199.

Neutron scattering and the search for mechanisms of superconductivity
Aeppli, G.; Bishop, D.J.; Broholm, C.
Neutron scattering is a direct probe of mass and magnetization density in solids. We start with a brief review of experimental strategies for determining the mechanisms of superconductivity and how neutron scattering contributed towards our understanding of conventional superconductors. The remainder of the article gives examples of neutron results with impact on the search for the mechanism of superconductivity in more recently discovered, 'exotic', materials, namely the heavy fermion compounds and the layered cuprates.

Elements of slow-neutron scattering: basics, techniques, and applications
Carpenter, J M
Providing a comprehensive and up-to-date introduction to the theory and applications of slow-neutron scattering, this detailed book equips readers with the fundamental principles of neutron studies, including the background and evolving development of neutron sources, facility design, neutron scattering instrumentation and techniques, and applications in materials phenomena. Drawing on the authors' extensive experience in this field, this text explores the implications of slow-neutron research in greater depth and breadth than ever before in an accessible yet rigorous manner suitable for both students and researchers in the fields of physics, biology, and materials engineering. Through pedagogical examples and in-depth discussion, readers will be able to grasp the full scope of the field of neutron scattering, from theoretical background through to practical, scientific applications.

Dynamically Polarized Sample for Neutron Scattering at the Spallation Neutron Source
Pierce, Josh; Zhao, J. K.; Crabb, Don
The recently constructed Spallation Neutron Source at the Oak Ridge National Laboratory is quickly becoming the world's leader in neutron scattering sciences. In addition to the world's most intense pulsed neutron source, we are continuously constructing state-of-the-art neutron scattering instruments as well as sample environments to address today's and tomorrow's challenges in materials research. The Dynamically Polarized Sample project at the SNS is aimed at taking maximum advantage of polarized neutron scattering from polarized samples, especially biological samples that are abundant in hydrogen. Polarized neutron scattering will allow us to drastically increase the signal-to-noise ratio in experiments such as neutron protein crystallography. The DPS project is near completion and all key components have been tested. Here we report the current status of the project.
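The DISCUS entry above estimates the ratio of once- to twice-scattered neutrons by Monte Carlo with importance sampling. A deliberately oversimplified toy version of the same bookkeeping (isotropic, energy-independent scattering in an infinite slab, no absorption and no variance reduction) is sketched below; it illustrates the idea only and is not a substitute for DISCUS:

import math
import random

def single_to_double_ratio(optical_thickness=0.5, n_histories=200_000, seed=1):
    """Count histories that scatter exactly once versus exactly twice inside an
    infinite slab of the given scattering optical thickness (mean free path = 1)."""
    rng = random.Random(seed)
    counts = {1: 0, 2: 0}
    for _ in range(n_histories):
        depth, mu, n_scat = 0.0, 1.0, 0
        while True:
            depth += mu * -math.log(rng.random())   # free flight to the next collision
            if depth < 0.0 or depth > optical_thickness:
                break                               # the neutron has left the slab
            n_scat += 1
            mu = rng.uniform(-1.0, 1.0)             # isotropic new direction cosine
            if abs(mu) < 1e-9:
                mu = 1e-9                           # avoid a stalled history
        if n_scat in counts:
            counts[n_scat] += 1
    return counts[1] / max(counts[2], 1)

print(single_to_double_ratio())   # single-to-double scattering ratio for this toy slab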
Neutron total scattering cross sections of elemental antimony
Smith, A.B.; Guenther, P.T.; Whalen, J.F.
Neutron total cross sections are measured from 0.8 to 4.5 MeV with broad resolutions. Differential neutron elastic-scattering cross sections are measured from 1.5 to 4.0 MeV at intervals of 50 to 200 keV and at scattering angles distributed between 20 and 160 degrees. Lumped-level neutron inelastic-scattering cross sections are measured over the same angular and energy range. The experimental results are discussed in terms of an optical-statistical model and are compared with the respective values given in ENDF/B-V.

On the theory of ultracold neutron scattering by Davydov solitons
Brizhik, L.S.
Elastic coherent scattering of ultracold neutrons by Davydov solitons in one-dimensional periodic molecular chains, without taking account of thermal oscillations of the chain atoms, is studied. It is shown that the expression for the differential cross section of elastic neutron scattering by a Davydov soliton breaks down into two components. One of them corresponds to scattering by a resting soliton; the other is proportional to the soliton velocity and has a sharp maximum in the direction of mirror reflection of neutrons from the chain.

Local-field refinement of neutron scattering lengths
Sears, V F
We examine the way in which local field effects in the neutron refractive index affect the values of coherent scattering lengths determined by various kinds of neutron optical measurements. We find that under typical experimental conditions these effects are negligible for interferometry measurements but that they are significant for gravity refractometry measurements, producing changes in the effective scattering length of as much as two or three standard deviations in some cases. Refined values of the scattering length are obtained for the thirteen elements for which data are presently available. The special role of local field effects in neutron transmission is also discussed. (orig.)

New statistical model of inelastic fast neutron scattering
Stancicj, V.
A new statistical model for treating fast neutron inelastic scattering has been proposed by using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel including the angular momenta coupling. The obtained values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with the experiments. A main advantage of the derived expressions is their simplicity for practical calculations.
Inelastic magnetic scattering of polarized neutrons by a superconducting ring
Agafonov, A. I.
The inelastic scattering of cold neutrons by a ring leads to quantum jumps of the superconducting current which correspond to a decrease in the fluxoid quantum number by one or several units, while the change in the ring energy is transferred to the kinetic energy of the scattered neutron. The scattering cross sections of transversely polarized neutrons have been calculated for a thin type-II superconductor ring whose thickness is smaller than the field penetration depth but larger than the electron mean free path.

Application of Van Hove theory to fast neutron inelastic scattering
Stanicicj, V.
The Van Hove general theory of the double differential scattering cross section has been used to derive particular expressions for the inelastic fast neutron scattering kernel and scattering cross section. Since the energies of the incoming neutrons considered are less than 10 MeV, the Fermi gas model of nucleons can be used. In this case it was easy to derive an analytical expression for the time-dependent correlation function of the nucleus. Further, by using an impulse approximation and a short-collision-time approach, it was possible to derive analytical expressions for the scattering kernel and scattering cross section for fast neutron inelastic scattering. The obtained expressions have been applied to the Fe nucleus and show surprisingly good agreement with the experiments. The main advantage of this theory is its simplicity for practical calculations and for theoretical investigations of nuclear processes.

LANSCE '90: the Manuel Lujan Jr. Neutron Scattering Center
Pynn, Roger
This paper describes progress that has been made at the Manuel Lujan Jr. Neutron Scattering Center (LANSCE) during the past two years. Presently, LANSCE provides a higher peak neutron flux than any other pulsed spallation neutron source. There are seven spectrometers for neutron scattering experiments that are operated for a national user program sponsored by the U.S. Department of Energy. Two more spectrometers are under construction. Plans have been made to raise the number of beam holes available for instrumentation and to improve the efficiency of the target/moderator system. (author)

Early history of neutron scattering at Oak Ridge
Wilkinson, M.K.
Most of the early development of neutron scattering techniques utilizing reactor neutrons occurred at the Oak Ridge National Laboratory during the years immediately following World War II. C.G. Shull, E.O.
Wollan, and their associates systematically established neutron diffraction as a quantitative research tool and then applied this technique to important problems in nuclear physics, chemical crystallography, and magnetism. This article briefly summarizes the very important research at ORNL during this period, which laid the foundation for the establishment of neutron scattering programs throughout the world. 47 refs., 10 figs

Detection of gastrointestinal cancer by elastic scattering and absorption spectroscopies with the Los Alamos Optical Biopsy System
Mourant, J.R.; Boyer, J.; Johnson, T.M.; Lacey, J.; Bigio, I.J. [Los Alamos National Lab., NM (United States)]; Bohorfoush, A. [Wisconsin Medical School, Milwaukee, WI (United States), Dept. of Gastroenterology]; Mellow, M. [Univ. of Oklahoma Medical School, Oklahoma City, OK (United States), Dept. of Gastroenterology]
The Los Alamos National Laboratory has continued the development of the Optical Biopsy System (OBS) for noninvasive, real-time in situ diagnosis of tissue pathologies. In proceedings of earlier SPIE conferences we reported on clinical measurements in the bladder, and we report here on recent results of clinical tests in the gastrointestinal tract. With the OBS, tissue pathologies are detected/diagnosed using spectral measurements of the elastic optical transport properties (scattering and absorption) of the tissue over a wide range of wavelengths. The use of elastic scattering as the key to optical tissue diagnostics in the OBS is based on the fact that many tissue pathologies, including a majority of cancer forms, exhibit significant architectural changes at the cellular and sub-cellular level. Since the cellular components that cause elastic scattering have dimensions typically on the order of visible to near-IR wavelengths, the elastic (Mie) scattering properties will be wavelength dependent. Thus, morphology and size changes can be expected to cause significant changes in an optical signature that is derived from the wavelength dependence of elastic scattering. Additionally, the optical geometry of the OBS beneficially enhances its sensitivity for measuring absorption bands. The OBS employs a small fiber-optic probe that is amenable to use with any endoscope or catheter, or to direct surface examination, as well as interstitial needle insertion. Data acquisition/display time is <1 second.

Neutron scattering study of yttrium iron garnet
Shamoto, Shin-ichi; Ito, Takashi U.; Onishi, Hiroaki; Yamauchi, Hiroki; Inamura, Yasuhiro; Matsuura, Masato; Akatsu, Mitsuhiro; Kodama, Katsuaki; Nakao, Akiko; Moyoshi, Taketo; Munakata, Koji; Ohhara, Takashi; Nakamura, Mitsutaka; Ohira-Kawamura, Seiko; Nemoto, Yuichi; Shibata, Kaoru
The nuclear and magnetic structure and the full magnon dispersions of yttrium iron garnet Y3Fe5O12 have been studied using neutron scattering. The refined nuclear structure is distorted to the trigonal space group R-3. The highest-energy dispersion extends up to 86 meV. The observed dispersions are reproduced by a simple model with three nearest-neighbor exchange integrals between the 16a (octahedral) and 24d (tetrahedral) sites, J_aa, J_ad, and J_dd, which are estimated to be 0.00 ± 0.05, -2.90 ± 0.07, and -0.35 ± 0.08 meV, respectively. The lowest-energy dispersion below 14 meV exhibits a quadratic dispersion, as expected for ferromagnetic magnons. The imaginary part of the q-integrated dynamical spin susceptibility χ″(E) exhibits a square-root energy dependence at low energies.
The magnon density of states is estimated from χ″(E) obtained on an absolute scale. The value is consistent with the single chirality mode for the magnon branch expected theoretically.

Inelastic neutron scattering of amorphous ice Fukazawa, Hiroshi; Ikeda, Susumu; Suzuki, Yoshiharu We measured the inelastic neutron scattering from high-density amorphous (HDA) and low-density amorphous (LDA) ice produced by pressurizing and releasing the pressure. We found a clear difference between the intermolecular vibrations in HDA and those in LDA ice: LDA ice has peaks at 22 and 33 meV, which are also seen in the spectrum of lattice vibrations in ice crystal, but the spectrum of HDA ice does not have these peaks. The excitation energy of librational vibrations in HDA ice is 10 meV lower than that in LDA ice. These results imply that HDA ice includes 2- and 5-coordinated hydrogen bonds that are created by breakage of hydrogen bonds and migration of water molecules into the interstitial site, while LDA ice contains mainly 4-coordinated hydrogen bonds and large cavities. Furthermore, we report the dynamical structure factor in the amorphous ice and show that LDA ice is more closely related to the ice crystal structure than to HDA ice. (author)

Small-angle neutron scattering studies of sodium butyl benzene (Na-NBBS) in aqueous solutions is investigated by small-angle neutron scattering (SANS). Nearly ellipsoidal aggregates of Na-NBBS at concentrations well above its minimum hydrotrope concentration were detected by SANS. The hydrotrope ...

Spectrum of ferromagnetic transition metal magnetic excitations and neutron scattering Kuzemskij, A.L. Quantum statistical models of ferromagnetic transition metals, as well as methods of their solution, are reviewed. The correspondence between the results of solving these models and the data on thermal neutron scattering in ferromagnets is discussed.

Life at extreme conditions: Neutron scattering studies of biological Extremophile bacteria; molecular adaptation; halophile; water dynamics; protein dynamics. ... Results of neutron scattering measurements on the dynamics of proteins ... The experiments were performed on a halophilic protein, and membrane ...

Analysis of inelastic neutron scattering results on model compounds ... Vibrational spectroscopy; nitrogenous bases; inelastic neutron scattering. PACS No. ... obtain good quality, high resolution results in this region. Here the ... knowledge of the character of each molecular transition as well as the calculated.

3rd AINSE neutron scattering conference, Lucas Heights - AINSE Theatre Abstracts of papers, the conference program and general information are included in the conference handbook. The program is divided into the following sessions: hydrogenous and biological materials, industrial applications, phase transitions, magnetism, small angle neutron scattering and new developments.

Applications of neutron scattering in molecular biological research Nierhaus, K.H. The study of the molecular structure of biological materials by neutron scattering is described. As an example, the results of the study of the components of a ribosome of Escherichia coli are presented. (HSI) [de

An empirical formula for scattered neutron components in fast neutron radiography Dou Haifeng; Tang Bin Scattered neutrons are one of the key factors that may affect the images in fast neutron radiography. In this paper, a mathematical model for scattered neutrons is developed for a cylindrical sample, and an empirical formula for scattered neutrons is obtained.
According to the results given by Monte Carlo methods, the parameters in the empirical formula are obtained by curve fitting, which confirms the validity of the empirical formula. The curve-fitted parameters of common materials such as 6LiD are given. (authors)

Inelastic scattering of 275 keV neutrons by silver Litvinsky, L.L.; Zhigalov, Ya.A.; Krivenko, V.G.; Purtov, O.A.; Sabbagh, S. Neutron total, elastic and inelastic scattering cross-sections of Ag at the neutron energy En = 275 keV were measured by using the filtered neutron beam of the WWR-M reactor in Kiev. The d-neutron strength function Sn2 of Ag was determined from the analysis of all available data in the En ≤ keV energy region on neutron inelastic scattering cross-sections with excitation of the first isomeric levels Iπm = 7/2+, Em ∼ 90 keV of 107,109Ag: Sn2 = (1.03 ± 0.19) · 10^-4. (author). 10 refs, 3 figs

Some thoughts on the future of neutron scattering Egelstaff, P.A. Attendees of ICANS meetings believe that neutron scattering has a bright future, but critics of neutron scattering argue that its practitioners are an aging group, that they use a few, very expensive neutron sources and that the interesting science may be done by other techniques. The ICANS committee asked me to comment on the future of neutron scattering in the light of this contrast. Some comments will be made on the age distribution, on the proper distribution of sources, on the convenient availability of neutron instruments and methods, on the expansion into new areas of science, on applications to industry and on the probable impact of synchrotron sources. It is hoped that these comments will lead to an outward-looking discussion on the future. (author)

Set of thermal neutron-scattering experiments for the Weapons Neutron Research Facility Brugger, R.M. Six classes of experiments form the basis of a program of thermal neutron scattering at the Weapons Neutron Research (WNR) Facility. Three classes are to determine the average microscopic positions of atoms in materials and three are to determine the microscopic vibrations of these atoms. The first three classes concern (a) powder sample neutron diffraction, (b) small angle scattering, and (c) single crystal Laue diffraction. The second three concern (d) small-kappa inelastic scattering, (e) scattering surface phonon measurements, and (f) line widths. An instrument to couple with the WNR pulsed source is briefly outlined for each experiment.

The dynamics of physisorbed layers studied by neutron scattering Nielsen, M.; McTague, J.P. We discuss the neutron scattering technique applied to the study of adsorbed thin films. Despite the fact that neutrons are scattered very weakly by surfaces, recent studies have shown that both structural and dynamical information can be obtained even for submonolayer coverages. Results will be shown for films of Ar, D2, H2, and O2 adsorbed on (001) surfaces of graphite and for H2 molecules adsorbed on activated alumina. (orig.) [de

Neutron scattering research at JAERI reactors - past, present and future - Funahashi, Satoru; Morii, Yukio; Minakawa, Nobuaki It was in 1961 that the first neutron scattering experiment was performed in Japan at JRR-2. The start of JRR-3 in 1964 accelerated the neutron scattering activities in Japan. The research in this field in Japan grew by using these two research reactors. Among them, JRR-2 has played an important role because its neutron flux was about seven times higher than that of the old JRR-3.
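A minimal illustration of the curve-fitting step described in the fast-neutron-radiography entry above. The functional form, parameter values and data below are hypothetical placeholders (the entry does not give its actual empirical formula); the sketch only shows how the parameters of an assumed scattered-neutron model might be fitted to Monte Carlo results with scipy:

    # Hypothetical sketch: fit an assumed empirical model for the scattered-neutron
    # fraction in fast neutron radiography to Monte Carlo results.
    # The model form ratio(t) = a * t * exp(-b * t), with t the sample thickness,
    # is an assumption for illustration, not the formula from the entry above.
    import numpy as np
    from scipy.optimize import curve_fit

    def scatter_fraction(t_cm, a, b):
        # assumed build-up-like behaviour: grows with thickness, then attenuates
        return a * t_cm * np.exp(-b * t_cm)

    # placeholder "Monte Carlo" results: thickness (cm) vs scattered/direct ratio
    t_mc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
    ratio_mc = np.array([0.04, 0.07, 0.11, 0.12, 0.11, 0.08])

    popt, pcov = curve_fit(scatter_fraction, t_mc, ratio_mc, p0=[0.1, 0.2])
    perr = np.sqrt(np.diag(pcov))
    print("fitted a, b:", popt, "uncertainties:", perr)

How well the fitted curve reproduces the Monte Carlo points is what the entry refers to as confirming the validity of the empirical formula.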
The completion of the new JRR-3M in 1990 marked an epoch in the neutron scattering activities in Japan. The long-awaited JRR-3M came up to the expectations of the scientists of Japan. It is a realization of the ideal reactor, with tangential beam holes, a cold source and neutron guides in a large guide hall. The flux at the neutron scattering instruments is about five times higher than that of JRR-2. Utilization of JRR-3M has just started. Twelve neutron scattering machines are running there. The number will increase to close to twenty in a couple of years. (author)

Small Angle Neutron Scattering instrument at Malaysian TRIGA reactor Mohd, Shukri; Kassim, Razali; Mahmood, Zal Uyun [Malaysian Inst. for Nuclear Technology Research (MINT), Bangi, Kajang (Malaysia); Radiman, Shahidan The TRIGA MARK II research reactor at the Malaysian Institute for Nuclear Research (MINT) was commissioned in July 1982. Since then various works have been performed to utilise the neutrons produced from this steady-state reactor. One of these projects involves Small Angle Neutron Scattering (SANS). (author)

A high pressure sample facility for neutron scattering Carlile, C.J.; Glossop, B.H. Commissioning tests involving deformation studies and tests to destruction, as well as neutron diffraction measurements of a standard sample, have been carried out on the SERC high pressure sample facility for neutron scattering studies. A detailed description of the pressurising equipment is given. (author)

Spectrometer for neutron inelastic scattering investigations of microsamples Balagurov, A.M.; Kozlenko, D.P.; Platonov, S.L.; Savenko, B.N.; Glazkov, V.P.; Krasnikov, Yu.M.; Naumov, I.V.; Pukhov, A.V.; Somenkov, V.A.; Syrykh, G.F. A new neutron spectrometer for the investigation of inelastic neutron scattering on polycrystalline microsamples under high pressure in sapphire and diamond anvil cells is described. The spectrometer is operating at the IBR-2 pulsed reactor at JINR. Parameters and methodical peculiarities of the spectrometer and examples of experimental studies are given. (author)

Applications of neutron scattering to the study of magnetic materials Koehler, W.C. The types of interactions that neutrons undergo with condensed matter are reviewed, and those properties of neutrons that make them an ideal probe for the study of magnetism on a microscopic scale are discussed. Following a very brief survey of experimental methods, a few illustrative examples of specific investigations are described in sufficient detail to illustrate the power of the techniques. Views as to the future directions that may be taken by neutron scattering are presented.

Neutron scattering: history, present state and perspectives Belushkin, A.V. The paper recalls some milestones in the development of condensed matter research with neutrons. The present status of investigations in this field is briefly outlined. An analysis is given of the situation and future prospects of neutron source development in Russia and in the world. The next-generation neutron source projects in Japan, the USA and Europe are reviewed.

Collective Excitations in Liquid Hydrogen Observed by Coherent Neutron Scattering da Costa Carneiro, Kim; Nielsen, M.; McTague, J. P. Coherent scattering of neutrons by liquid parahydrogen shows the existence of well-defined collective excitations in this liquid. Qualitative similarity with the scattering from liquid helium is found. Furthermore, in the range of observed wave vectors, 0.7 Å^-1 ≤ κ ≤ 3.1 Å^-1, extending from the firs...
Inelastic neutron scattering for materials science and engineering The neutron is the ideal probe for studying the positions and motions of atoms in condensed matter. The main advantage of the neutron in inelastic scattering results from its heavy mass when compared to other particles which are used to probe materials, such as the photon (light, x-rays, or γ-rays) or the electron. The author discusses the application of neutron scattering to study a number of different materials-related problems, including hard magnets, shape-memory effects, and hydrogen distribution in metals.

A combined neutron scattering and simulation study on bioprotectant systems Affouard, F. [Laboratoire de Dynamique et Structure des Materiaux Moleculaires UMR 8024, Universite Lille I - 59655 Villeneuve d'Ascq cedex (France); Bordat, P. [Laboratoire de Dynamique et Structure des Materiaux Moleculaires UMR 8024, Universite Lille I - 59655 Villeneuve d'Ascq cedex (France); Descamps, M. [Laboratoire de Dynamique et Structure des Materiaux Moleculaires UMR 8024, Universite Lille I - 59655 Villeneuve d'Ascq cedex (France); Lerbret, A. [Laboratoire de Dynamique et Structure des Materiaux Moleculaires UMR 8024, Universite Lille I - 59655 Villeneuve d'Ascq cedex (France); Magazu, S. [Dipartimento di Fisica and INFM, Universita di Messina, P.O. Box 55, I-98166 Messina (Italy); Migliardo, F. [Laboratoire de Dynamique et Structure des Materiaux Moleculaires UMR 8024, Universite Lille I - 59655 Villeneuve d'Ascq cedex (France); Dipartimento di Fisica and INFM, Universita di Messina, P.O. Box 55, I-98166 Messina (Italy)], E-mail: [email protected]; Ramirez-Cuesta, A.J. [ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot (United Kingdom); Telling, M.F.T. [ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot (United Kingdom) The present work shows quasielastic neutron scattering, neutron spin echo and inelastic neutron scattering results on a class of bioprotectant systems, such as homologous disaccharide (i.e., trehalose and sucrose)/water solutions, as a function of temperature. The whole set of findings indicates a noticeable 'kosmotrope' character of the disaccharides, and in particular of trehalose, which is able to strongly modify both the structural and dynamical properties of water. This superior capability of trehalose can be linked to its higher bioprotective effectiveness with respect to the other disaccharides.

Superfluidity, Bose condensation and neutron scattering in liquid 4He Silver, R.N. The relation between superfluidity and Bose condensation in 4He provides lessons that may be valuable in understanding the strongly correlated electron system of high-Tc superconductivity. Direct observation of a Bose condensate in the superfluid by deep inelastic neutron scattering measurements has been attempted over many years. But the impulse approximation, which relates momentum distributions to neutron scattering structure functions, is broadened by final-state effects.
Nevertheless, the excellent quantitative agreement between ab initio quantum many-body theory and high-precision neutron experiments provides confidence in the connection between superfluidity and Bose condensation.

Introduction of sample environment equipment for neutron scattering experiments Shimojo, Yutaka; Ihata, Yoshiaki; Kaneko, Koji; Takeda, Masayasu Neutron scattering experiments are frequently performed under a variety of sample conditions, such as various temperatures, pressures, magnetic fields and stresses, and under combinations of these conditions, to fully utilize the superior properties of the neutron. To this aim, a range of sample environment equipment (refrigerators, furnaces, pressure cells, superconducting magnets) is installed at JRR-3 for use in experiments. In this document, all sample environment equipment available in both the JRR-3 reactor and guide halls is summarized. We hope this document will help neutron scattering users to perform effective and excellent experiments. (author)

The hydrogen anomaly problem in neutron Compton scattering Karlsson, Erik B. Neutron Compton scattering (also called 'deep inelastic scattering of neutrons', DINS) is a method used to study momentum distributions of light atoms in solids and liquids. It has been employed extensively since the start-up of intense pulsed neutron sources about 25 years ago. The information lies primarily in the width and shape of the Compton profile and not in the absolute intensity of the Compton peaks. It was therefore not immediately recognized that the relative intensities of Compton peaks arising from scattering on different isotopes did not always agree with values expected from standard neutron cross-section tables. The discrepancies were particularly large for scattering on protons, a phenomenon that became known as 'the hydrogen anomaly problem'. The present paper is a review of the discovery, experimental tests to prove or disprove the existence of the hydrogen anomaly, and discussions concerning its origin. It covers a twenty-year-long history of experimentation, theoretical treatments and discussions. The problem is of fundamental interest, since it involves quantum phenomena on the subfemtosecond time scale, which are not visible in conventional thermal neutron scattering but are important in Compton scattering, where neutrons have two orders of magnitude higher energy. Different H-containing systems show different cross-section deficiencies, and when the scattering processes are followed on the femtosecond time scale the cross-section losses disappear on different characteristic time scales for each H-environment. The last section of this review reproduces results from published papers based on quantum interference in scattering on identical particles (proton or deuteron pairs or clusters), which have given a quantitative theoretical explanation of both the H-cross-section reduction and its time dependence. Some new explanations are added and the concluding chapter summarizes the conditions for observing the specific quantum

Fast Neutron Elastic and Inelastic Scattering of Vanadium Holmqvist, B; Johansson, S G; Lodin, G; Wiedling, T Fast neutron scattering interactions with vanadium were studied using time-of-flight techniques at several energies in the interval 1.5 to 8.1 MeV.
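As textbook background to the deep-inelastic (neutron Compton) scattering discussed in the liquid-4He entry above, and not taken from that entry: up to convention-dependent prefactors, the measured double-differential cross section is governed by the dynamic structure factor S(Q,ω), which in the impulse approximation reduces to an integral over the atomic momentum distribution n(p); final-state effects broaden this limit, as the entry notes. In LaTeX,

    \frac{d^{2}\sigma}{d\Omega\,dE'} \;\propto\; \frac{k'}{k}\, S(\mathbf{Q},\omega),
    \qquad
    S_{\mathrm{IA}}(\mathbf{Q},\omega) \;=\; \int n(\mathbf{p})\,
      \delta\!\left(\omega - \frac{\hbar Q^{2}}{2M} - \frac{\mathbf{Q}\cdot\mathbf{p}}{M}\right) d^{3}p ,

where k and k' are the incident and scattered neutron wave vectors and M is the mass of the struck atom.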
The experimental differential elastic scattering cross sections have been fitted to optical model calculations and the inelastic scattering cross sections have been compared with Hauser-Feshbach calculations, corrected for the fluctuation of compound-nuclear level widths.

The world's first pelletized cold neutron moderator at a neutron scattering facility Ananiev, V.; Belyakov, A.; Bulavin, M.; Kulagin, E.; Kulikov, S.; Mukhin, K.; Petukhova, T.; Sirotin, A.; Shabalin, D.; Shabalin, E.; Shirokov, V.; Verhoglyadov, A., E-mail: [email protected] On July 10, 2012, cold neutrons were generated for the first time with the unique pelletized cold neutron moderator CM-202 at the IBR-2M reactor. This new moderator system uses small spherical beads of a solid mixture of aromatic hydrocarbons (benzene derivatives) as the moderating material. Aromatic hydrocarbons are known as the most radiation-resistant hydrogenous substances and moderate slow neutrons effectively. Since the new moderator was put into routine operation in September 2013, the IBR-2 research reactor of the Frank Laboratory of Neutron Physics has consolidated its position among the world's leading pulsed neutron sources for the investigation of matter with neutron scattering methods.

Quasielastic Neutron Scattering by Superionic Strontium Chloride Dickens, M. H.; Hutchings, M. T.; Kjems, Jørgen The scattering, from powder and single crystal samples, appears only above the superionic transition temperature, 1000 K. The integrated intensity is found to be strongly dependent on the direction and magnitude of the scattering vector, Q (which suggests the scattering is coherent) but does not ...

Small Angle Neutron Scattering From Iron. Vol. 2 Adib, M; Abdel-Kawy, A; Naguib, K; Habib, N; Kilany, M [Reactor and Neutron Physics Dept., Nuclear Research Centre, AEA, Cairo (Egypt); Wahba, M [Faculty of Engineering, Ain Shams University, Cairo (Egypt); Ashry, A [Faculty of Education, Ain Shams University, Cairo (Egypt) Total neutron cross-section measurements have been carried out for iron in both metallic and powder forms in the wavelength band 0.35 nm to 0.52 nm. The measurements were performed using the TOF spectrometer installed in front of one of the horizontal channels of the ET-RR-1 reactor. The observed behavior of the small-angle neutron scattering cross-section of iron powder was analyzed in terms of its particle diameter, incident neutron wavelength and beam divergence. It was found that for iron particles of diameter 25 μm the small-angle neutron scattering is only due to refraction of the neutron wave traversing the particles. A method was established to determine the particle size of iron powders within an accuracy of 8%, which is higher than that obtained by mesh analysis. 4 figs., 1 tab.

ANL-LASL workshop on advanced neutron detection systems Kitchens, T.A. A two-day workshop on advanced neutron detectors and associated electronics was held in Los Alamos on April 5-6, 1979, as a part of the Argonne National Laboratory-Los Alamos Scientific Laboratory coordination on neutron scattering instrumentation. This report contains an account of the information presented and conclusions drawn at the workshop.

New era of neutron scattering research on advanced materials Ikeda, Susumu Next-generation pulsed spallation neutron source projects are planned in the USA, Europe and Japan. They are one order of magnitude more powerful than the most powerful existing neutron source, ISIS in the UK.
They offer exciting prospects for the future and will open a new era of neutron scattering research on advanced materials. The Japanese project is named the 'Joint project' between JAERI and KEK on high-intensity proton accelerators. The details of the neutron science facility in the 'Joint project' and the sciences to be developed are summarized. (author)

2010 American Conference on Neutron Scattering (ACNS 2010) Billinge, Simon The ACNS provides a focal point for the national neutron user community to strengthen ties within this diverse group, while at the same time promoting neutron research among colleagues in related disciplines identified as "would-be" neutron users. The American Conference on Neutron Scattering thus serves a dual role as a national user meeting and a scientific meeting. As a venue for scientific exchange, the ACNS showcases recent results and provides forums for scientific discussion of neutron research in diverse fields such as hard and soft condensed matter, liquids, biology, magnetism, engineering materials, chemical spectroscopy, crystal structure, elementary excitations, fundamental physics and the development of neutron instrumentation, through a combination of invited talks, contributed talks and poster sessions. As a "super-user" meeting, the ACNS fulfills the main objectives of users' meetings previously held periodically at individual national neutron facilities, with the advantage of a larger and more diverse audience. To this end, each of the major national neutron facilities (NIST, LANSCE, HFIR and SNS) has an opportunity to exchange information and update users, and potential users, of their facility. This is also an appropriate forum for users to raise issues that relate to the facilities. For many of the national facilities, this super-user meeting should obviate the need for separate user meetings that tax the time, energy and budgets of facility staff and the users alike, at least in years when the ACNS is held. We rely upon strong participation from the national facilities. The NSSA intends that the American Conference on Neutron Scattering (ACNS) will occur approximately every two years, but not in years that coincide with the International or European Conferences on Neutron Scattering. The ACNS is to be held in association with one of the national neutron centers in a rotating sequence, with the host facility providing local

Data acquisition system for the neutron scattering instruments at the intense pulsed neutron source Crawford, R.K.; Daly, R.T.; Haumann, J.R.; Hitterman, R.L.; Morgan, C.B.; Ostrowski, G.E.; Worlton, T.G. The Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory is a major new user-oriented facility which is now coming on line for basic research in neutron scattering and neutron radiation damage. This paper describes the data-acquisition system which will handle data acquisition and instrument control for the time-of-flight neutron-scattering instruments at IPNS. This discussion covers the scientific and operational requirements for this system, and the system architecture that was chosen to satisfy these requirements. It also provides an overview of the current system implementation, including brief descriptions of the hardware and software which have been developed.

Experimental studies of the critical scattering of neutrons for large scattering vectors Ciszewski, R. The most recent results concerned with the critical scattering of neutrons are reviewed.
The emphasis is on the so-called thermal shift, that is, the shift of the main maximum in the intensity of critically scattered neutrons with temperature changes. Four theories of this phenomenon are described and their shortcomings are shown. It is concluded that the situation is at present complicated and needs further theoretical and experimental study. (S.B.)

Earth formation pulsed neutron porosity logging system utilizing epithermal neutron and inelastic scattering gamma ray detectors Smith, H.D. Jr.; Smith, M.P.; Schultz, W.E. An improved pulsed neutron porosity logging system is provided in the present invention. A logging tool provided with a 14 MeV pulsed neutron source, an epithermal neutron detector and an inelastic scattering gamma ray detector is moved through a borehole. The detection of inelastic gamma rays provides a measure of the fast neutron population in the vicinity of the detector. Repetitive bursts of neutrons irradiate the earth formation and, during the bursts, inelastic gamma rays representative of the fast neutron population are sampled. During the interval between bursts the epithermal neutron population is sampled, along with background gamma radiation due to lingering thermal neutrons. The fast and epithermal neutron population measurements are combined to provide a measurement of formation porosity.

Toward a new polyethylene scattering law determined using inelastic neutron scattering Lavelle, C.M.; Liu, C.-Y.; Stone, M.B. Monte Carlo neutron transport codes such as MCNP rely on accurate data for nuclear physics cross-sections to produce accurate results. At low energy, this takes the form of scattering laws based on the dynamic structure factor, S(Q,E). High density polyethylene (HDPE) is frequently employed as a neutron moderator at both high and low temperatures; however, the only cross-sections available are for ambient temperature (∼300 K), and the evaluation has not been updated in quite some time. In this paper we describe inelastic neutron scattering measurements on HDPE at 5 and 294 K which are used to improve the scattering law for HDPE. We review some of the past HDPE scattering laws, describe the experimental methods, and compare computations using these models to the measured S(Q,E). The total cross-section is compared to available data, and the treatment of the carbon secondary scatterer as a free gas is assessed. We also discuss the use of the measurement itself as a scattering law via the one-phonon approximation. We show that a scattering law computed using a more detailed model for the Generalized Density of States (GDOS) compares more favorably to this experiment, suggesting that inelastic neutron scattering can play an important role in both the development and validation of new scattering laws for Monte Carlo work. Highlights: ► Polyethylene at 5 K and 300 K is measured using inelastic neutron scattering (INS). ► Measurements were conducted at the Wide Angular-Range Chopper Spectrometer at SNS. ► Several models for polyethylene are compared to measurements. ► Improvements to existing models for the polyethylene scattering law are suggested. ► INS is shown to be a highly valuable tool for scattering law development.

A comparative study of the systems for neutronics calculations used in Los Alamos Scientific Laboratory (LASL) and Argonne National Laboratory (ANL) Amorim, E.S. do; D'Oliveira, A.B.; Oliveira, E.C. de.
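For context on the one-phonon approximation invoked in the polyethylene scattering-law entry above (a standard incoherent-approximation form with convention-dependent prefactors; not taken from the entry or from the ENDF documentation), the energy-loss side of the one-phonon scattering law relates S(Q,E) to the generalized density of states g(E):

    S_{1}(Q,E) \;=\; e^{-2W(Q)}\,\frac{\hbar^{2}Q^{2}}{2M}\,\frac{g(E)}{E}\,\bigl[n(E)+1\bigr],
    \qquad
    n(E) \;=\; \frac{1}{e^{E/k_{B}T}-1},

so a measured S(Q,E) can be inverted approximately for g(E), which is the sense in which the measurement itself can serve as a scattering law.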
A comparative study of the systems for neutronics calculations used in Los Alamos Scientific Laboratory (LASL) and Argonne National Laboratory (ANL) has been performed using benchmark results available in the literature, in order to analyse the convenience of using the respective codes MINX/NJOY and ETOE/MC2-2 for performing the neutronics calculations in course at the Divisao de Estudos Avancados. (Author) [pt

Status of thermal neutron scattering data for graphite Mattes, M.; Keinert, J. At thermal neutron energies, the binding of the scattering nucleus in a solid, liquid, or gas affects the cross sections and the angular and energy distributions of the scattered neutrons. These effects are described in the thermal sub-library of evaluated files in File 7 of the ENDF-6 format. A re-evaluation of thermal neutron scattering data for carbon bound in graphite has been performed to investigate the impact of models (e.g., generalised frequency distributions) based on different experimental and theoretical data for the generation of scattering law data files S(α,β,T) and coherent elastic scattering data. Two phonon frequency distributions of graphite published in 2002 and 2004 were considered and the results compared with those based on the phonon spectra from Koppel et al. (published in 1968), on which the evaluations of ENDF/B-VI and JEFF-3.1 are based. The new frequency distributions were partly derived from ab initio simulations. Detailed comparisons with measurements of differential and integral neutron cross sections and other relevant data are reported. In addition, thermal MCNP data sets for use in the continuous-energy Monte Carlo codes MCNP and MCNPX were generated from these evaluations for different temperatures. Calculated neutron spectra were found to be in good agreement with the measurements. (author)

Neutron scattering at the high-flux isotope reactor Cable, J.W.; Chakoumakos, B.C.; Dai, P. The title facilities offer the brightest source of neutrons in the national user program. Neutron scattering experiments probe the structure and dynamics of materials in unique and complementary ways as compared to x-ray scattering methods and provide fundamental data on materials of interest to solid state physicists, chemists, biologists, polymer scientists, colloid scientists, mineralogists, and metallurgists. Instrumentation at the High-Flux Isotope Reactor includes triple-axis spectrometers for inelastic scattering experiments, a single-crystal four-circle diffractometer for crystal structure studies, a high-resolution powder diffractometer for nuclear and magnetic structure studies, a wide-angle diffractometer for dynamic powder studies and measurements of diffuse scattering in crystals, a small-angle neutron scattering (SANS) instrument used primarily to study structure-function relationships in polymers and biological macromolecules, a neutron reflectometer for studies of surface and thin-film structures, and residual stress instrumentation for determining macro- and micro-stresses in structural metals and ceramics. Research highlights of these areas will illustrate the current state of neutron science to study the physical properties of materials.

Critical magnetic scattering of polarized neutrons on iron Hetzelt, M. A new spectrometer has been built and tested.
The instrument was designed particularly for small angle scattering of polarized neutrons, whereby the degree of polarisation of the scattered neutrons can be measured. The use of polarizing neutron pipes as polarizer and analyser allows operation with a very broad wavelength spectrum (2 A 7 n/cm^2 sec) with good collimation (Δθ approximately 0.2°). The instrument is applied to the measurement of the critical magnetic scattering of polarized neutrons on an iron single crystal. For this purpose a special oven with an appropriate magnetic field configuration and a high precision in temperature has been constructed. The measured intensity distributions are in good agreement with other experiments. The critical exponent of the correlation range ξ is found to be 0.65 ± 0.06. The angle and temperature dependence of the scattered neutron polarisation could be determined with good precision. The measurements are partly in extreme contradiction to the only hitherto existing experiment of this kind, by Drabkin et al., and to assumptions in the theoretical evaluation. This contradiction is shown to be caused by the influence of multiple scattering. (orig./HPOE) [de

Studies on biological macromolecules by neutron inelastic scattering Fujiwara, Satoru; Nakagawa, Hiroshi Neutron inelastic scattering techniques, including quasielastic and elastic incoherent neutron scattering, provide unique tools to directly measure protein dynamics on a picosecond time scale. Since the protein dynamics at this time scale is indispensable to the protein functions, its elucidation is indispensable for an ultimate understanding of the protein functions. There are two complementary directions of protein dynamics studies: one is to explore the physical basis of the protein dynamics using 'model' proteins, and the other is more biology-oriented. Examples of studies on protein dynamics with neutron inelastic scattering are described. The examples of the studies in the former direction include studies on the dynamical transitions of proteins, the relationship between the protein dynamics and the hydration water dynamics, and combined analysis of the protein dynamics with molecular dynamics simulation. The examples of the studies in the latter direction include elastic incoherent and quasielastic neutron scattering studies of actin. Future prospects of the studies on protein dynamics with neutron scattering are briefly described. (author)

Development of new methods for studying nanostructures using neutron scattering The goal of this project was to develop improved instrumentation for studying the microscopic structures of materials using neutron scattering. Neutron scattering has a number of advantages for studying material structure but suffers from the well-known disadvantage that neutrons' ability to resolve structural details is usually limited by the strength of available neutron sources. We aimed to overcome this disadvantage using a new experimental technique, called Spin Echo Scattering Angle Encoding (SESAME), that makes use of the neutron's magnetism. Our goal was to show that this innovation will allow the country to make better use of the significant investment it has recently made in a new neutron source at Oak Ridge National Laboratory (ORNL) and will lead to increases in scientific knowledge that contribute to the Nation's technological infrastructure and ability to develop advanced materials and technologies.
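For reference on the quantity quoted in the critical-magnetic-scattering entry above, the correlation range ξ is conventionally parametrized by a power law in the reduced temperature, and critical scattering probes it through an Ornstein-Zernike line shape (textbook relations, not taken from the entry):

    \xi(T) \;\propto\; \left(\frac{T-T_{C}}{T_{C}}\right)^{-\nu},
    \qquad
    S(q) \;\propto\; \frac{1}{q^{2}+\xi^{-2}},

with the quoted value 0.65 ± 0.06 corresponding to the exponent ν.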
We were successful in demonstrating the technical effectiveness of the new method and established a baseline of knowledge that has allowed ORNL to start a project to implement the method on one of its neutron beam lines.

Anon. Following the historic observation of neutrinos in the mid-1950s by two Los Alamos scientists, Fred Reines and Clyde Cowan, Jr., using inverse beta decay, there has been a long and distinguished history of experimental neutrino physics at LAMPF, the Los Alamos Meson Physics Facility. LAMPF is the only meson factory to have had an experimental neutrino programme. In the late 1970s, the first LAMPF neutrino experiment used a 6-tonne water Cherenkov detector 7 metres from the beam stop. A collaboration of Yale, Los Alamos and several other institutions, this experiment searched for the forbidden decay of a muon into an electron and two neutrinos, and measured the reaction rate of a neutrino interacting with a deuteron to give two protons and an electron - the inverse of the reaction that drives the sun's primary energy source. The next LAMPF neutrino experiment, a UC Irvine/Maryland/Los Alamos collaboration, ran from 1982 through 1986 and measured the elastic scattering rate of electron neutrinos and protons, where both neutral and charged weak currents contribute. With its precision of about 15%, the experiment provided the first demonstration of (destructive) interference between the charged and neutral currents. More recent neutrino experiments at LAMPF have searched for neutrino oscillations, especially between muon- and electron-neutrinos. The newest experiment to pursue this physics (as well as oscillations in other channels) is LSND (July/August, page 10 and cover). In addition to searching for these oscillations, LSND will measure neutrino-proton elastic scattering at low momentum transfer, providing a sensitive measure of the strange quark contribution to the proton spin. LSND began taking data in August. Los Alamos physicists have also been busy in neutrino physics experiments elsewhere. One such experiment looked at the beta decay of free molecular tritium to obtain an essentially model-independent determination of the electron-neutrino mass. The

Background determination for the neutron-neutron scattering experiment at the reactor YAGUAR Muzichka, A.Yu. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Furman, W.I. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Lychagin, E.V. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Krylov, A.R. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Nekhaev, G.V. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Sharapov, E.I. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Shvetsov, V.N. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Strelkov, A.V. [Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Levakov, B.G. [Russian Federal Nuclear Center-All-Russian Research Institute of Technical Physics, PO Box 245, 456770 Snezhinsk (Russian Federation); Lyzhin, A.E. [Russian Federal Nuclear Center-All-Russian Research Institute of Technical Physics, PO Box 245, 456770 Snezhinsk (Russian Federation); Chernukhin, Yu.I. [Russian Federal Nuclear Center-All-Russian Research Institute of Technical Physics, PO Box 245, 456770 Snezhinsk (Russian Federation); Kandiev, Ya.Z.
[Russian Federal Nuclear Center-All-Russian Research Institute of Technical Physics, PO Box 245, 456770 Snezhinsk (Russian Federation); Howell, C.R. [Duke University and Triangle Universities Nuclear Laboratory, Durham, NC 27708-0308 (United States); Mitchell, G.E. [North Carolina State University, Raleigh, NC 27695-8202 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708-0308 (United States); Crawford, B.E. [Gettysburg College, Box 405, Gettysburg, PA 17325 (United States); Stephenson, S.L. [Gettysburg College, Box 405, Gettysburg, PA 17325 (United States)]. E-mail: [email protected]; Tornow, W. [Duke University and Triangle Universities Nuclear Laboratory, Durham, NC 27708-0308 (United States) The motivation and design are outlined for the experiment to measure the neutron-neutron singlet scattering length directly with thermal neutrons at the pulsed reactor YAGUAR. A statistical accuracy of 3% can be reached, though achieving the goal of an overall accuracy of 3-5% for the nn-scattering length depends on the background level. Possible sources of background are discussed in depth and the results of extensive modeling of the background are presented. Measurements performed at YAGUAR to test these background calculations are described. The experimental results indicate an anticipated background level of up to 30% relative to the expected nn effect at the maximal energy burst of the reactor. The conclusion is made that the nn experiment at YAGUAR is feasible to produce the first directly measured value for the neutron-neutron scattering length.

Small angle neutron scattering and small angle X-ray scattering ... Abstract. The morphology of carbon nanofoam samples comprising platinum nanoparticles dispersed in the matrix was characterized by small angle neutron scattering (SANS) and small angle X-ray scattering (SAXS) techniques. Results show that the structure of pores of the carbon matrix exhibits a mass (pore) fractal nature ...

Investigation of collective excitations in fluid neon by coherent neutron scattering at small scattering vectors Bell, H.G. The energy spectra of Ne studied under different temperatures and pressures with the aid of inelastic, coherent neutron scattering can be described by a scattering law derived from the basic hydrodynamic equations. The Brillouin lines found at very small momentum transfer, 0.06 Å^-1, are interpreted as collective, adiabatic pressure fluctuations. (orig./WL) [de

Monte Carlo simulation of fast neutron scattering experiments including DD-breakup neutrons Schmidt, D.; Siebert, B.R.L. The computational simulation of the deuteron breakup in a scattering experiment has been investigated. Experimental breakup spectra measured at 16 deuteron energies and at 7 angles for each energy served as the data base. Analysis of these input data and of the conditions of the scattering experiment made it possible to reduce the input data. The use of one weighted breakup spectrum is sufficient to simulate the scattering spectra at one incident neutron energy. A number of tests were carried out to prove the validity of this result. The simulation of neutron scattering on carbon, including the breakup, was compared with measured spectra. Differences between calculated and measured spectra were for the most part within the experimental uncertainties. Certain significant deviations can be attributed to erroneous scattering cross sections taken from an evaluation and used in the simulation.
Scattering on higher-lying states in 12C can be analyzed by subtracting the simulated breakup-scattering from the experimental spectra. (orig.)

Fragility of complexity biophysical systems by neutron scattering Magazu, Salvatore [Dipartimento di Fisica, Universita di Messina, P.O. Box 55, I-98166 Messina (Italy)]. E-mail: [email protected]; Migliardo, Federica [Dipartimento di Fisica, Universita di Messina, P.O. Box 55, I-98166 Messina (Italy); Bellocco, Ersilia [Dipartimento di Chimica Organica e Biologica, Universita di Messina, I-98166 Messina (Italy); Lagana, Giuseppina [Dipartimento di Chimica Organica e Biologica, Universita di Messina, I-98166 Messina (Italy); Mondelli, Claudia [CNR-INFM OGG and CRS-SOFT, c/o ILL, 6 Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France) Neutron scattering is an exceptional tool to investigate the structural and dynamical properties of systems of biophysical interest, such as proteins, enzymes, lipids and sugars. Moreover, elastic neutron scattering enhances the investigation of atomic motions in hydrated proteins in a wide temperature range and on the picosecond timescale. Homologous disaccharides, such as trehalose, maltose and sucrose, are cryptobiotic substances, since they allow many organisms to enter a 'suspended life' state, known as cryptobiosis, under extreme environmental conditions. The present paper aims to discuss the fragility degree of disaccharides, as evaluated from the temperature dependence of the mean square displacement by elastic neutron scattering, in order to link this feature with their bioprotective functions.

Software for simulation and design of neutron scattering instrumentation Bertelsen, Mads designed using the software. The Union components use a new approach to the simulation of samples in McStas. The properties of a sample are split into geometrical and material parts, simplifying user input and allowing the construction of complicated geometries such as sample environments. Multiple scattering ... from conventional choices. Simulation of neutron scattering instrumentation is used when designing instrumentation, but also to understand instrumental effects on the measured scattering data. The Monte Carlo ray-tracing package McStas is among the most popular, capable of simulating the path of each ... neutron through the instrument using an easy-to-learn language.
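A schematic illustration of the subtraction step described in the deuteron-breakup simulation entry above (the Schmidt and Siebert abstract); the arrays and numbers are placeholders, and the real analysis works per time-of-flight channel with detector response and normalization folded in:

    # Hypothetical sketch: subtract a simulated DD-breakup component from a measured
    # spectrum to isolate scattering from higher-lying states in carbon.
    import numpy as np

    measured = np.array([120.0, 150.0, 400.0, 380.0, 200.0, 90.0])      # counts per bin (placeholder)
    simulated_breakup = np.array([30.0, 45.0, 60.0, 55.0, 40.0, 20.0])  # simulated component (placeholder)

    residual = measured - simulated_breakup
    residual_err = np.sqrt(measured)  # Poisson counting error only; simulation uncertainty neglected
    print(residual, residual_err)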
The subject of the defended thesis is contributions to the McStas language in the form of the software package guide_bot and the Union components. The guide_bot package simplifies the process of optimizing neutron guides by writing the Mc...

Small angle neutron scattering (SANS) under non-equilibrium conditions Oberthur, R.C. The use of small angle neutron scattering (SANS) for the study of systems under non-equilibrium conditions is illustrated by three types of experiments in the field of polymer research: the relaxation of a system from an initial non-equilibrium state towards equilibrium; the cyclic or repetitive installation of a series of non-equilibrium states in a system; and the steady non-equilibrium state maintained by a constant dissipation of energy within the system. Characteristic times obtained in these experiments with SANS are compared with the times obtained from quasi-elastic neutron and light scattering, which yield information about the equilibrium dynamics of the system. The limits of SANS applied to non-equilibrium systems for the measurement of relaxation times at different length scales are shown and compared to the limits of quasielastic neutron and light scattering.

Neutron scattering in soft matter physics and chemistry Recent experiments in the area of soft matter science show that self-assembly on the micron scale as well as the nanometer scale can be directed chemically. This lecture illustrates how such processes can be studied using the contrast variation available in neutron scattering through isotopic replacement and the techniques of neutron small angle scattering and neutron reflectivity. Related dynamical information at nanometer resolution and on time scales between a nanosecond and a few tenths of a picosecond will become accessible with brighter neutron sources. The examples presented concern the template-induced crystallisation of zeolites, the liquid-crystal-template-induced synthesis of mesoporous materials and the structure of thin films at the air-water interface. (J.P.N.)

Spin dynamics above the Curie temperature studied by neutron scattering Steinsvoll, O.; Riste, T. Neutron scattering can in principle give information about magnetic fluctuations over the entire atomic space and time domain. The weakness of the neutron-matter interaction renders this information undistorted by the neutron probe, but at the same time puts intensity limitations on the method. A considerable number of studies on the magnetism of 3d metals have been performed at some of the larger reactor laboratories. In the regions of overlap the experimental results from the different laboratories are consistent, but the interpretations are along different lines. Among the controversial issues are itinerancy versus localization and the degree of order above T_C. In our talk we shall give an introduction to the neutron scattering method, including some of the sophisticated polarized beam methods. In the rest of the talk we shall review recent experimental results and some of the theoretical models used in their interpretation. (orig.)

Fluence-compensated down-scattered neutron imaging using the neutron imaging system at the National Ignition Facility Casey, D. T., E-mail: [email protected]; Munro, D. H.; Grim, G. P.; Landen, O. L.; Spears, B. K.; Fittinghoff, D. N.; Field, J. E.; Smalyuk, V. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Volegov, P. L.; Merrill, F. E.
[Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States) The Neutron Imaging System at the National Ignition Facility is used to observe the primary ∼14 MeV neutrons from the hotspot and down-scattered neutrons (6-12 MeV) from the assembled shell. Due to the strong spatial dependence of the primary neutron fluence through the dense shell, the down-scattered image is convolved with the primary-neutron fluence much like a backlighter profile. Using a characteristic scattering angle assumption, we estimate the primary neutron fluence and compensate the down-scattered image, which reveals information about asymmetry that is otherwise difficult to extract without invoking complicated models.

High energy neutron recoil scattering from liquid 4He Holt, R.S.; Needham, L.M.; Paoli, M.P. The neutron recoil scattering from liquid 4He at 4.2 K and 1.6 K has been observed for a momentum transfer of 150 Å^-1 using the Electron Volt Spectrometer on the pulsed neutron source ISIS. The experiment yielded mean atomic kinetic energy values of 14.8 ± 3 K at 4.2 K and 14.6 ± 3.2 K at 1.6 K, in good agreement with values obtained at lower momentum transfers. (author)

Inelastic scattering of neutrons by spin waves in terbium Bjerrum Møller, Hans; Houmann, Jens Christian Gylden Measurements of spin-wave dispersion relations for magnons propagating in symmetry directions in ferromagnetic Tb; this is the first experiment to give detailed information on magnetic excitations in the heavy rare earths. Tb was chosen for these measurements because it is one of the few rare-earth metals which does not have a very high thermal-neutron capture cross section, so that inelastic neutron scattering experiments can give satisfactory information on magnon dispersion relations.

Thermal neutron scattering studies of condensed matter under high pressures Carlile, C.J.; Salter, D.C. Although temperature has been used as a thermodynamic variable for samples in thermal neutron scattering experiments since the inception of the neutron technique, it is only in the last decade that high pressures have been utilised for this purpose. In the paper the problems particular to this field of work are outlined and a review is made of the types of high-pressure cells used and the scientific results obtained from the experiments. 103 references. (author)

Scattering and depolarization of polarized neutrons in ferrofluids Balasoiu, M.; Dokukin, E.B.; Kozhevnikov, S.V.; Nikitenko, Y.V. On the SPN-1 polarized neutron spectrometer at the IBR-2 high-flux pulsed reactor, preliminary measurements were carried out on the transmission and polarization of a neutron beam passing through a magnetic colloidal system of Fe3O4 particles in transformer oil and dodecane carriers. It was found that in ferrofluids with magnetite particles there exist effects of depolarization and nuclear-magnetic small-angle scattering that depend on the particle volume concentration and the magnitude of the external magnetic field. (author)

Evaluation of room-scattered neutrons at the JNC Tokai neutron reference field Yoshida, Tadayoshi; Tsujimura, Norio [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (Japan). Tokai Works; Oyanagi, Katsumi [Japan Radiation Engineering Co., Ltd., Hitachi, Ibaraki (Japan) Neutron reference fields for calibrating neutron-measuring devices in JNC Tokai Works are produced by using radionuclide neutron sources, 241Am-Be and 252Cf sources.
The reference field for calibration includes scattered neutrons from the material surrounding the sources and from the walls, floor and ceiling of the irradiation room. It is therefore necessary to evaluate the scattered-neutron contribution and energy spectra at the reference points. Spectral measurements were performed with a set of Bonner multi-sphere spectrometers, and the reference fields were characterized in terms of spectral composition and the fractions of room-scattered neutrons. In addition, two techniques stated in ISO 10647, the shadow-cone method and the polynomial fit method, for correcting the contributions of the room-scattered neutrons to the readings of neutron survey instruments were compared. It was found that the two methods gave an equivalent result within a deviation of 3.3% at source-to-detector distances from 50 cm to 500 cm. (author)

Chemical shift of neutron resonances and some ideas on neutron resonances and scattering theory Ignatovich, V.K. The dependence of the positions of neutron resonances in nuclei in condensed matter on the chemical environment is considered. A possibility of a theoretical description of neutron resonances, different from R-matrix theory, is investigated. Some contradictions of standard scattering theory are discussed and a new approach without these contradictions is formulated. [ru

The neutron spin-echo spectrometer: a new high resolution technique in neutron scattering Nicholson, L.K. The neutron spin-echo (NSE) spectrometer provides the highest energy resolution available in neutron scattering experiments. The article describes the principles behind the first NSE spectrometer (at the Institut Laue-Langevin, Grenoble, France) and, as an example of one of its applications, some recent results on polymer chain dynamics are presented. (author)

Performance of the upgraded ultracold neutron source at Los Alamos National Laboratory and its implication for a possible neutron electric dipole moment experiment Ito, T. M.; Adamek, E. R.; Callahan, N. B.; Choi, J. H.; Clayton, S. M.; Cude-Woods, C.; Currie, S.; Ding, X.; Fellers, D. E.; Geltenbort, P.; Lamoreaux, S. K.; Liu, C.-Y.; MacDonald, S.; Makela, M.; Morris, C. L.; Pattie, R. W.; Ramsey, J. C.; Salvat, D. J.; Saunders, A.; Sharapov, E. I.; Sjue, S.; Sprow, A. P.; Tang, Z.; Weaver, H. L.; Wei, W.; Young, A. R.
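To make the shadow-cone correction mentioned in the JNC Tokai entry above concrete, a minimal sketch; the numbers are placeholders, and the full ISO procedure also involves air-attenuation and geometry corrections that are omitted here:

    # Hypothetical sketch of the shadow-cone method: the reading taken with a shadow
    # cone blocking the direct source neutrons is attributed to room-scattered
    # neutrons and subtracted from the reading taken without the cone.
    def direct_component(total_reading, cone_reading):
        return total_reading - cone_reading

    total = 8.4       # survey-instrument reading, cone removed (placeholder units)
    scattered = 1.9   # reading with the shadow cone in place (placeholder units)
    print(direct_component(total, scattered))  # response to direct source neutrons only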
The ultracold neutron (UCN) source at Los Alamos National Laboratory (LANL), which uses solid deuterium as the UCN converter and is driven by accelerator spallation neutrons, has been successfully operated for over 10 years, providing UCN to various experiments, as the first production UCN source based on the superthermal process. It has recently undergone a major upgrade. This paper describes the design and performance of the upgraded LANL UCN source. Measurements of the cold neutron spectrum and UCN density are presented and compared to Monte Carlo predictions. The source is shown to perform as modeled. The UCN density measured at the exit of the biological shield was 184(32) UCN/cm^3, a fourfold increase from the highest previously reported. The polarized UCN density stored in an external chamber was measured to be 39(7) UCN/cm^3, which is sufficient to perform an experiment to search for a nonzero neutron electric dipole moment with a one-standard-deviation sensitivity of σ(d_n) = 3 × 10^-27 e cm.

Thermal neutron scattering cross sections of beryllium and magnesium oxides Al-Qasir, Iyad; Jisrawi, Najeh; Gillette, Victor; Qteish, Abdallah Highlights: • Neutron thermalization in BeO and MgO was studied using ab initio lattice dynamics. • The BeO phonon density of states used to generate the current ENDF library has issues. • The BeO cross sections can provide a more accurate ENDF library than the current one. • For MgO an ENDF library is lacking: a new, accurate one can be built from our results. • BeO is a better filter than MgO, especially when cooled down to 77 K. - Abstract: Alkaline-earth beryllium and magnesium oxides are fundamental materials in the nuclear industry and in thermal neutron scattering applications. The calculation of thermal neutron scattering cross sections requires a detailed knowledge of the lattice dynamics of the scattering medium. The vibrational properties of BeO and MgO are studied using first-principles calculations within the framework of density functional perturbation theory. Excellent agreement between the calculated phonon dispersion relations and the experimental data has been obtained. The phonon densities of states are utilized to calculate the scattering laws using the incoherent approximation. For BeO, there are concerns about the accuracy of the phonon density of states used to generate the current ENDF/B-VII.1 libraries. These concerns are identified, and their influence on the scattering law and inelastic scattering cross section is analyzed. For MgO, no up-to-date thermal neutron scattering cross section ENDF library is available, and our results represent a potential one for use in different applications. Moreover, the BeO and MgO efficiencies as neutron filters at different temperatures are investigated. BeO is found to be a better filter than MgO, especially when cooled down, and cooling MgO below 77 K does not significantly improve the filter's efficiency.

Neutron scattering lengths of molten metals determined by gravity refractometry Reiner, G.; Waschkowski, W.; Koester, L. Very accurate values of the coherent neutron scattering lengths of the heavy elements Bi and Pb are important quantities for the investigation of the electric interactions of neutrons with atoms. We therefore performed a series of experiments to determine accurate scattering lengths by means of neutron gravity refractometry on liquid mirrors of molten metals. The possible perturbations of the necessary reflection measurements are discussed in detail.
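For context on the sensitivity figure quoted in the ultracold-neutron-source entry above, the usual counting-statistics estimate for a Ramsey-type electric dipole moment measurement is (textbook expression, not from the entry; α is the visibility, E the electric field, T the free-precession time and N the total number of detected UCN):

    \sigma(d_{n}) \;\approx\; \frac{\hbar}{2\,\alpha\,E\,T\,\sqrt{N}} ,

so the quoted 3 × 10^-27 e cm follows from the stored polarized UCN density through N.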
After taking into account the uncertainties and corrections associated with observable perturbations we obtained the following values for bound atoms: b(Bi) = 8.532 ± 0.002 fm, b(Pb) = 9.405 ± 0.003 fm, b(Tl) = 8.776 ± 0.005 fm, b(Sn) = 6.225 ± 0.002 fm and b(Ga) = 7.288 ± 0.002 fm. These data are corrected for the local field effect occurring in the reflection on liquids. The recently reported results for the neutron's electric polarizability and the neutron-electron scattering length are supported by the Bi and Pb scattering lengths of this work. (orig.) Activity report on neutron scattering research. V. 1, 1994 Fujii, Y.; Oohara, Y. In April 1993, the Neutron Scattering Laboratory attached to the Institute for Solid State Physics, University of Tokyo, was newly established in Tokai, Ibaraki Prefecture, to promote nationwide users' programs for utilizing the university-owned neutron instruments installed at the JRR-3M reactor of the Japan Atomic Energy Research Institute. This upgraded reactor (20 MW, with a cold source installed) has drastically expanded the number of users and research areas since 1990, when it became operational.
Currently, 8 and 3 of the 18 new spectrometers at the JRR-3M are owned by ISSP and Tohoku University, respectively, while the remaining 7 spectrometers belong to JAERI. In addition, 3 conventional spectrometers in the 30-year-old JRR-2 reactor (10 MW) have also supported research activities. This is the first issue of the 'Activity report on neutron scattering research', and it is to be published annually. In this report, the brief history of neutron scattering research, the users' programs, the committees, the neutron scattering instruments available at the JRR-3M and the JRR-2, the activity reports on structures and excitations, magnetism, superconductors, liquids and glasses, materials science, polymers, biology and instrumentation, and the publication list are presented. (K.I.) Quasielastic neutron scattering in biology: Theory and applications. Vural, Derya; Hu, Xiaohu; Lindner, Benjamin; Jain, Nitin; Miao, Yinglong; Cheng, Xiaolin; Liu, Zhuo; Hong, Liang; Smith, Jeremy C Neutrons scatter quasielastically from stochastic, diffusive processes, such as overdamped vibrations, localized diffusion and transitions between energy minima. In biological systems, such as proteins and membranes, these relaxation processes are of considerable physical interest. We review here recent methodological advances and applications of quasielastic neutron scattering (QENS) in biology, concentrating on the role of molecular dynamics simulation in generating data with which neutron profiles can be unambiguously interpreted. We examine the use of massively parallel computers in calculating scattering functions, and the application of Markov state modeling. The decomposition of MD-derived neutron dynamic susceptibilities is described, and its use in combination with NMR spectroscopy. We discuss dynamics at very long times, including approximations to the infinite-time mean-square displacement and nonequilibrium aspects of single-protein dynamics. Finally, we examine how neutron scattering and MD can be combined to provide information on lipid nanodomains. This article is part of a Special Issue entitled "Science for Life" (Guest Editors: Dr. Austen Angell, Dr. Salvatore Magazù and Dr. Federica Migliardo). The Manuel Lujan Jr. Neutron Scattering Center (LANSCE) experiment reports 1993 run cycle. Progress report Farrer, R.; Longshore, A. [comps. This year the Manuel Lujan Jr. Neutron Scattering Center (LANSCE) ran an informal user program because the US Department of Energy planned to close LANSCE in FY1994. As a result, an advisory committee recommended that LANSCE scientists and their collaborators complete work in progress. At LANSCE, neutrons are produced by spallation when a pulsed, 800-MeV proton beam impinges on a tungsten target. The proton pulses are provided by the Clinton P. Anderson Meson Physics Facility (LAMPF) accelerator and an associated Proton Storage Ring (PSR), which can alter the intensity, time structure, and repetition rate of the pulses. The LAMPF protons of Line D are shared between the LANSCE target and the Weapons Neutron Research (WNR) facility, which results in LANSCE spectrometers being available to external users for unclassified research for about 80% of each annual LAMPF run cycle. Measurements of interest to the Los Alamos National Laboratory (LANL) may also be performed and may occupy up to an additional 20% of the available beam time. These experiments are reviewed by an internal program advisory committee.
This year, a total of 127 proposals were submitted. The proposed experiments involved 229 scientists, 57 of whom visited LANSCE to participate in measurements. In addition, 3 (nuclear physics) participating research teams, comprising 44 scientists, carried out experiments at LANSCE. Instrument beam time was again oversubscribed, with 552 total days requested and 473 available for allocation. The performance of neutron scattering spectrometers at a long-pulse spallation source The first conclusion the author wants to draw is that comparison of the performance of neutron scattering spectrometers at CW and pulsed sources is simpler for long-pulsed sources than it is for the short-pulse variety. Even though detailed instrument design and assessment will require Monte Carlo simulations (which have already been performed at Los Alamos for SANS and reflectometry), simple arguments are sufficient to assess the approximate performance of spectrometers at an LPSS and to support the contention that a 1 MW long-pulse source can provide attractive performance, especially for instrumentation designed for soft-condensed-matter science. Because coupled moderators can be exploited at such a source, its time average cold flux is equivalent to that of a research reactor with a power of about 15 MW, so only a factor of 4 gain from source pulsing is necessary to obtain performance that is comparable with the ILL. In favorable cases, the gain from pulsing can be even more than this, approaching the limit set by the peak flux, giving about 4 times the performance of the ILL. Because of its low duty factor, an LPSS provides the greatest performance gains for relatively low resolution experiments with cold neutrons. It should thus be considered complementary to short pulse sources which are most effective for high resolution experiments using thermal or epithermal neutrons. Data reduction for neutron scattering from plutonium samples.
Final report An experiment performed in August 1993 on the Low-Q Diffractometer (LQD) at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) was designed to study the formation and annealing of He bubbles in aged ²³⁹Pu metal. Significant complications arise in the reduction of the data because of the very high total neutron cross section of ²³⁹Pu, and also because the samples are difficult to make uniform and to characterize. This report gives the details of the data and the data reduction procedures, presents the resulting scattering patterns in terms of macroscopic cross section as a function of momentum transfer, and suggests improvements for future experiments. Spin observables in proton-neutron scattering at intermediate energy Spinka, H. A summary of np elastic scattering spin measurements at intermediate energy is given. Preliminary results from a LAMPF experiment to measure free neutron-proton elastic scattering spin-spin correlation parameters are presented. A longitudinally polarized proton target was used. These measurements are part of a program to determine the neutron-proton amplitudes in a model-independent fashion at 500, 650, and 800 MeV. Some new proton-proton total cross sections in pure helicity states (Δσ_L(pp)) near 3 GeV/c are also given. 37 refs., 2 figs The lineshape of inelastic neutron scattering in the relaxor ferroelectrics Ivanov, M.A.; Kozlovski, M.; Piesiewicz, T.; Stephanovich, V.A.; Weron, A.; Wymyslowski, A. The possibilities of theoretical and experimental investigation of relaxor ferroelectrics by the inelastic neutron scattering method are considered. A simple model for describing the peculiarities of inelastic neutron scattering lineshapes in ferroelectric relaxors is suggested. The essence of this model is to consider the interaction of the phonon subsystem of relaxor ferroelectrics with the ensemble of defects and impurities. A modification of the Latin Hypercube Sampling (LHS) method is presented, and the optimization of experiment planning by the modified LHS method is considered. [ru] An inelastic neutron scattering study of hematite nanoparticles Hansen, Mikkel Fougt; Klausen, Stine Nyborg; Lefmann, K We have studied the magnetic dynamics in nanocrystalline hematite by inelastic neutron scattering at the high-resolution time-of-flight spectrometer IRIS at ISIS. Compared to previous inelastic neutron scattering experiments, an improvement of the resolution function is achieved and more detailed … moment at the antiferromagnetic Bragg reflection. We have studied different weightings of the particle size distribution. The data and their temperature dependence can with good agreement be interpreted on the basis of the Néel-Brown theory for superparamagnetic relaxation and a model for the collective …
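The hematite-nanoparticle record above invokes the Néel-Brown theory for superparamagnetic relaxation. Purely as an illustrative aid (the anisotropy constant, particle size and attempt time below are assumed values, not parameters from that study), the Arrhenius-type relaxation time at the heart of that theory can be sketched as follows:

```python
import math

def neel_brown_relaxation_time(K, V, T, tau0=1e-10):
    """Arrhenius (Neel-Brown) relaxation time tau = tau0 * exp(K * V / (kB * T)).

    K    : effective uniaxial anisotropy energy density [J/m^3]
    V    : particle volume [m^3]
    T    : absolute temperature [K]
    tau0 : attempt time [s]
    """
    kB = 1.380649e-23  # Boltzmann constant [J/K]
    return tau0 * math.exp(K * V / (kB * T))

# Illustrative parameters only (not values taken from the record above):
K = 1.2e4                          # J/m^3
d = 15e-9                          # 15 nm particle diameter
V = math.pi * d**3 / 6.0           # sphere volume
for T in (50, 100, 200, 300):
    print(f"T = {T:3d} K  tau = {neel_brown_relaxation_time(K, V, T):.3e} s")
```

The strong exponential dependence on particle volume and temperature is why the relaxation is observable within the energy window of a given spectrometer only over a limited temperature range.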
Fast-neutron total and scattering cross sections of niobium Neutron total cross sections of niobium were measured from approximately 0.7 to 4.5 MeV at intervals of less than or equal to 50 keV with broad resolution. Differential elastic-scattering cross sections were measured from approximately 1.5 to 4.0 MeV at intervals of 0.1 to 0.2 MeV and at 10 to 20 scattering angles distributed between approximately 20 and 160 degrees. Inelastically scattered neutrons, corresponding to the excitation of levels at 788 ± 23, 982 ± 17, 1088 ± 27, 1335 ± 35, 1504 ± 30, 1697 ± 19, 1971 ± 22, 2176 ± 28, 2456 ± (.), and 2581 ± (.) keV, were observed. An optical-statistical model, giving a good description of the observables, was deduced from the measured differential elastic-scattering cross sections. The experimental results were compared with the respective evaluated quantities given in ENDF/B-V. Anomalous neutron Compton scattering cross section in zirconium hydride Abdul-Redah, T.; Krzystyniak, M.; Mayers, J.; Chatzidimitriou-Dreismann, C.A. In the last few years we have observed a shortfall of intensity of neutrons scattered from protons in various materials, including metal-hydrogen systems, using neutron Compton scattering (NCS) on the VESUVIO instrument (ISIS, UK). This anomaly has been attributed to the existence of short-lived quantum entangled states of protons in these materials. Here we report on results of very recent NCS measurements on ZrH₂ at room temperature. Here, too, an anomalous shortfall of scattering intensity due to protons is observed. In contrast to previous experiments on NbH₀.₈, the anomalies found in ZrH₂ are independent of the scattering angle (or momentum transfer). These different results are discussed in the light of recent criticisms and experimental tests related to the data analysis procedure on VESUVIO.
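For the neutron-Compton-scattering record above, the reported anomaly is a shortfall of the hydrogen peak intensity relative to the conventional expectation, which is simply the stoichiometry-weighted ratio of bound scattering cross sections. A minimal sketch of that expectation is given below; the cross-section values are approximate tabulated bound values quoted only for illustration, not numbers taken from the abstract.

```python
def expected_peak_ratio(n_h, sigma_h, n_m, sigma_m):
    """Conventional expectation for the ratio of NCS peak intensities:
    I_H / I_M = (n_H * sigma_H) / (n_M * sigma_M), i.e. atoms per formula unit
    times the bound scattering cross section of each species."""
    return (n_h * sigma_h) / (n_m * sigma_m)

# ZrH2: 2 hydrogen atoms per zirconium; cross sections in barns are
# approximate tabulated bound values, used here only for illustration.
sigma_H, sigma_Zr = 82.0, 6.5
print(f"expected I_H / I_Zr ~ {expected_peak_ratio(2, sigma_H, 1, sigma_Zr):.1f}")
```

An observed ratio that falls measurably below this kind of estimate is what the record describes as the anomalous shortfall.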
Status of the WNR/PSR at Los Alamos A proton storage ring is presently under construction at Los Alamos for initial operation in 1985 to provide the world's highest peak neutron flux for neutron scattering experiments. The operational WNR pulsed neutron source is in use for TOF instrument development and condensed matter research. Experimental results have been obtained in incoherent inelastic scattering, liquids and powder diffraction, single crystal diffraction and eV spectroscopy using nuclear resonances. Technical problems being addressed include chopper phasing, scintillator detector development, shielding and collimation. A crystal analyzer spectrometer in the constant-Q configuration is being assembled. The long-range plan for the WNR/PSR facility is described. Neutron scattering studies of biological molecules … conditions of temperature, pressure or solvent environment for survival … scale that depends on the scattering vector range and energy resolution of the instrument … the structures) are good indicators of global evolutionary adaptation mechanisms. Ultra-small-angle neutron scattering. History, developments and applications Koizumi, Satoshi; Yamaguchi, Daisuke Ultra-small-angle neutron scattering (USANS), a scattering method observing in the q-region of q = 10⁻³ nm⁻¹, was initiated with the double-crystal (Bonse-Hart) method. Recently, a focusing USANS method was developed by combining a pinhole-type spectrometer and focusing lenses. These two methods, which are complementary to each other, were employed to achieve wide-q observations on microbial cellulose, the actin cytoskeleton, tire, and the membrane-electrolyte assembly of a fuel cell. (author) VLAD for epithermal neutron scattering experiments at large energy transfers Tardocchi, M; Gorini, G; Perelli-Cippo, E; Andreani, C; Imberti, S; Pietropaolo, A; Senesi, R; Rhodes, N R; Schooneveld, E M The Very Low Angle Detector (VLAD) bank will extend the kinematical region covered by today's epithermal neutron scattering experiments to low momentum transfer together with large energy transfer. In this paper the design of VLAD is presented together with Monte Carlo simulations of the detector performance. The results of tests made with prototype VLAD detectors are also presented, confirming the usefulness of the Resonance Detector for measurements at very low scattering angles. Small angle neutron scattering from hydrated cement pastes Sabine, T.M.; Bertram, W.K.; Aldridge, L.P. Small angle neutron scattering (SANS) was used to study the microstructure of hydrating cement made with and without silica fume. Some significant differences were found between the SANS spectra of pastes made from OPC (ordinary Portland cement) and DSP (made with silica fume and superplasticiser). The SANS spectra are interpreted in terms of scattering from simple particles.
Particle growth was monitored during hydration and it was found that the growth correlated with the heat of hydration of the cement. Development of neutron detectors for neutron scattering experiments Moon, Myungkook; Kim, Jongyul; Kim, Jeong ho; Lee, Suhyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Changhwy [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of) Various kinds of detectors are used in accordance with the experimental purpose, such as zero-dimensional detectors and 1-D or 2-D position-sensitive detectors. Most neutron detectors use He-3 gas because of its high neutron sensitivity. Since the He-3 supply shortage that took place in early 2010, various alternatives to He-3 detectors have been developed for other neutron applications as well. We have developed a new type of alternative detector on the basis of He-3 detector technology. Although B-10 has lower neutron detection efficiency than He-3, this can be compensated by the use of multiple B-10 layers. In this presentation, we introduce both the detectors already developed and those still under development. Various types of detector were successfully developed, and the results of technical performance tests are promising. Even though the detection efficiency of the B-10 detector is lower than that of He-3, continued research and development is needed because He-3 is currently not available. Optical diagnostics based on elastic scattering: Recent clinical demonstrations with the Los Alamos Optical Biopsy System Bigio, I.J.; Loree, T.R.; Mourant, J.; Shimada, T. [Los Alamos National Lab., NM (United States); Story-Held, K.; Glickman, R.D. [Texas Univ. Health Science Center, San Antonio, TX (United States). Dept. of Ophthalmology; Conn, R. [Lovelace Medical Center, Albuquerque, NM (United States). Dept. of Urology A non-invasive diagnostic tool that could identify malignancy in situ and in real time would have a major impact on the detection and treatment of cancer. We have developed and are testing early prototypes of an optical biopsy system (OBS) for detection of cancer and other tissue pathologies. The OBS invokes a unique approach to optical diagnosis of tissue pathologies based on the elastic scattering properties, over a wide range of wavelengths, of the microscopic structure of the tissue. The use of elastic scattering as the key to optical tissue diagnostics in the OBS is based on the fact that many tissue pathologies, including a majority of cancer forms, manifest significant architectural changes at the cellular and sub-cellular level. Since the cellular components that cause elastic scattering have dimensions typically on the order of visible to near-IR wavelengths, the elastic (Mie) scattering properties will be strongly wavelength dependent. Thus, morphology and size changes can be expected to cause significant changes in an optical signature that is derived from the wavelength dependence of elastic scattering. The data acquisition and storage/display time with the OBS instrument is approximately 1 second. Thus, in addition to the reduced invasiveness of this technique compared with current state-of-the-art methods (surgical biopsy and pathology analysis), the OBS offers the possibility of impressively faster diagnostic assessment. The OBS employs a small fiber-optic probe that is amenable to use with any endoscope, catheter or hypodermic, or to direct surface examination (e.g. as in skin cancer or cervical cancer).
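As a back-of-the-envelope illustration of the point made in the detector-development record above, that stacking several B-10 converter layers can make up for the lower efficiency of a single layer, the following sketch may help; the per-layer efficiency and the assumption that undetected neutrons pass through unattenuated are simplifications for illustration only.

```python
def stacked_efficiency(per_layer_eff, n_layers):
    """Combined detection efficiency of n converter layers, assuming each layer
    independently detects a fraction per_layer_eff of the neutrons reaching it
    and lets the rest pass through unattenuated (a simplification)."""
    return 1.0 - (1.0 - per_layer_eff) ** n_layers

# Assumed single-layer efficiency of 5% -- an illustrative number only.
for n in (1, 5, 10, 20, 40):
    print(f"{n:2d} layer(s) -> combined efficiency {stacked_efficiency(0.05, n):.1%}")
```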
It has been tested in vitro on animal and human tissue samples, and clinical testing in vivo is currently in progress. Neutron Scattering Investigations of Correlated Electron Systems and Neutron Instrumentation Holm, Sonja Lindahl Neutrons are a unique probe for studying the atomic and molecular structure and dynamics of materials. Even though neutrons are very expensive to produce, the advantages neutrons provide overshadow the price. As neutrons interact weakly with materials compared to many other probes, e.g. electrons or photons … contains antiferromagnetically coupled Cu²⁺ S = 1/2 ions forming truncated 24-spin cube clusters of linked triangles. The clusters in boleite afford a situation intermediate between molecular and bulk magnetism, accessible to both experiment and numerical theory, in which a spin liquid can be studied … the impact of the time structure (pulse length and repetition frequency) choice for the ESS are appended. McStas simulations of a low-resolution cold powder diffractometer and a high-resolution thermal powder diffractometer with wavelength frame multiplication have been carried out for 20 different settings … Kartini Research Reactor prospective studies for neutron scattering application Widarto The Kartini Research Reactor (KRR) is located in the Yogyakarta Nuclear Research Center, Yogyakarta, Indonesia. The reactor is operated at 100 kW thermal power and is used for research, experiments and training in nuclear technology. There are 4 beam ports and 1 thermal column available at the reactor. Each beam port has a thermal neutron flux of around 10⁷ n/cm²·s and is used for a subcritical assembly, neutron radiography studies and Neutron Activation Analysis (NAA). A neutron collimator has been designed for the piercing radial beam port, and the calculated collimated neutron flux is around 10⁹ n/cm²·s. This paper describes the experimental facilities and parameters of the Kartini research reactor and, furthermore, the prospective studies for neutron scattering applications. The purpose of this paper is to optimize utilization of the beam port facilities and enhance staff expertise. The special characteristics of the beam ports and the preliminary activities regarding neutron scattering studies at the KRR are presented. (author) Latest developments of neutron scattering instrumentation at the Juelich Centre for Neutron Science The Jülich Centre for Neutron Science (JCNS) operates a number of world-class neutron scattering instruments situated at the most powerful and advanced neutron sources (FRM II, ILL and SNS) and continuously undertakes significant efforts in development and upgrades to keep this instrumentation in line with continuously changing scientific requirements. These developments are mostly based upon the latest progress in neutron optics and polarized neutron techniques. For example, the low-Q limit of the suite of small-angle scattering instruments has been extended to 4·10⁻⁵ Å⁻¹ by the successful use of focusing optics. A new generation of correction elements for the neutron spin-echo spectrometer has allowed the full available field integral to be used, thus pushing the instrument resolution further. Significant progress has been achieved in the development of ³He neutron spin filters for wide-angle polarization analysis in off-specular reflectometry and (grazing incidence) small-angle neutron scattering, e.g.
the on-beam polarization of ³He in large cells allows a high neutron beam polarization to be achieved without any degradation over time. Wide-Q-range polarization analysis using ³He neutron spin filters has been implemented for small-angle neutron scattering, leading to a reduction of up to 100 times in the intrinsic incoherent background from non-deuterated biological molecules. Work on wide-angle XYZ magnetic cavities (Magic PASTIS) will also be presented. (author) Steel research using neutron beam techniques. In-situ neutron diffraction, small-angle neutron scattering and residual stress analysis Sueyoshi, Hitoshi; Ishikawa, Nobuyuki; Yamada, Katsumi; Sato, Kaoru; Nakagaito, Tatsuya; Matsuda, Hiroshi; Arakaki, Yu; Tomota, Yo Recently, neutron beam techniques have been applied to steel research and industrial applications. In particular, neutron diffraction is a powerful non-destructive method that can analyze phase transformation and residual stress inside steel. Small-angle neutron scattering is also an effective method for the quantitative evaluation of microstructures inside steel. In this study, in-situ neutron diffraction measurements during tensile testing and heat treatment were conducted in order to investigate the deformation and transformation behaviors of TRIP steels. Small-angle neutron scattering measurements of TRIP steels were also conducted. The neutron diffraction analysis was then conducted on a high-strength steel weld joint in order to investigate the effect of the residual stress distribution on weld cracking. (author) Grazing incidence polarized neutron scattering in reflection … (journal of physics, January 2012, pp. 1–58; only fragments of this record are recoverable). Neutron-scattering studies of magnetic superconductors Sinha, S.K.; Crabtree, G.W.; Hinks, D.G.; Mook, H.A.; Pringle, O.A. Results obtained in the last few years by neutron diffraction on the nature of the magnetic ordering in magnetic superconductors are reviewed. Emphasis is given to studies of the complex intermediate phase in ferromagnetic superconductors where both superconductivity and ferromagnetism appear to coexist. Optics for Advanced Neutron Imaging and Scattering Moncton, David E.; Khaykovich, Boris During the report period, we continued the work as outlined in the original proposal. We have analyzed potential optical designs of Wolter mirrors for the neutron-imaging instrument VENUS, which is under construction at the SNS. In parallel, we have conducted the initial polarized imaging experiment at Helmholtz-Zentrum Berlin, one of the very few polarized-imaging facilities currently available worldwide. Investigation of static and dynamic properties of condensed matter by using neutron scattering Davidovic, M. Possibilities of using neutron scattering for investigating microscopic properties of materials are analyzed. Basic neutron scattering theory is presented, together with its use in structure and dynamics analyses of condensed systems. (author) Scattered Neutron Tomography Based on a Neutron Transport Inverse Problem William Charlton Neutron radiography and computed tomography are commonly used techniques to non-destructively examine materials.
Tomography refers to the cross-sectional imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions. Anomalous neutron scattering in nuclear-polarized media Bashkin, E.P. A novel inelastic scattering exchange mechanism involving spin flip is considered for slow neutrons moving through a nuclear-polarized medium. The scattering is accompanied by the emission or absorption of thermal fluctuations of the transverse magnetization of the medium. The main role in the fluctuations is played by the weakly decaying Larmor precession of the nuclear spins in an external magnetic field. Under 'giant opalescence' conditions the effect is enormous and the respective cross sections significantly exceed those for ordinary elastic scattering. Thus, for ²⁹Si and ³He in typical experimental conditions the cross sections for the inelastic processes are of the order of 10⁵-10⁶ barn. A Neutron Scattering Study of Collective Excitations in Superfluid Helium Graf, E. H.; Minkiewicz, V. J.; Bjerrum Møller, Hans Extensive inelastic-neutron-scattering experiments have been performed on superfluid helium over a wide range of energy and momentum transfers. A high-resolution study has been made of the pressure dependence of the single-excitation scattering at the first maximum of the dispersion curve over … of the multiexcitation scattering was also studied. It is shown that the multiphonon spectrum of a simple Debye solid with the phonon dispersion and single-excitation cross section of superfluid helium qualitatively reproduces these data … Neutron scattering facilities at China Institute of Atomic Energy. Present and future situations Ye, C.T.; Gou, C.; Yang, T.H. The 15 MW Heavy Water Research Reactor (HWRR) at CIAE in Beijing is the only neutron source available for neutron scattering experiments in China at present. So far, a total of 5 neutron scattering spectrometers are installed at 4 beam tubes. A new 60 MW research reactor, the China Advanced Research Reactor (CARR), is now being built at CIAE to meet the increasing demand of neutron scattering research in China. A brief description of the HWRR, the presently existing neutron scattering equipment at the HWRR, CARR, and the neutron scattering facilities to be installed at CARR is presented. (J.P.N.) Small angle neutron scattering studies of mixed micelles of sodium … The aqueous solutions of sodium cumene sulphonate (NaCS) and its mixtures with each of cetyl trimethylammonium bromide (CTAB) and sodium dodecyl sulphate (SDS) are characterized by small angle neutron scattering (SANS). NaCS when added to CTAB solution leads to the formation of long rod-shaped micelles with ... Spin-Echo Small-Angle Neutron Scattering Development Uca, O. The Spin-Echo Small-Angle Neutron Scattering (SESANS) instrument is a novel SANS technique which enables one to characterize distances from a few nanometers up to the micron range. The most striking difference between normal SANS and SESANS is that in SESANS one gets information in real space, whereas … Neutron transport in two dissimilar media with anisotropic scattering Burkart, A.R.; Ishiguro, Y.; Siewert, C.E.
The elementary solutions of the one-speed neutron-transport equation with linearly anisotropic scattering are used in conjunction with Chandrasekhar's invariance principles to solve in a concise manner the Milne problem for two adjoining half-spaces and the critical reactor problem for a reflected slab. Inelastic Neutron Scattering Investigations of the Magnetic Excitations Feile, R; Kjems, Jørgen; Hauser, A. The magnetic excitations perpendicular to the antiferromagnetic chains in CsVX₃ (X = Cl, Br, I) have been measured in the ordered state by inelastic neutron scattering. The dispersion relations and intensity distributions are those expected for ordinary spin waves in a triangular xy-model … Small-angle neutron scattering studies on water soluble complexes … by small-angle neutron scattering. SANS data showed a positive indication of the formation of RCP-SDS complexes. Even though the complete structure of the polyion complexes could not be ascertained, the results obtained give us information on the local structure in these polymer-surfactant systems. The data were ... One-phonon scattering of ultra cold neutrons in copper Holas, A. Experiments with ultra cold neutrons (UCN) showed that their lifetime in a closed vessel is much smaller than expected. In order to explain this phenomenon, many different mechanisms leading to heating of UCN were proposed, among other things one-phonon coherent inelastic scattering (with phonon absorption). This paper shows quantitatively the contribution of this process to the total heating of UCN. Spin-wave and critical neutron scattering from chromium Als-Nielsen, Jens Aage; Axe, J.D.; Shirane, G. Chromium and its dilute alloys are unique examples of magnetism caused by itinerant electrons. The magnetic excitations have been studied by inelastic neutron scattering using a high-resolution triple-axis spectrometer. Spin-wave peaks in q scans at constant energy transfer ℏω could, in general … A national facility for small angle neutron scattering Buyers, W.J.L.; Katsaras, J.; Mellors, W.; Potter, M.M.; Powell, B.M.; Rogge, R.B.; Root, J.H.; Tennant, D.C.; Tun, Z. A world-class small angle neutron scattering (SANS) facility is proposed for Canada. It will provide users from the fields of biology, chemistry, physics, materials science and engineering with a uniquely powerful tool for investigating microstructural properties whose length scales lie in the optical to atomic range. (author). 7 refs Neutron scattering and the 1994 Nobel Physics Prize Sun Xiangdong Neutron scattering is an efficient method for probing the microstructure of matter, by which we can study, for example, details of the phonon spectrum in solids and the isotopic effect. Bertram N. Brockhouse and Clifford G. Shull earned the Nobel Physics Prize in 1994 for their significant contributions in this domain. Benchmarking the inelastic neutron scattering soil carbon method The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...
Characterization of alumina using small angle neutron scattering (SANS) Megat Harun Al Rashidn Megat Ahmad; Abdul Aziz Mohamed; Azmi Ibrahim; Che Seman Mahmood; Edy Giri Rachman Putra; Muhammad Rawi Muhammad Zin; Razali Kassim; Rafhayudi Jamro Alumina powder was synthesized from an aluminium precursor and studied using the small angle neutron scattering (SANS) technique, complemented with transmission electron microscopy (TEM). XRD measurement confirmed that the alumina produced was high-purity and highly crystalline α-phase. SANS examination indicates the formation of mass fractal microstructures with a fractal dimension of about 2.8 in the alumina powder. (Author) Studies of magnetism with inelastic scattering of cold neutrons Jacrot, B. Inelastic scattering of cold neutrons can be used to study some aspects of magnetism: spin waves, exchange integrals, and the vicinity of the Curie point. After a description of the experimental set-up, several experiments in the fields mentioned above are analysed. (author) [fr] Abdul Aziz Bin Mohamed; Azali Bin Muhamad; Shukri Bin Mohd The current status of SANS (Small Angle Neutron Scattering facility) activities in Malaysia is presented. Much work needs to be done on system improvement before the system can be confidently used as an effective quality-control tool in the materials production and engineering sectors. (author) Neutron-proton elastic scattering at high energies Saleem, M.; Fazal-e-Aleem (Punjab Univ., Lahore (Pakistan). Dept. of Physics) The most recent measurements of the differential and total cross sections of neutron-proton elastic scattering from 70 to 400 GeV/c have been explained by using the rho as a simple pole and the pomeron as a dipole. Predictions are also made regarding the energy dependence of the dip and bump structure in the angular distribution. High energy spin waves in iron measured by neutron scattering Boothroyd, A.T.; Paul, D.M.; Mook, H.A. We present new results for the spin-wave dispersion relation measured along the [ζζ0] direction in bcc iron (12% silicon) by time-of-flight neutron inelastic scattering. The excitations were followed to the zone boundary, where they are spread over a range of energies around 300 meV. 6 refs., 2 figs Small angle neutron scattering studies on the interaction of cationic … The structure of the protein-surfactant complex of bovine serum albumin (BSA) and cationic surfactants has been studied by small angle neutron scattering. At low concentrations, the CTAB monomers are observed to bind to the protein leading to an increase in its size. On the other hand at high concentrations, surfactant ...
Small-angle neutron scattering studies of nonionic surfactant: Effect … Micellar solutions of the nonionic dodecyl oligo(ethylene oxide) surfactant decaoxyethylene monododecyl ether [CH₃(CH₂)₁₁(OCH₂CH₂)₁₀OH], C₁₂E₁₀, in D₂O have been analysed by small-angle neutron scattering (SANS) at different temperatures (30, 45 and 60°C), both in the presence and absence of ... Lattice dynamics of solid deuterium by inelastic neutron scattering Nielsen, Mourits; Bjerrum Møller, Hans The dispersion relations for phonons in solid ortho-deuterium have been measured at 5 K by inelastic neutron scattering. The results are in good agreement with recent calculations in which quantum effects are taken into account. The data have been fitted to a third-neighbor general force model … Dynamics of Magnetic Nanoparticles Studied by Neutron Scattering We present the first triple-axis neutron scattering measurements of magnetic fluctuations in nanoparticles using an antiferromagnetic reflection. Both the superparamagnetic relaxation and precession modes in ∼15 nm hematite particles are observed. The results have been consistently … analyzed on the basis of a simple model with uniaxial anisotropy and the Néel-Brown theory for the relaxation … Spin dynamics in Tb studied by critical neutron scattering Dietrich, O. W.; Als-Nielsen, Jens Aage The inelasticity of the critical neutron scattering in Tb was measured at and above the Néel temperature. In the hydrodynamic region the line width is Γ(q = 0, κ₁) = Cκ₁^z, with z = 1.4 ± 0.1 and C = 4.3 ± 0.3 meV Å^z. This result deviates from the conventional theory, which predicts … Diffuse neutron scattering from anion-excess strontium chloride Goff, J.P.; Clausen, K.N.; Fåk, B. The defect structure and diffusional processes have been studied in the anion-excess fluorite (Sr,Y)Cl₂.₀₃ by diffuse neutron scattering techniques. Static cuboctahedral clusters found at ambient temperature break up at temperatures below 1050 K, where the anion disorder is highly dynamic. The a... Inelastic neutron scattering from non-framework species within zeolites Newsam, J.M.; Brun, T.O.; Trouw, F.; Iton, L.E.; Curtiss, L.A. Inelastic and quasielastic neutron scattering have special advantages for studying certain of the motional properties of protonated or organic species within zeolites and related microporous materials. In this paper these advantages and various experimental methods are outlined, and illustrated by measurements of torsional vibrations and rotational diffusion of tetramethylammonium (TMA) cations occluded within the zeolites TMA-sodalite, omega, ZK-4 and SAPO-20. Neutron Scattering from fcc Pr and Pr₃Tl Birgeneau, R. J.; Als-Nielsen, Jens Aage; Bucher, E. Elastic-neutron-scattering measurements on the singlet-ground-state ferromagnets fcc Pr and Pr₃Tl are reported. Both exhibit magnetic phase transitions, possibly to a simple ferromagnetic state, at 20 and 11.6 K, respectively. The transitions appear to be of second order, although that in fcc Pr... Small-angle neutron scattering in materials science - an introduction Fratzl, P. The basic principles of the application of small-angle neutron scattering to materials research are summarized. The text focuses on the classical methods of data evaluation for isotropic and for anisotropic materials. Some examples of applications to the study of alloys, porous materials, composites and other complex materials are given.
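The Tb record above quotes an explicit critical-scattering power law for the line width, Γ(q = 0, κ₁) = Cκ₁^z with z = 1.4 and C = 4.3 meV Å^z. Purely as a worked illustration (the κ₁ values below are arbitrary, not measured points), it can be evaluated directly:

```python
def tb_critical_linewidth(kappa1, C=4.3, z=1.4):
    """Critical line width Gamma(q=0, kappa1) = C * kappa1**z in meV,
    with kappa1 in 1/Angstrom and the C, z values quoted in the record above."""
    return C * kappa1 ** z

for kappa1 in (0.05, 0.10, 0.20):   # illustrative inverse correlation lengths [1/Angstrom]
    print(f"kappa1 = {kappa1:.2f} 1/A  ->  Gamma = {tb_critical_linewidth(kappa1):.3f} meV")
```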
(author) 9 figs., 38 refs Small-angle neutron scattering from colloidal dispersions Ottewill, R.H. A survey is given of recent work on the use of small-angle neutron scattering to examine colloidal dispersions. Particular attention is given to the determination of particle size and polydispersity, the determination of particle morphology, and the behaviour of concentrated colloidal dispersions, both at rest and under the influence of an applied shear field. (orig.) Inelastic neutron scattering from synthetic and biological polymers Neutron elastic and inelastic scattering measurements have provided many unique insights into structure, and by reviewing progress on synthetics, important differences likely to arise in biological systems are identified and a direction for studies of the latter is suggested. By neutron inelastic scattering it is possible to measure the frequency of thermally excited interatomic and intermolecular vibrations in crystals. With perfect organic and inorganic crystals the technique is now classical and has given great insight into the crystal forces responsible for the observed structures as well as the phase transitions they undergo. The study of polymer crystals immediately presents two problems of disorder: (1) Macroscopic disorder arises because the sample is a mixture of amorphous and crystalline fractions, and it may be acute enough to inhibit growth of a single crystal large enough for neutron studies. (2) Microscopic disorder in the packing of polymer chains in the "crystalline" regions is indicated by broadening of Bragg peaks. Both types of disorder problem arise in biological systems. The methods by which they were partially overcome to allow neutron measurements with synthetic polymers are described, but first a classical example of the determination of interatomic forces by inelastic neutron scattering is given. Pynn, Roger [Indiana Univ., Bloomington, IN (United States) The goal of this project was to develop improved instrumentation for studying the microscopic structures of materials using neutron scattering. Neutron scattering has a number of advantages for studying material structure but suffers from the well-known disadvantage that neutrons' ability to resolve structural details is usually limited by the strength of available neutron sources. We aimed to overcome this disadvantage using a new experimental technique, called Spin Echo Scattering Angle Encoding (SESAME), that makes use of the neutron's magnetism. Our goal was to show that this innovation will allow the country to make better use of the significant investment it has recently made in a new neutron source at Oak Ridge National Laboratory (ORNL) and will lead to increases in scientific knowledge that contribute to the Nation's technological infrastructure and ability to develop advanced materials and technologies. We were successful in demonstrating the technical effectiveness of the new method and established a baseline of knowledge that has allowed ORNL to start a project to implement the method on one of its neutron beam lines. Neutron-neutron quasifree scattering in nd breakup at 10 MeV Malone, R.C. We are conducting new measurements of the cross section for nn QFS in nd breakup. The measurements are performed at incident neutron beam energies below 20 MeV. The neutron beam is produced via the ²H(d,n)³He reaction. The target is a deuterated plastic cylinder.
Our measurements utilize time-of-flight techniques with a pulsed neutron beam and detection of the two emitted neutrons in coincidence. A description of our initial measurements at 10 MeV for a single scattering angle will be presented along with preliminary results. Also, plans for measurements at other energies with broad angular coverage will be discussed. Neutron Scattering at HIFAR—Glimpses of the Past Margaret Elcombe This article attempts to give a description of neutron scattering down under for close on forty-six years. The early years describe the fledgling group buying parts and cobbling instruments together, up to its emergence as a viable neutron scattering group with up to ten working instruments. The second section covers the consolidation of this group, despite tough higher-level management. The Australian Science and Technology Council (ASTEC) enquiry in 1985 and the Government decision not to replace the High Flux Australian Reactor (HIFAR) led to major expansion and upgrading of the existing neutron beam facilities during the 1990s. Finally, there were some smooth years of operation while other staff were preparing for the replacement reactor. The article concentrates on the instruments as they were built, modified, replaced with new ones, and upgraded at different times. Diffuse scattering of neutrons and X-rays Novion, C.H. de Diffuse scattering is used to study defect concentrations of about 10⁻⁴ in the case of X-rays and 10⁻³ in the case of neutrons. The foundations of the diffuse scattering formalism are given, some experimental devices are described and a few applications are discussed: a study by diffraction on powders of defects in CeO₂₋ₓ; a short-range order study by X-rays on Cu₀.₇₅Au₀.₂₅; a short-range order study by neutrons on Cu₀.₄₃₅Ni₀.₅₆₅; a short-range order study by electrons on TiOₓ; a study of irradiation-induced self-interstitials in Al; and a study of holes created by neutrons in Al. [fr] Long-Lifetime Low-Scatter Neutron Polarization Target Richardson, Jonathan M. Polarized neutron scattering is an important technology for characterizing magnetic and other materials. Polarized helium-3 (P-3He) is a novel technology for creating polarized beams and, perhaps more importantly, for the analysis of polarization in highly divergent scattered beams. Analysis of scattered beams requires specialized targets with complex geometries to ensure accurate results. Special materials and handling procedures are required to give the targets a long useful lifetime. In most cases, the targets must be shielded from stray magnetic fields from nearby equipment. SRL has developed and demonstrated hybrid targets made from glass and aluminum. We have also developed and calibrated a low-field NMR system for measuring polarization lifetimes. We have demonstrated that our low-field system is able to measure NMR signals in the presence of conducting (metallic) cell elements. We have also demonstrated a non-magnetic valve that can be used to seal the cells. We feel that these accomplishments in Phase I are sufficient to ensure a successful Phase II program. The commercial market for this technology is solid. There are over nine neutron scattering centers in the US and Canada and over 22 abroad. Currently, the US plans to build a new $1.4B scattering facility called the Spallation Neutron Source (SNS). The technology developed in this project will allow SRL to supply targets to both existing and future facilities.
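The nd-breakup measurement described at the top of this record relies on time-of-flight over a known flight path to assign neutron energies from a pulsed beam. A minimal, non-relativistic sketch of that conversion is given below; the 2 m flight path and 46 ns flight time are illustrative assumptions, not values from the experiment.

```python
def neutron_energy_from_tof(flight_path_m, tof_s):
    """Non-relativistic kinetic energy E = 0.5 * m_n * (L / t)**2, returned in MeV."""
    m_n = 1.674927498e-27          # neutron mass [kg]
    joule_per_mev = 1.602176634e-13
    speed = flight_path_m / tof_s  # [m/s]
    return 0.5 * m_n * speed**2 / joule_per_mev

# Illustrative numbers: a roughly 10 MeV neutron over a 2 m flight path takes about 46 ns.
print(f"{neutron_energy_from_tof(2.0, 46e-9):.2f} MeV")
```

At 10 MeV the relativistic correction to this estimate is only at the percent level, so the non-relativistic form is adequate for a rough check.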
SRL is also involved with the application of P-3He to medical imaging. Fast neutron scattering on actinide nuclei More and more sophisticated neutron experiments have been carried out with better samples in several laboratories, and it was necessary to intercompare them. In this respect, let us quote for example (n,n'e) and (n,n'β) measurements. Moreover, high-precision (p,p), (p,p') and (p,n) measurements have been made, thus supplementing neutron experiments in the determination of the parameters of the optical model, still widely used to describe the neutron-nucleus interaction. The optical model plays a major role and it is therefore essential to know it well. The spherical optical model is still very useful, especially because of its simplicity and the relatively short calculation times, but it is obviously insufficient to treat deformed nuclei such as actinides. For accurate calculations on these nuclei, it is necessary to use a deformed potential well and solve a set of coupled equations, hence long computational times. The importance of compound nucleus formation at low energy also requires a good knowledge of the statistical model, together with that of all the reaction mechanisms involved, including fission, for which an accurate barrier is necessary and, of course, well-adjusted level densities. These considerations form the background of the Scientific Programme set up by a Programme Committee whose composition is given further on in this book. Magnetic anisotropy and neutron scattering studies of some rare earth metals Day, R. The thesis is concerned with the magnetic anisotropy of dysprosium and of gadolinium:yttrium alloys, and also with neutron scattering studies of dysprosium. The experiments are discussed under the topic headings: magnetic anisotropy, rare earths, torque measurements, elastic neutron scattering, inelastic neutron scattering, dysprosium measurements, and results for the gadolinium:yttrium alloys. (U.K.) Small-angle neutron scattering instrument at MINT Mohd Ali Sufi; Yusof Abdullah; Razali Kassim; Hamid; Shahidan Radiman; Mohammad Deraman; Abdul Ghaffar Ramli The Small Angle Neutron Scattering (SANS) Instrument has been developed at the Malaysian Institute for Nuclear Technology Research (MINT) for studying structural properties of materials on the length scale 1 nm to 100 nm. This is the length scale relevant for many topics within soft condensed matter, such as polymers, colloids and biological macromolecules. SANS is a complementary technique to X-ray and electron scattering. However, while these latter techniques give information on structures near the surface, SANS concerns the structure of the bulk. Samples studied by the SANS technique are typically bulk materials with sizes from mm to cm, or materials dissolved in a liquid. This paper describes the general characteristics of the SANS instrument as well as the experimental formulation in neutron scattering. The preliminary results obtained by this instrument are shown. Fast-neutron scattering cross sections of elemental silver Differential neutron elastic- and inelastic-scattering cross sections of elemental silver are measured from 1.5 to 4.0 MeV at intervals of less than or equal to 200 keV and at 10 to 20 scattering angles distributed between 20 and 160 degrees. Inelastically scattered neutron groups are observed corresponding to the excitation of levels at: 328 ± 13, 419 ± 50, 748 ± 25, 908 ± 26, 1150 ± 38, 1286 ± 25, 1507 ± 20, 1623 ± 30, 1835 ± 20 and 1944 ± 26 keV.
The experimental results are used to derive an optical-statistical model that provides a good description of the observed cross sections. The measured values are compared with corresponding quantities given in ENDF/B-V. Incoherent neutron scattering in acetanilide and three deuterated derivatives Barthes, Mariette; Almairac, Robert; Sauvajol, Jean-Louis; Moret, Jacques; Currat, Roland; Dianoux, José Incoherent-neutron-scattering measurements of the vibrational density of states of acetanilide and three deuterated derivatives are presented. These data allow one to identify an intense maximum, assigned to the N-H out-of-plane bending mode. The data display the specific behavior of the methyl torsional modes (large isotopic shift and strong low-temperature intensity) and confirm our previous inelastic-neutron-scattering studies, indicating no obvious anomalies in the frequency range of the acoustic phonons. In addition, the data show the existence of thermally activated quasielastic scattering above 100 K, assigned to the random diffusive motion of the methyl protons. These results are discussed in the light of recent theoretical models proposed to explain the anomalous optical properties of this crystal. Slow neutron scattering in molecular crystals. 5-4 Inoue, Kazuhiko The utilization of incoherent inelastic neutron scattering (INS) as a probe for molecular crystals is reviewed. In particular, some typical examples of the measurement of incoherent inelastic neutron scattering spectra (INSS) in molecular crystals are presented in the first section of this report. The results of measurements are shown for theta-xylene, benzene, polypropylene oxide, its deuteride, and formic acid. The second section presents an equation for the incoherent scattering cross section of a crystal, obtained by dividing the molecular motion into outer and inner modes. Phonon expansion is also used for ease of understanding of the relation between the INSS and the dynamic characteristics of molecular crystals. In the third section, the measured results are analyzed on the basis of the theory presented in the previous section. The difference between the van der Waals bond and the hydrogen bond is also briefly discussed. (Aoki, K.) Extraction of the neutron-neutron scattering length a_nn from kinematically complete neutron-deuteron breakup experiments Witala, H.; Hueber, D.; Gloeckle, W.; Tornow, W.; Gonzalez Trotter, D.E. Data for the neutron-neutron final-state-interaction cross section obtained recently in a kinematically complete neutron-deuteron breakup experiment have been reanalyzed using rigorous solutions of the three-nucleon Faddeev equations with realistic nucleon-nucleon interactions. A discrepancy was found with respect to a recent analysis based on the W-matrix approximation to the Paris potential. We also estimate the theoretical uncertainties in extracting the neutron-neutron scattering length that result from the use of different nucleon-nucleon interactions and the possible action of the two-pion-exchange three-nucleon force. We find that there exists a certain production angle for the interacting neutron-neutron pair at which the uncertainties become minimal. (author) The analysis and correction of neutron scattering effects in neutron imaging Raine, D.A.; Brenizer, J.S. A method of correcting for the scattering effects present in neutron radiographic and computed tomographic imaging has been developed.
Prior work has shown that beam, object, and imaging system geometry factors, such as the L/D ratio and angular divergence, are the primary sources contributing to the degradation of neutron images. With objects smaller than 20--40 mm in width, a parallel beam approximation can be made where the effects from geometry are negligible. Factors which remain important in the image formation process are the pixel size of the imaging system, neutron scattering, the size of the object, the conversion material, and the beam energy spectrum. The Monte Carlo N-Particle transport code, version 4A (MCNP4A), was used to separate and evaluate the effect that each of these parameters has on neutron image data. The simulations were used to develop a correction algorithm which is easy to implement and requires no a priori knowledge of the object. The correction algorithm is based on the determination of the object scatter function (OSF) using available data outside the object to estimate the shape and magnitude of the OSF based on a Gaussian functional form. For objects smaller than 1 mm (0.04 in.) in width, the correction function can be well approximated by a constant function. Errors in the determination and correction of the MCNP simulated neutron scattering component were under 5% and larger errors were only noted in objects which were at the extreme high end of the range of object sizes simulated. The Monte Carlo data also indicated that scattering does not play a significant role in the blurring of neutron radiographic and tomographic images. The effect of neutron scattering on computed tomography is shown to be minimal at best, with the most serious effect resulting when the basic backprojection method is used Neutron scattering from a substitutional mass defect Williams, R.D.; Lovesey, S.W. The dynamic structure factor is calculated for a low concentration of light mass scatterers substituted in a cubic crystal matrix. A new numerical method for the exact calculation is demonstrated. A local density of states for the low momentum transfer limit, and the shifts and widths of the oscillator peaks in the high momentum transfer limit are derived. The limitations of an approximation which decouples the defect from the lattice is discussed. (author) Studies of molecular dynamics with neutron scattering techniques. Part of a coordinated programme on neutron scattering techniques Vinhas, L.A. Molecular dynamics was studied in samples of tert-butanol, cyclohexanol and methanol, using neutron inelastic and quasi-elastic techniques. The frequency spectra of cyclohexanol in crystalline phase were interpreted by assigning individual energy peaks to hindered rotation of molecules, lattice vibration, hydrogen bond stretching and ring bending modes. Neutron quasi-elastic scattering measurements permitted the testing of models for molecular diffusion as a function of temperature. The interpretation of neutron incoherent inelastic scattering on methanol indicated the different modes of molecular dynamics in this material; individual inelastic peaks in the spectra could be assigned to vibrations of crystalline lattice, stretching of hydrogen bond and vibrational and torsional modes of CH 3 OH molecule. 
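As a rough illustration of the object-scatter-function (OSF) correction described in the neutron imaging abstract above, the minimal sketch below fits a Gaussian scatter background to pixels outside the object in a one-dimensional radiograph profile and subtracts it. Everything in it is an assumption for illustration: the synthetic profile, the function and variable names, and the Gaussian parameterization stand in for, but do not reproduce, the published algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width):
    """Gaussian functional form assumed for the object scatter function (OSF)."""
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic 1-D transmission profile: object occupies pixels 60-140,
# with a broad scatter background superimposed on the whole field of view.
x = np.arange(200)
true_scatter = gaussian(x, 0.15, 100.0, 60.0)
profile = np.where((x > 60) & (x < 140), 0.45, 1.0) + true_scatter
profile += np.random.default_rng(0).normal(0.0, 0.005, x.size)

# Use only pixels outside the object (open beam) to estimate the OSF shape;
# the open-beam level itself is taken as 1.0 here.
outside = (x < 50) | (x > 150)
popt, _ = curve_fit(gaussian, x[outside], profile[outside] - 1.0,
                    p0=(0.1, 100.0, 50.0))

corrected = profile - gaussian(x, *popt)   # subtract the estimated OSF
print("fitted OSF amplitude, centre, width:", np.round(popt, 3))
```

The abstract's remark that a constant correction suffices for very small objects corresponds to the limit in which the fitted width is much larger than the object, so the Gaussian is essentially flat across it.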
The results of the experimental work on tert-butanol indicate two distinct modes of motion in this material: individual molecular librations are superposed on a cooperative rotational diffusion which occurs both in the solid and in the liquid state A thermal neutron scattering law for yttrium hydride Zerkle, Michael; Holmes, Jesse Yttrium hydride (YH2) is of interest as a high temperature moderator material because of its superior ability to retain hydrogen at elevated temperatures. Thermal neutron scattering laws for hydrogen bound in yttrium hydride (H-YH2) and yttrium bound in yttrium hydride (Y-YH2) prepared using the ab initio approach are presented. Density functional theory, incorporating the generalized gradient approximation (GGA) for the exchange-correlation energy, is used to simulate the face-centered cubic structure of YH2 and calculate the interatomic Hellmann-Feynman forces for a 2 × 2 × 2 supercell containing 96 atoms. Lattice dynamics calculations using PHONON are then used to determine the phonon dispersion relations and density of states. The calculated phonon densities of states for H and Y in YH2 are used to prepare H-YH2 and Y-YH2 thermal scattering laws using the LEAPR module of NJOY2012. Analysis of the resulting integral and differential scattering cross sections demonstrates adequate resolution of the S(α,β) function. Comparison of experimental lattice constant, heat capacity, inelastic neutron scattering spectra and total scattering cross section measurements to calculated values is used to validate the thermal scattering laws. Neutron-optical effects at very cold neutrons scattering on the spherical particles of different sizes Grinev, V.G.; Kudinova, O.I.; Novokshonova, L.A.; Kuznetsov, S.P.; Udovenko, A.I.; Shelagin, A.V. Very cold neutrons (VCN) with wavelength λ > 4.0 nm are a convenient tool for investigating supermolecular structures of different nature. Using the Born approximation (BA) in the analysis of the wavelength dependence of the VCN scattering cross sections, it is possible to obtain information about the average sizes (R) and concentrations of the scattering particles with R ∼ λ. However, with increasing size of the scatterers the conditions for BA applicability can be violated. In this work we investigated the possibilities of the BA, eikonal and geometric-optical approximations for the analysis of VCN scattering on spherical particles with R ≥ λ Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments Dawidowski, J; Blostein, J J; Granada, J R Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments are analyzed. The theoretical basis of the method is stated, and a Monte Carlo procedure to perform the calculation is presented. The results are compared with experimental data. The importance of the accuracy in the description of the experimental parameters is tested, and the implications of the present results for the data analysis procedures are examined Neutron scattering investigation of magnetic excitations at high energy transfers Loong, C.K. With the advance of pulsed spallation neutron sources, neutron scattering investigation of elementary excitations in magnetic materials can now be extended to energies up to several hundreds of meV. We have measured, using chopper spectrometers and time-of-flight techniques, the magnetic response functions of a series of d and f transition metals and compounds over a wide range of energy and momentum transfer.
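For the Born-approximation analysis of very-cold-neutron scattering from spherical particles described in the VCN abstract above, a homogeneous sphere gives the familiar Rayleigh-Gans form factor. The sketch below evaluates it over an illustrative wavelength range; the radius, contrast and angle are arbitrary assumptions, and the eikonal and geometric-optics regimes mentioned in the abstract are not treated.

```python
import numpy as np

def sphere_form_factor(q, radius):
    """Rayleigh-Gans (Born) form factor of a homogeneous sphere, P(0) = 1."""
    x = np.where(q * radius == 0, 1e-12, q * radius)
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def born_diff_xsec(q, radius, contrast):
    """dsigma/dOmega in the Born approximation; contrast is the scattering-length-density difference (1/m^2)."""
    volume = 4.0 / 3.0 * np.pi * radius**3
    return (contrast * volume) ** 2 * sphere_form_factor(q, radius)

radius = 20e-9                               # 20 nm particle (illustrative)
contrast = 3e14                              # SLD contrast in 1/m^2 (illustrative)
wavelength = np.linspace(4e-9, 40e-9, 5)     # VCN wavelengths > 4 nm
theta = np.radians(10.0)                     # one scattering angle
q = 4.0 * np.pi / wavelength * np.sin(theta / 2.0)
print(born_diff_xsec(q, radius, contrast))
```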
In PrO2, UO2, BaPrO3 and CeB6 we observed crystal-field transitions between the magnetic ground state and the excited levels in the energy range from 40 to 260 meV. In materials exhibiting spin-fluctuation or mixed-valent character such as Ce0.74Th0.26, on the other hand, no sharp crystal-field lines but a broadened quasielastic magnetic peak was observed. The line width of the quasielastic component is thought to be connected to the spin-fluctuation energy of the 4f electrons. The significance of the neutron scattering results in relation to the ground state level structure of the magnetic ions and the spin-dynamics of the f electrons is discussed. Recently, in a study of the spin-wave excitations in itinerant magnetic systems, we have extended the spin-wave measurements in ferromagnetic iron up to about 160 meV. Neutron scattering data at high energy transfers are of particular interest because they provide direct comparison with recent theories of itinerant magnetism. 26 references, 7 figures US-Japan Cooperative Program on neutron scattering Wilkinson, M.K.; Blume, M.; Stevens, D.K.; Iizumi, M.; Yamada, Y. The US-Japan Cooperative Program on Neutron Scattering was implemented through arrangements by the United States Department of Energy with the Science and Technology Agency (STA) and the Ministry of Science, Education, and Culture (Monbusho) of Japan. It involves research collaboration in neutron scattering by Japanese scientists with scientists at Oak Ridge National Laboratory (ORNL) and Brookhaven National Laboratory (BNL) and the construction of new neutron scattering equipment at both laboratories with funds provided by the Japanese government. The United States provides neutrons in exchange for the new equipment, and other costs of the program are equally shared by the two countries. The assignments of Japanese scientists to ORNL and BNL vary in length, but they correspond to about two person-years annually at each laboratory. An equal number of US scientists also participate in the research program. The main research collaboration is centered around the new equipment provided by the Japanese, but other facilities are utilized when they are needed. The new equipment includes a new type of wide-angle diffractometer and equipment for maintaining extreme sample environments at ORNL and a sophisticated polarized-beam triple-axis spectrometer at BNL. 13 refs., 3 figs Soil-Carbon Measurement System Based on Inelastic Neutron Scattering Orion, I.; Wielopolski, L. Increase in atmospheric CO2 is associated with a concurrent increase in the amount of carbon sequestered in the soil. For better understanding of the carbon cycle it is imperative to establish a better and more extensive database of the carbon concentrations in various soil types, in order to develop improved models for changes in the global climate. Non-invasive soil carbon measurement is based on Inelastic Neutron Scattering (INS). This method has been used successfully to measure total body carbon in human beings. The system consists of a pulsed neutron generator that is based on the D-T reaction, which produces 14 MeV neutrons, a neutron flux monitoring detector and a couple of large NaI(Tl), 6'' diameter by 6'' high, spectrometers [4]. The threshold energy for the INS reaction in carbon is 4.8 MeV. Following INS of 14 MeV neutrons in carbon, 4.44 MeV photons are emitted and counted during a gate pulse period of 10 μs. The repetition rate of the neutron generator is 10⁴ pulses per second.
The gamma spectra are acquired only during the neutron generator gate pulses. The INS method for soil carbon content measurements provides a non-destructive, non-invasive tool, which can be optimized in order to develop a system for in field measurements 2009 International Conference on Neutron Scattering (ICNS 2009) Gopal Rao, PhD; Gillespie, Donna The ICNS provides a focal point for the worldwide neutron user community to strengthen ties within this diverse group, while at the same time promoting neutron research among colleagues in related disciplines identified as would-be neutron users. The International Conference on Neutron Scattering thus serves a dual role as an international user meeting and a scientific meeting. As a venue for scientific exchange, the ICNS showcases recent results and provides forums for scientific discussion of neutron research in diverse fields such as hard and soft condensed matter, liquids, biology, magnetism, engineering materials, chemical spectroscopy, crystal structure, and elementary excitations, fundamental physics and development of neutron instrumentation through a combination of invited talks, contributed talks and poster sessions. Each of the major national neutron facilities (NIST, LANSCE, ANL, HFIR and SNS), along with their international counterparts, has an opportunity to exchange information with each other and to update users, and potential users, of their facility. This is also an appropriate forum for users to raise issues that relate to the facilities. Activity report on neutron scattering research Nagao, M.; Tawata, N.; Fujii, Y. The experiments performed on the thirteen university-owned spectrometers installed at JRR-3M of JAERI in the fiscal year of 1997 were described in this report. The latest ''Neutron News'' (vol. 9, issue 3, 1998) has featured highlights of the activities based on the JRR-3M and its cover displays a graph showing an endless increase of the number of proposals to the users program in the fiscal 1997. The university-owned spectrometers are available for general users all over Japan. The users' requirement for a higher flux beam reactor became larger and larger with time. Thus, JAERI has refurbished JRR-3 to satisfy these demands. In 1997, a joint project between Chiba University and Institute for Solid State Physics (ISSP) started to build a new 4-cycle diffractometer for crystal physics/chemistry at T 2-2 beam port on a thermal guide. (M.N.) Liquid dynamics and inelastic scattering of neutrons De Gennes, P. G. [Commissariat a l' energie atomique et aux energies alternatives - CEA, CEN de Saclay, Gif sur Yvette (France) The energy transfers in one collision between a neutron and a liquid are computed by a method of moments. It is shown that for large momentum transfers a perfect gas model is correct. For small momentum transfers a macroscopic description of the density fluctuations in the liquid is applicable. It is in the intermediate region (where diffraction peaks are observed) that the method of moments is most useful. These different experimental situations are discussed for liquids where recoil and quantum effects are negligible, and numerical results are given for argon. An approximation for the so called autocorrelation function, valid for both long and short time scales and all distances, is also presented. Reprint of a paper published in Physica 25, 1959, p. 825-839. 
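A minimal sketch of the time-gated acquisition described in the soil-carbon INS abstract above: gamma events are histogrammed only if they arrive inside the gate that follows each generator pulse. The pulse period follows from the quoted repetition rate, but the event stream, energies and probabilities below are invented purely for illustration.

```python
import numpy as np

PULSE_PERIOD_US = 100.0   # 10^4 pulses per second -> 100 us between pulses
GATE_WIDTH_US = 10.0      # counting gate after each pulse, as in the abstract

rng = np.random.default_rng(1)
# Invented event stream: (arrival time in us, deposited energy in MeV)
t = rng.uniform(0.0, 1e6, 50_000)
e = rng.choice([4.44, 1.46, 2.22, 6.13], size=t.size, p=[0.2, 0.4, 0.3, 0.1])

in_gate = (t % PULSE_PERIOD_US) < GATE_WIDTH_US      # keep only events inside the gate
energy_bins = np.arange(0.0, 8.0, 0.02)
spectrum, _ = np.histogram(e[in_gate], bins=energy_bins)

carbon_window = (energy_bins[:-1] > 4.3) & (energy_bins[:-1] < 4.6)
print("counts near the 4.44 MeV carbon line:", spectrum[carbon_window].sum())
```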
Applications of neutron scattering to heterogeneous catalysis Parker, Stewart F; Lennon, David Historically, most studies of heterogeneous catalysts that have used neutron vibrational spectroscopy have employed indirect geometry instruments with a low (<40 cm⁻¹) final energy. In this paper we examine the reasons why this has been the case and highlight the advantages and disadvantages of this approach. We then show how some of these may be overcome by the use of direct geometry spectrometers. We illustrate the use of direct geometry spectrometers with examples from reforming of methane to synthesis gas (CO + H2) over Ni/Al2O3 catalysts and an operando study of CO oxidation. We conclude with a proposal for a unique instrument that combines both indirect and direct geometry spectrometers. (paper) Studies of Water by Scattering of Slow Neutrons Skoeld, K.; Pilcher, E.; Larsson, K.E. The quasielastic scattering peak in light water at room temperature has been studied with neutrons of energy ∼5×10⁻³ eV. The width and shape of the peak have been determined by the time-of-flight technique at two scattering angles. If it is assumed that the broad inelastic spectrum is due to scattering by a monoatomic gas of mass 18, it is found that the quasielastic scattering is in good agreement with the predictions of the continuous diffusion model. Inelastic spectra were recorded up to 13×10⁻³ eV. Indications of two discrete energy transfers (8 and 14×10⁻⁴ eV) are observed in the 90 deg run. The results are discussed and compared with earlier observations. Optimization of virtual source parameters in neutron scattering instrumentation Habicht, K; Skoulatos, M We report on phase-space optimizations for neutron scattering instruments employing horizontal focussing crystal optics. Defining a figure of merit for a generic virtual source configuration we identify a set of optimum instrumental parameters. In order to assess the quality of the instrumental configuration we combine an evolutionary optimization algorithm with the analytical Popovici description using multidimensional Gaussian distributions. The optimum phase-space element which needs to be delivered to the virtual source by preceding neutron optics may be obtained using the same algorithm, which is of general interest in instrument design. Reports of the study group for neutron scattering This report covers the activities from July 1980 to December 1981. Within this period, the project for the reactor extension (including a thermal neutron source and a hall for the neutron guide) was worked out in detail. Like the Fritz-Haber Institute, the Institute for Crystallography of Tuebingen University decided to send a number of guest-scientists for studies at the Hahn-Meitner Institute on a permanent basis.
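The continuous diffusion model invoked in the water abstract above predicts a Lorentzian quasielastic line whose half-width grows as ħDQ². The sketch below evaluates that width for a textbook room-temperature diffusion coefficient of water at two illustrative scattering angles; the numbers are for orientation only and are not the paper's results.

```python
import numpy as np

HBAR_J_S = 1.054571817e-34
EV_PER_J = 1.0 / 1.602176634e-19

def quasielastic_hwhm_ev(q_inv_m, diffusion_m2_s):
    """HWHM of the Lorentzian quasielastic line for simple continuous diffusion."""
    return HBAR_J_S * diffusion_m2_s * q_inv_m**2 * EV_PER_J

energy_ev = 5e-3                                               # ~5e-3 eV incident neutrons, as in the abstract
wavelength_m = np.sqrt(81.80 / (energy_ev * 1e3)) * 1e-10      # lambda[Angstrom] = sqrt(81.80 / E[meV])
k = 2.0 * np.pi / wavelength_m
diffusion = 2.3e-9                                             # m^2/s, room-temperature water (textbook value)

for angle_deg in (30.0, 90.0):
    q = 2.0 * k * np.sin(np.radians(angle_deg) / 2.0)
    print(f"theta = {angle_deg:5.1f} deg  Q = {q:.2e} 1/m  "
          f"HWHM = {quasielastic_hwhm_ev(q, diffusion) * 1e3:.3f} meV")
```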
The HMI also organized the 5th International Conference on Small-Angle Scattering, held in Berlin in October 1980. The scientific research work was mainly concerned with magnetic systems, molecular crystals, and the determination of electron densities. (orig.) Progress report on JAERI-ORNL cooperative neutron scattering research Iizumi, Masashi One year's activities done under the JAERI-DOE(ORNL) cooperative neutron scattering program are summarized. This period just followed the completion of the wide-angle neutron diffractometer dedicated to the cooperative research. The report contains results of the performance test of the instrument and early research activities. The latter part includes the time-resolved measurements of the transition kinetics in tin and Ni-Mn alloy as well as the single-crystal diffraction by the flat-cone method. (author) Inelastic neutron scattering of H2 adsorbed in HKUST-1 Liu, Y.; Brown, C.M.; Neumann, D.A.; Peterson, V.K.; Kepert, C.J. A series of inelastic neutron scattering (INS) investigations of hydrogen adsorbed in activated HKUST-1 (Cu3(1,3,5-benzenetricarboxylate)2) result in INS spectra with rich features, even at very low loading ( 2 :Cu). The distinct inelastic features in the spectra show that there are three binding sites that are progressively populated when the H2 loading is less than 2.0 H2:Cu, which is consistent with the result obtained from previous neutron powder diffraction experiments. The temperature dependence of the INS spectra reveals the relative binding enthalpies for H2 at each site DNS: Diffuse scattering neutron time-of-flight spectrometer Yixi Su DNS is a versatile diffuse scattering instrument with polarisation analysis operated by the Jülich Centre for Neutron Science (JCNS), Forschungszentrum Jülich GmbH, outstation at the Heinz Maier-Leibnitz Zentrum (MLZ). Compact design, a large double-focusing PG monochromator and a highly efficient supermirror-based polarizer provide a polarized neutron flux of about 10⁷ n cm⁻² s⁻¹. DNS is used for studies of highly frustrated spin systems, strongly correlated electrons, emergent functional materials and soft condensed matter. A quasi-elastic neutron scattering and neutron spin-echo study of a hydrogen-bonded system Branca, C.; Faraone, A.; Magazu, S.; Maisano, G.; Mangione, A. This work reports neutron spin echo results on aqueous solutions of trehalose, a naturally occurring disaccharide of glucose, showing an extraordinary bioprotective effectiveness against dehydration and freezing. We collected data using the SPAN spectrometer (BENSC, Berlin) on trehalose aqueous solutions at different temperature values. The obtained findings are compared with quasi-elastic neutron scattering results in order to furnish new results on the dynamics of the trehalose/water system on the nanosecond and picosecond scale. On determination of the dynamics of hydrocarbon molecules on catalyst surfaces by means of neutron scattering Stockmeyer, R. The intensity distribution of slow neutrons scattered by adsorbed hydrocarbon molecules contains information on the dynamics of the molecules. In this paper the scattering law for incoherently scattering molecules is derived taking into account the very different mobility perpendicular and parallel to the surface. In contrast to the well known scattering law of three-dimensionally diffusing particles, the scattering law for two-dimensional diffusion diverges logarithmically at zero energy transfer.
Conclusions relevant to the interpretation of neutron scattering data are discussed. (orig.) New neutron-based isotopic analytical methods; An explorative study of resonance capture and incoherent scattering Perego, R.C. Two novel neutron-based analytical techniques have been treated in this thesis, Neutron Resonance Capture Analysis (NRCA), employing a pulsed neutron source, and Neutron Incoherent Scattering (NIS), making use of a cold neutron source. With the NRCA method isotopes are identified by the Neutron spectral modulation as a new thermal neutron scattering technique. Pt. 1 Ito, Y.; Nishi, M.; Motoya, K. A thermal neutron scattering technique is presented based on a new idea of labelling each neutron in its spectral position as well as in time through the scattering process. The method makes possible the simultaneous determination of both the accurate dispersion relation and its broadening by utilizing the resolution cancellation property of zero-crossing points in the cross-correlated time spectrum together with the Fourier transform scheme of the neutron spin echo without resorting to the echoing. The channel Fourier transform applied to the present method also makes possible the determination of the accurate direct energy scan profile of the scattering function with a rather broad incident neutron wavelength distribution. Therefore the intensity sacrifice for attaining high accuracy is minimized. The technique is used with either a polarized or unpolarized beam at the sample position, with no precautions against beam depolarization at the sample for the latter case. Relative time accuracy of the order of 10⁻³ to 10⁻⁴ may be obtained for the general dispersion relation and for the quasi-elastic energy transfers using correspondingly a relative incident neutron wavelength spread of 10 to 1% around an incident neutron energy of a few meV. (orig.) THERMAL: A routine designed to calculate neutron thermal scattering Cullen, D.E. THERMAL is designed to calculate neutron thermal scattering that is isotropic in the center of mass system. At low energy thermal motion will be included. At high energies the target nuclei are assumed to be stationary. The point of transition between low and high energies has been defined to ensure a smooth transition. It is assumed that at low energy the elastic cross section is constant in the center of mass system. At high energy the cross section can be of any form. You can use this routine for all energies where the elastic scattering is isotropic in the center of mass system. In most materials this will be a fairly high energy Solution of neutron slowing down equation including multiple inelastic scattering El-Wakil, S.A.; Saad, A.E. The present work is devoted to the presentation of an analytical method for the calculation of elastically and inelastically slowed down neutrons in an infinite non-absorbing homogeneous medium. On the basis of the Central Limit Theorem (CLT) and the integral transform technique, the slowing down equation including inelastic scattering is solved in terms of the Green function of elastic scattering. The Green function is decomposed according to the number of collisions. A formula for the flux at any lethargy Φ(u) after any number of collisions is derived.
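A minimal sketch of the low-energy free-gas treatment described in the THERMAL abstract above: the target velocity is sampled from a Maxwell-Boltzmann distribution (weighted by relative speed through a simple rejection step), the collision is taken as isotropic in the centre of mass, and the outgoing neutron energy is recorded. The sampling scheme, units and parameters are simplifying assumptions, not the routine's actual algorithm.

```python
import numpy as np

def free_gas_scatter(e_in_ev, kT_ev, mass_ratio, n_samples=20_000, seed=2):
    """Sample outgoing neutron energies for elastic scattering, isotropic in the
    centre of mass, off target nuclei in thermal motion (free-gas model).
    Energies in eV; neutron mass = 1, target mass = mass_ratio (A)."""
    rng = np.random.default_rng(seed)
    v_n = np.array([0.0, 0.0, np.sqrt(2.0 * e_in_ev)])   # neutron along z
    sigma = np.sqrt(kT_ev / mass_ratio)                   # per-component target speed spread
    out = []
    while len(out) < n_samples:
        v_t = rng.normal(0.0, sigma, 3)
        v_rel = np.linalg.norm(v_n - v_t)
        # rejection: collision probability is proportional to the relative speed
        if rng.random() * (np.linalg.norm(v_n) + np.linalg.norm(v_t)) > v_rel:
            continue
        v_cm = (v_n + mass_ratio * v_t) / (1.0 + mass_ratio)
        mu, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
        s = np.sqrt(1.0 - mu * mu)
        direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
        # speed in the centre-of-mass frame is conserved; only the direction changes
        v_out = v_cm + direction * np.linalg.norm(v_n - v_cm)
        out.append(0.5 * np.dot(v_out, v_out))
    return np.array(out)

energies = free_gas_scatter(e_in_ev=0.0253, kT_ev=0.0253, mass_ratio=1.0)
print("mean outgoing energy (eV):", energies.mean())
```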
An equation for the asymptotic flux is also obtained Fast-neutron scattering cross sections of elemental zirconium Differential neutron elastic-scattering cross sections of elemental zirconium are measured from 1.5 to 4.0 MeV at intervals of less than or equal to 200 keV. Inelastic-neutron-scattering cross sections corresponding to the excitation of levels at observed energies of: 914 ± 25, 1476 ± 37, 1787 ± 23, 2101 ± 26, 2221 ± 17, 2363 ± 14, 2791 ± 15 and 3101 ± 25 keV are determined. The experimental results are interpreted in terms of the optical-statistical model and are compared with corresponding quantities given in ENDF/B-V A local dynamic correlation function from inelastic neutron scattering McQueeney, R.J. Information about local and dynamic atomic correlations can be obtained from inelastic neutron scattering measurements by Fourier transform of the Q-dependent intensity oscillations at a particular frequency. A local dynamic structure function, S(r,ω), is defined from the dynamic scattering function, S(Q,ω), such that the elastic and frequency-integrated limits correspond to the average and instantaneous pair-distribution functions, respectively. As an example, S(r,ω) is calculated for polycrystalline aluminum in a model where atomic motions are entirely due to harmonic phonons Small-angle neutron scattering in materials science Small-angle scattering (SAS) is an ideal tool for studying the structure of materials in the mesoscopic size range between 1 and about 100 nanometers. The basic principles of the method are reviewed, with particular emphasis on data evaluation and interpretation for isotropic as well as oriented or single-crystalline materials. Examples include metal alloys, composites and porous materials. The last section gives a comparison between the use of neutrons and (synchrotron) x-rays for small-angle scattering in materials physics. (author) Malone, R. C.; Crowe, B.; Crowell, A. S.; Cumberbatch, L. C.; Esterline, J. H.; Fallin, B. A.; Friesen, F. Q. L.; Han, Z.; Howell, C. R.; Markoff, D.; Ticehurst, D.; Tornow, W.; Witała, H. The neutron-deuteron (nd) breakup reaction provides a rich environment for testing theoretical models of the neutron-neutron (nn) interaction. Current theoretical predictions based on rigorous ab-initio calculations agree well with most experimental data for this system, but there remain a few notable discrepancies. The cross section for nn quasifree (QFS) scattering is one such anomaly. Two recent experiments reported cross sections for this particular nd breakup configuration that exceed theoretical calculations by almost 20% at incident neutron energies of 26 and 25 MeV [1, 2]. The theoretical values can be brought into agreement with these results by increasing the strength of the ¹S₀ nn potential matrix element by roughly 10%. However, this modification of the nn effective range parameter and/or the ¹S₀ scattering length causes substantial charge-symmetry breaking in the nucleon-nucleon force and suggests the possibility of a weakly bound di-neutron state [3]. We are conducting new measurements of the cross section for nn QFS in nd breakup. The measurements are performed at incident neutron beam energies below 20 MeV. The neutron beam is produced via the ²H(d,n)³He reaction. The target is a deuterated plastic cylinder. Our measurements utilize time-of-flight techniques with a pulsed neutron beam and detection of the two emitted neutrons in coincidence.
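One plausible numerical reading of the local dynamic structure function S(r,ω) described in the abstract above is a powder-averaged sine transform of a constant-energy cut of S(Q,ω), analogous to the pair-distribution-function transform of static data. The sketch below applies such a transform to a toy cut; the prefactor convention, the baseline subtraction and the toy model are assumptions, not the published definition.

```python
import numpy as np

def local_dynamic_structure(q, s_q_omega, r):
    """Sine-transform a constant-energy cut S(Q, omega_0) onto a grid of r values.
    Mirrors the usual PDF convention G(r) ~ (2/pi) * int Q [S(Q) - baseline] sin(Qr) dQ;
    the prefactor and baseline are treated as convention choices."""
    integrand = q[None, :] * (s_q_omega[None, :] - s_q_omega.mean()) * np.sin(np.outer(r, q))
    return (2.0 / np.pi) * np.trapz(integrand, q, axis=1)

# Toy constant-energy cut: oscillations set by a single dominant correlation
# distance d (2.86 Angstrom here, purely illustrative).
q = np.linspace(0.5, 12.0, 600)            # 1/Angstrom
d = 2.86
s_cut = 1.0 + 0.3 * np.sin(q * d) / (q * d)

r = np.linspace(0.5, 8.0, 400)             # Angstrom
g_r = local_dynamic_structure(q, s_cut, r)
print("r at maximum correlation:", r[np.argmax(g_r)])
```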
A description of our initial measurements at 10 MeV for a single scattering angle will be presented along with preliminary results. Also, plans for measurements at other energies with broad angular coverage will be discussed. Neutrons for Catalysis: A Workshop on Neutron Scattering Techniques for Studies in Catalysis Overbury, Steven H.; Coates, Leighton; Herwig, Kenneth W.; Kidder, Michelle This report summarizes the Workshop on Neutron Scattering Techniques for Studies in Catalysis, held at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) on September 16 and 17, 2010. The goal of the Workshop was to bring experts in heterogeneous catalysis and biocatalysis together with neutron scattering experimenters to identify ways to attack new problems, especially Grand Challenge problems in catalysis, using neutron scattering. The Workshop locale was motivated by the neutron capabilities at ORNL, including the High Flux Isotope Reactor (HFIR) and the new and developing instrumentation at the SNS. Approximately 90 researchers met for 1 1/2 days with oral presentations and breakout sessions. Oral presentations were divided into five topical sessions aimed at a discussion of Grand Challenge problems in catalysis, dynamics studies, structure characterization, biocatalysis, and computational methods. Eleven internationally known invited experts spoke in these sessions. The Workshop was intended both to educate catalyst experts about the methods and possibilities of neutron methods and to educate the neutron community about the methods and scientific challenges in catalysis. Above all, it was intended to inspire new research ideas among the attendees. All attendees were asked to participate in one or more of three breakout sessions to share ideas and propose new experiments that could be performed using the ORNL neutron facilities. The Workshop was expected to lead to proposals for beam time at either the HFIR or the SNS; therefore, it was expected that each breakout session would identify a few experiments or proof-of-principle experiments and a leader who would pursue a proposal after the Workshop. Also, a refereed review article will be submitted to a prominent journal to present research and ideas illustrating the benefits and possibilities of neutron methods for catalysis research. Uses of neutron scattering in supramolecular chemistry Lindoy, L.F. Full text: A major thrust in recent chemical research has been the development of supramolecular chemistry¹ - broadly the chemistry of large multicomponent molecular assemblies in which the component structural units are held together by either covalent linkages or by a variety of weaker (non-covalent) interactions that include hydrogen bonding, dipole stacking, π-stacking, van der Waals forces and favourable hydrophobic interactions. Much of the activity in the area has been motivated by the known behaviour of biological molecules (such as enzymes). Thus molecular assemblies are ubiquitous in natural systems but, with a limited number of exceptions, have only recently been the subject of increasing investigation by chemists. A feature of much of this recent work has been its focus on molecular design for achieving complementarity between single molecule hosts and guests.
The use of single crystal neutron diffraction coupled with molecular modelling and a range of other techniques to investigate the nature of individual supramolecular systems will be discussed. By way of example, in one such study the supramolecular array formed by co-crystallisation of 1,2-diaminoethane and benzoic acid has been investigated; the system self-assembles into an unusual layered structure composed of two-dimensional hydrogen bonded networks sandwiched between layers of edge-to-face stacked aromatic systems. The number of hydrogen-bond donors and acceptors is balanced in this structure Characterization of the γ background in epithermal neutron scattering measurements at pulsed neutron sources Pietropaolo, A.; Tardocchi, M.; Schooneveld, E.M.; Senesi, R. This paper reports the characterization of the different components of the γ background in epithermal neutron scattering experiments at pulsed neutron sources. The measurements were performed on the VESUVIO spectrometer at the ISIS spallation neutron source. These measurements, carried out with a high purity germanium detector, aim to provide detailed information for the investigation of the effect of the γ energy discrimination on the signal-to-background ratio. It is shown that the γ background is produced by different sources that can be identified with their relative time structure and relative weight Development of Cold Neutron Scattering Kernels for Advanced Moderators Granada, J. R.; Cantargi, F. The development of scattering kernels for a number of molecular systems was performed, including a set of hydrogenous methylated aromatics such as toluene, mesitylene, and mixtures of those. In order to partially validate those new libraries, we compared predicted total cross sections with experimental data obtained in our laboratory. In addition, we have introduced a new model to describe the interaction of slow neutrons with solid methane in phase II (stable phase below T = 20.4 K, atmospheric pressure). Very recently, a new scattering kernel to describe the interaction of slow neutrons with solid deuterium was also developed. The main dynamical characteristics of that system are contained in the formalism; the elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. Optical model calculation of neutron-nucleus scattering cross sections Smith, M.E.; Camarda, H.S. A program to calculate the total, elastic, reaction, and differential cross section of a neutron interacting with a nucleus is described. The interaction between the neutron and the nucleus is represented by a spherically symmetric complex potential that includes spin-orbit coupling. This optical model problem is solved numerically, and is treated with the partial-wave formalism of scattering theory. The necessary scattering theory required to solve this problem is briefly stated. Then, the numerical methods used to integrate the Schroedinger equation, calculate derivatives, etc., are described, and the results of various programming tests performed are presented. Finally, the program is discussed from a user's point of view, and it is pointed out how and where the program (OPTICAL) can be changed to satisfy particular needs.
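A schematic spherical optical-model calculation in the spirit of the OPTICAL program described above, but deliberately stripped down: spin-orbit coupling is omitted, the complex potential is a single Woods-Saxon volume term with made-up parameters, the radial equation is integrated by the Numerov method, and the S-matrix is extracted by matching to free spherical waves at two exterior radii. It sketches the technique only and does not reproduce the program.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

HBAR2_OVER_2M = 20.736   # MeV fm^2 for a neutron

def woods_saxon(r, v0=45.0, w0=3.0, radius=4.6, diffuseness=0.65):
    """Complex volume potential in MeV (illustrative parameters)."""
    return -(v0 + 1j * w0) / (1.0 + np.exp((r - radius) / diffuseness))

def s_matrix(l, energy_mev, r_max=15.0, n=3000):
    k = np.sqrt(energy_mev / HBAR2_OVER_2M)
    r = np.linspace(1e-6, r_max, n)
    h = r[1] - r[0]
    g = (energy_mev - woods_saxon(r)) / HBAR2_OVER_2M - l * (l + 1) / r**2
    u = np.zeros(n, dtype=complex)
    u[0], u[1] = 0.0, h ** (l + 1)                 # regular solution near the origin
    f = 1.0 + h * h * g / 12.0
    for i in range(1, n - 1):                      # Numerov integration of u'' + g u = 0
        u[i + 1] = ((12.0 - 10.0 * f[i]) * u[i] - f[i - 1] * u[i - 1]) / f[i + 1]
    def hankel(sign, x):                           # outgoing/incoming free solutions
        F = x * spherical_jn(l, x)
        G = -x * spherical_yn(l, x)
        return G + sign * 1j * F
    # match u ~ (i/2) [H^-(kr) - S H^+(kr)] at two exterior radii
    i1, i2 = n - 200, n - 1
    rho = u[i2] / u[i1]
    x1, x2 = k * r[i1], k * r[i2]
    return (hankel(-1, x2) - rho * hankel(-1, x1)) / (hankel(+1, x2) - rho * hankel(+1, x1))

energy = 2.5   # MeV
k = np.sqrt(energy / HBAR2_OVER_2M)
sig_el = sig_tot = 0.0
for l in range(10):
    S = s_matrix(l, energy)
    sig_el += np.pi / k**2 * (2 * l + 1) * abs(1.0 - S) ** 2
    sig_tot += 2.0 * np.pi / k**2 * (2 * l + 1) * (1.0 - S.real)
print(f"elastic ~ {sig_el / 100:.2f} b, total ~ {sig_tot / 100:.2f} b")   # 1 b = 100 fm^2
```

The reaction cross section follows from the same partial-wave sum as (π/k²) Σ (2l+1)(1 − |S_l|²).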
Neutron scattering investigations of the lipid bilayer structure pressure dependence D. V. Soloviov Lipid bilayer structure investigation results obtained with the small angle neutron scattering method at the Joint Institute for Nuclear Research IBR-2M nuclear reactor (Dubna, Russia) are presented. The experiment has been performed with the small angle neutron scattering spectrometer YuMO, upgraded with the apparatus for performing P-V-T measurements on the substance under investigation. The D2O-1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) liquid system, presenting a model of a natural live membrane, has been taken as the sample for investigations. The lipid bilayer spatial period was measured in the experiment along with the isothermal compressibility simultaneously at different pressures. It has been shown that the bilayer structural transition from the ripple (wavelike gel) phase to the liquid-crystal phase is accompanied by an anomalous rise of the isothermal compressibility, indicating occurrence of the phase transition. A Monte Carlo evaluation of analytical multiple scattering corrections for unpolarised neutron scattering and polarisation analysis data Mayers, J.; Cywinski, R. Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations as multiply scattered neutrons may be redistributed not only geometrically but also between the spin flip and non spin flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author) Computer Program for Inelastic Neutron Scattering by an Anharmonic Crystal Bohlin, L.; Ebbsjoe, I.; Hoegberg, T. A description is given of the program SAW (Shift and Width), which calculates the energy-dependent shift and width of the intensity peaks obtained for thermal neutrons scattered inelastically by an anharmonic crystal.
The program has been coded in FORTRAN IV and may be applied to every solid with a monatomic face-centered cubic lattice where the intermolecular interactions can be described by a centro-symmetrical potential. Interactions beyond third neighbours are neglected Simulation of a complete inelastic neutron scattering experiment Edwards, H.; Lefmann, K.; Lake, B. A simulation of an inelastic neutron scattering experiment on the high-temperature superconductor La2-xSrxCuO4 is presented. The complete experiment, including sample, is simulated using an interface between the experiment control program and the simulation software package (McStas) and is compared with the experimental data. Simulating the entire experiment is an attractive alternative to the usual method of convoluting the model cross section with the resolution function, especially if the resolution function is nontrivial. Complex eigenvalues for neutron transport equation with quadratically anisotropic scattering Sjoestrand, N.G. Complex eigenvalues for the monoenergetic neutron transport equation in the buckling approximation have been calculated for various combinations of linearly and quadratically anisotropic scattering. The results are discussed in terms of the time-dependent case. Tables are given of complex bucklings for real decay constants and of complex decay constants for real bucklings. The results fit nicely into the pattern of real and purely imaginary eigenvalues obtained earlier. (author) Timelike Compton scattering off the neutron and generalized parton distributions Boer, M.; Guidal, M. [CNRS-IN2P3, Universite Paris-Sud, Institut de Physique Nucleaire d'Orsay, Orsay (France); Vanderhaeghen, M. [Johannes Gutenberg Universitaet, Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany) We study the exclusive photoproduction of an electron-positron pair on a neutron target in the Jefferson Lab energy domain. The reaction consists of two processes: the Bethe-Heitler and the Timelike Compton Scattering. The latter process potentially provides access to the Generalized Parton Distributions (GPDs) of the nucleon. We calculate all the unpolarized, single- and double-spin observables of the reaction and study their sensitivities to GPDs. (orig.) Extraction of neutron-neutron scattering length from nn coincidence-geometry nd breakup data E. S. Konobeevski We report preliminary results of a kinematically complete experiment on measurement of the nd breakup reaction yield at the neutron beam RADEX of the Institute for Nuclear Research (Moscow, Russia). In the experiment two secondary neutrons are detected in the geometry of the neutron-neutron final-state interaction. Data are obtained at incident neutron energies En = 40–60 MeV for various divergence angles of the two neutrons ΔΘ = 4, 6, 8°.
The ¹S₀ neutron-neutron scattering length ann was determined by comparison of the experimental dependence of the reaction yield on the relative energy of the two secondary neutrons with results of a simulation depending on ann. For En = 40 MeV and ΔΘ = 6° (the highest statistics in the experiment) the value ann = -17.9 ± 1.0 fm is obtained. Further improvement of the experimental accuracy and a more rigorous theoretical analysis will allow one to remove the existing difference in ann values obtained in different experiments. The small-angle neutron scattering facility at Pelindaba Hofmeyr, C.; Mayer, R.M.; Tillwick, D.L.; Starkey, J.R. The small-angle neutron scattering facility at the SAFARI-1 reactor is described in detail, and with reference to theoretical and practical design considerations. Inexpensive copper microwave guides used as a guide-pipe for slow neutrons provided the basis for a useful though comparatively simple facility. The neutron-spectrum characteristics of the final facility in different configurations of the guide-pipe (both S and single-curved) agree well with expected values based on results obtained with a test facility. The design, construction, installation and alignment of various components of the facility are outlined, as well as intensity optimisation. A general description is given of experimental procedures and data-acquisition electronics for the four-position sample holder and counter array of up to 18 ³He detectors and a beam monitor The MCLIB library: Monte Carlo simulation of neutron scattering instruments Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example. Thermal diffuse scattering in angular-dispersive neutron diffraction Popa, N.C.; Willis, B.T.M. The theoretical treatment of one-phonon thermal diffuse scattering (TDS) in single-crystal neutron diffraction at fixed incident wavelength is reanalysed in the light of the analysis given by Popa and Willis [Acta Cryst. (1994), (1997)] for the time-of-flight method. Isotropic propagation of sound with different velocities for the longitudinal and transverse modes is assumed. As in time-of-flight diffraction, there exists, for certain scanning variables, a forbidden range in the one-phonon TDS of slower-than-sound neutrons, and this permits the determination of the sound velocity in the crystal. A fast algorithm is given for the TDS correction of neutron diffraction data collected at a fixed wavelength: this algorithm is similar to that reported earlier for the time-of-flight case. (orig.)
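The MCLIB abstract above spells out the basic Monte Carlo loop for a scattering instrument: sample a neutron from the source, transport it through the beamline, and tally it at the detector. The toy ray-trace below follows that loop for a pinhole geometry with a crudely re-scattering sample; every dimension, probability and name in it is an invented illustration, not an MCLIB routine.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 200_000
SRC_RADIUS, APERTURE_RADIUS = 0.02, 0.005        # m
L_SOURCE_SAMPLE, L_SAMPLE_DET = 10.0, 4.0        # m
SCATTER_PROB = 0.1
DET_HALF_WIDTH, DET_BINS = 0.3, 101

# 1. sample source position and a small random divergence
r = SRC_RADIUS * np.sqrt(rng.random(N))
phi = rng.uniform(0.0, 2.0 * np.pi, N)
x, y = r * np.cos(phi), r * np.sin(phi)
dx, dy = rng.normal(0.0, 2e-3, (2, N))           # rad

# 2. propagate to the sample plane and apply the aperture
xs, ys = x + dx * L_SOURCE_SAMPLE, y + dy * L_SOURCE_SAMPLE
alive = xs**2 + ys**2 < APERTURE_RADIUS**2

# 3. a fraction of the surviving neutrons scatters into a new direction
scattered = alive & (rng.random(N) < SCATTER_PROB)
theta = rng.normal(0.0, 0.05, N)                 # crude "scattering angle" spread, rad
psi = rng.uniform(0.0, 2.0 * np.pi, N)
dx = np.where(scattered, dx + theta * np.cos(psi), dx)
dy = np.where(scattered, dy + theta * np.sin(psi), dy)

# 4. propagate to the detector and histogram where neutrons land
xd = xs + dx * L_SAMPLE_DET
hist, edges = np.histogram(xd[alive], bins=DET_BINS,
                           range=(-DET_HALF_WIDTH, DET_HALF_WIDTH))
print("transmitted fraction:", alive.mean(), "peak bin counts:", hist.max())
```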
Scientific opportunities with advanced facilities for neutron scattering Lander, G.H.; Emery, V.J. The present report documents deliberations of a large group of experts in neutron scattering and fundamental physics on the need for new neutron sources of greater intensity and more sophisticated instrumentation than those currently available. An additional aspect of the Workshop was a comparison between steady-state (reactor) and pulsed (spallation) sources. The main conclusions were: (1) the case for a new higher flux neutron source is extremely strong and such a facility will lead to qualitatively new advances in condensed matter science and fundamental physics; (2) to a large extent the future needs of the scientific community could be met with either a 5 × 10¹⁵ n cm⁻² s⁻¹ steady state source or a 10¹⁷ n cm⁻² s⁻¹ peak flux spallation source; and (3) the findings of this Workshop are consistent with the recommendations of the Major Materials Facilities Committee Neutron diffuse scattering in magnetite due to molecular polarons Yamada, Y.; Wakabayashi, N.; Nicklow, R.M. A detailed neutron diffuse scattering study has been carried out in order to verify a model which describes the property of valence fluctuations in magnetite above T_V. This model assumes the existence of a complex which is composed of two excess electrons and a local displacement mode of oxygens within the fcc primitive cell. The complex is called a molecular polaron. It is assumed that at sufficiently high temperatures there is a random distribution of molecular polarons, which are fluctuating independently by making hopping motions through the crystal or by dissociating into smaller polarons. The lifetime of each molecular polaron is assumed to be long enough to induce an instantaneous strain field around it. Based on this model, the neutron diffuse scattering cross section due to randomly distributed dressed molecular polarons has been calculated. A precise measurement of the quasielastic scattering of neutrons has been carried out at 150 K. The observed results definitely show the characteristics which are predicted by the model calculation and, thus, give evidence for the existence of the proposed molecular polarons. From this standpoint, the Verwey transition of magnetite may be viewed as the cooperative ordering process of dressed molecular polarons. Possible extensions of the model to describe the ordering and the dynamical behavior of the molecular polarons are discussed Some neutron scattering studies on magnetic and molecular phase transitions Bevaart, L.
In this thesis neutron-scattering investigations on two different systems are described. The first study is concerned with the magnetic ordering phenomena in pseudo two-dimensional (d = 2), two-component antiferromagnets K 2 Mnsub(1-x)Msub(x)F 4 (M = Fe, Co), as a function of the composition x and temperature T. For one of the samples in this series, K 2 Musub(0.978)Fesub(0.022)F 4 , the influence of an external magnetic field on the ordering characteristics was studied in addition. The second study deals with the rotational motions of the NH 4 + groups in NH 4 ZnF 3 in relation with the structural phase transition at Tsub(c) = 115.1 K. The experimental techniques were chosen according to the requirements of each of these two subjects. The former study was carried out by observing the elastic magnetic neutron scattering with a double-axis diffractometer, whereas for the latter study time-of-flight (TOF) techniques were applied to observe the inelastic and quasi-elastic incoherent neutron scattering by the protons of the rotating NH 4 + groups. (Auth.) Modern quantum magnetism by means of neutron scattering Grenier, B.; Ziman, T. We review a selection of recent applications of neutron scattering to the field of quantum magnetism. We focus on systems where, because of quantum fluctuations enhanced by frustration and low dimension, there is no long range magnetic order in the ground state. We select two examples that we treat in more depth to show how neutron studies, in conjunction with the results of other experimental techniques, can give new insights. The first is the case of the spin ladder NaV 2 O 5 , where the origin of the spin gap at low temperature is now understood in detail. Apparent contradictions between quantitative measures of the charge order from neutron inelastic scattering, resonant X-ray scattering and NMR have been resolved giving interesting insights into the correlations. The second case is that of spin dimer system Cs 3 Cr 2 X 9 (X = Br, Cl), undergoing transitions to field induced transverse magnetic order. The Br compound is attractive as the critical fields are sufficiently low that a complete study, in different field directions, is possible. In addition, it is noteworthy in that the magnon that softens and condenses is incommensurable with the lattice. The common description in terms of Bose-Einstein condensation must be extended to include a continuous degeneracy and single ion anisotropy, and conclusions can be drawn by comparison with the Cl compound. (authors) VI European Conference on Neutron Scattering (ECNS2015) It was a great pleasure for the Materials Science Institute of Aragón (CSIC- University of Zaragoza) to host the VI European Conference on Neutron Scattering (ECNS) from the 30th of August until the 4th of September 2015. The meeting was held in Zaragoza, Spain, a prosperous and well communicated city founded by the Emperor Octavio Augustus over 2000 years ago. Zaragoza, a city where different cultures, Muslims, Jewish, and Christians have left their mark is famous for its landmarks such as the Basilica del Pilar, La Seo Cathedral and the Aljaferfa Palace as well as the local cuisine. The conference is organized every four years as a forum for the European neutron scattering community to discuss recent developments and advances in all branches of science in which neutron scattering is, or eventually could be used. 
In 2015 the conference gathered more than 650 participants from 31 different countries from around the globe including Japan, United States of America, Taiwan, Republic of Korea, India, Argentina, China and Australia. This volume assembles the proceedings of the ECNS 2015. The main topics in the conference were: Neutron Sources and Facilities, Neutron Instrumentation (Optics, Sample Environment, Detectors and Software), Fundamental Science, Chemistry of Materials, Magnetism, Superconductivity, Functional Materials, Glasses and Liquids, Thin Films and Interfaces, Soft Condensed Matter, Health and Life Sciences, Engineering applications, Cultural Heritage and Archaeometry. Jean-Marie Tarascon, J. Manuel Perez-Mato, Roberto Caciuffo, Paul Schofield, Peter Fierlinger, Helmut Schober and Frank Gabel presented plenary talks. In addition, 33 keynote talks and 222 oral presentations were given during the four parallel sessions as well as 325 poster presentations. The poster sessions, which were held during the lunch and coffee breaks, were well attended and participants had time to visit not only the posters but also the exhibition. The VI European Conference on Integral Parameters of the Thermal Neutron Scattering Law Purohit, S.N. Integral parameters of the thermal neutron scattering law - the thermalization binding parameter (M2), the Placzek moments of the generalized frequency spectrum of dynamical modes and the energy transfer moments of the scattering law - are theoretically discussed. A detailed study of the variation of M2, the thermalization time constant and the effective temperature of the vibrating atoms, with the relative weight between intra-molecular vibrations and hindered rotations for H2O, is presented. Theoretical results for different scattering models of H2O are compared with the measurements of integral experiments. A set of integral parameters for D2O, using Butler's model, has been obtained. The importance of the structure of the hindered rotations of H2O and D2O in the study of integral parameters has also been discussed. Effect of neutron anisotropic scattering in fast reactor analysis Chiba, Gou Numerical tests were performed about the effect of neutron anisotropic scattering on criticality in the Sn transport calculation. The simplest approximation, the consistent P approximation and the extended transport approximation were compared with each other in one-dimensional slab fast reactor models.
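The integral parameters discussed in the Purohit abstract above are built from energy-transfer moments of the scattering law. As a generic numerical illustration, the sketch below evaluates low-order β-moments of the free-gas S(α,β), for which a closed form exists; the sign convention for β and the choice of this toy model are assumptions made only to have something concrete to integrate.

```python
import numpy as np

def free_gas_scattering_law(alpha, beta):
    """Ideal (free) gas scattering law, a standard closed-form benchmark."""
    return np.exp(-(alpha + beta) ** 2 / (4.0 * alpha)) / np.sqrt(4.0 * np.pi * alpha)

def beta_moment(alpha, order, beta_max=60.0, n=20001):
    """n-th energy-transfer moment  M_n(alpha) = int beta^n S(alpha, beta) dbeta."""
    beta = np.linspace(-beta_max, beta_max, n)
    return np.trapz(beta**order * free_gas_scattering_law(alpha, beta), beta)

for alpha in (0.5, 1.0, 2.0):
    m0, m1, m2 = (beta_moment(alpha, k) for k in range(3))
    print(f"alpha = {alpha:3.1f}:  M0 = {m0:.4f}  M1 = {m1:.4f}  M2 = {m2:.4f}")
```

For this toy model the zeroth moment integrates to 1 and the first to -α, which provides a quick check on the quadrature.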
JAERI fast set which has been used for fast reactor analyses is inadequate to evaluate the effect because it doesn't include the scattering matrices and the self-shielding factors to calculate the group-averaged cross sections weighted by the higher-order moment of angular flux. In the present study, the sub-group method was used to evaluate the group-averaged cross sections. Results showed that the simplest approximation is inadequate and the transport approximation is effective for evaluating the anisotropic scattering. (author) Light and neutron scattering study of strongly interacting ionic micelles Degiorgio, V.; Corti, M.; Piazza, R. Dilute solutions of ionic micelles formed by biological glycolipids (gangliosides) have been investigated at various ionic strengths by static and dynamic light scaterring and by small-angle neutron scattering. The size and shape of the micelle is not appreciably affected by the added salt concentration in the range 0-100 mM NaCL. From the measured intensity of scattered light we derive the electric charge Z of the micelle by fitting the data to a theoretical calculation which uses a screened Coulomb potential for the intermicellar interaction, and the hypernetted chain approximation for the calculation of the radial distribution function. The correlation function derived from dynamic light scattering shows the long time contribution typical of concentrated polydisperse systems (author). 15 refs.; 6 figs Anomalous scattering of neutrons in spin-polarized media A new exchange mechanism of inelastic scattering with spin flip for slow neutrons propagating through a spin-polarized medium is studied. The scattering is accompanied by emission or absorption of thermal fluctuations of the transverse magnetization of the medium; the weakly damped Larmor precession of nuclear spins in the external magnetic field plays the main role in these fluctuations. Under the conditions of giant opalescence the effect is enormous and the corresponding cross sections are significantly greater than the standard elastic scattering cross sections. Thus in the case of 29 Si↑ and 3 He↑ under typical experimental conditions the cross sections of these inelastic processes are of the order of 10 5 -10 6 b TOF-SEMSANS—Time-of-flight spin-echo modulated small-angle neutron scattering Strobl, M.; Tremsin, A.S.; Hilger, A.; Wieder, F.; Kardjilov, N.; Manke, I.; Bouwman, W.G.; Plomp, J. We report on measurements of spatial beam modulation of a polarized neutron beam induced by triangular precession regions in time-of-flight mode and the application of this novel technique spin-echo modulated small-angle neutron scattering (SEMSANS) to small-angle neutron scattering in the very Progress report: determinations of the neutron-neutron scattering length ann from kinematically incomplete neutron-deuteron breakup data revisited Tornow, W.; Braun, R.T.; Witala, H. We review published analyses of the final-state-interaction enhancement observed in proton energy distributions obtained from kinematically incomplete neutron-deuteron breakup experiments. We compare the results derived from these analyses for the neutron-neutron scattering length, a nn with our results based on a rigorous treatment of the three-nucleon Faddeev equations in conjunction with the use of realistic nucleon-nucleon potentials. Our values for a nn deviate outside the quoted uncertainties from the ones obtained in the previous analyses where simplified nucleon-nucleon interaction models were employed. 
In contrast to the previous determinations, the present results for a nn are in clear disagreement with the values for a nn based on π - -deuteron capture experiments. Unless inconsistencies in the experimental neutron-deuteron breakup data at low energies can be resolved and the influence of possible three-nucleon-force effects can be reliably determined, we recommend that one not resort to the kinematically incomplete neutron-deuteron breakup reaction as a tool for determining a quantity as important for nuclear and particle physics as is the neutron-neutron scattering length a nn . (author) Evaluation of new pharmaceuticals using in vivo neutron inelastic scattering and neutron activation analysis Kehayias, J.J. Nutritional status of patients can be evaluated by monitoring changes in body composition, including depletion of protein and muscle, adipose tissue distribution and changes in hydration status, bone or cell mass. Fast neutron activation (for N and P) and neutron inelastic scattering (for C and O) are used to assess in vivo elements characteristic of specific body compartments. The fast neutrons are produced with a sealed deuterium-tritium (D-T) neutron generator. This method provides the most direct assessment of body composition. Non-bone phosphorus for muscle is measured by the 31 P(n,α) 28 Al reaction, and nitrogen for protein via the (n,2n) fast neutron reaction. Inelastic neutron scattering is used for the measurement of total body carbon and oxygen. Carbon is used to derive body fat, after subtracting carbon contributions due to protein, bone and glycogen. Carbon-to-oxygen (C/O) ratio is used to measure distribution of fat and lean tissue in the body and to monitor small changes of lean mass and its quality. In addition to evaluating the efficacy of new treatments, the system is used to study the mechanisms of lean tissue depletion with aging and to investigate methods for preserving function and quality of life in the elderly. (author) Fast-neutron elastic scattering from elemental vanadium Smith, A.B.; Guenther, P.T.; Lawson, R.D. Differential neutron elastic- and inelastic-scattering cross sections of vanadium were measured from 4.5 to 10 MeV. These results were combined with previous 1.5 to 4.0 MeV data from this laboratory, the 11.1 MeV elastic-scattering results obtained at Ohio University, and the reported neutron total cross sections to energies of ∼20.0 MeV, to form a data base which was interpreted in terms of the spherical optical-statistical model. A fit to the data was achieved by making both the strengths and geometries of the optical-model potential energy dependent. This energy dependence was large below ∼6.0 MeV. Above ∼6.0 MeV the energy dependencies are smaller, and similar to those characteristic of global models. Using the dispersion relationship and the method of moments, the optical-model potential energy deduced from 0.0 to 11.1 MeV neutron-scattering data was extrapolated to higher energies and to the bound-state regime. This extrapolation leads to predicted neutron total cross sections that are within 3% of the experimental values throughout the energy range 0.0 to 20.0 MeV. Furthermore, the values of the volume-integral-per-nucleon of the real potential are in excellent agreement with those needed to reproduce the observed binding energies of particle- and hole-states. The latter gives clear evidence of the Fermi surface anomaly. 
Using only the 0.0 to 11.1 MeV data, the predicted E < O behavior of the strength and radius of the real shell-model Woods-Saxon potential are somewhat different from those obtained by Mahaux and Sartor in their analysis of nuclei near closed shells. 61 refs., 9 figs., 2 tabs Electron scattering from high-momentum neutrons in deuterium Klimenko, A.V.; Kuhn, S.E.; Bueltmann, S.; Careccia, S.L.; Dharmawardane, K.V.; Dodge, G.E.; Guler, N.; Hyde-Wright, C.E.; Klein, A.; Tkachenko, S.; Weinstein, L.B.; Zhang, J.; Butuceanu, C.; Griffioen, K.A.; Baillie, N.; Fersch, R.G.; Funsten, H.; Egiyan, K.S.; Asryan, G.; Dashyan, N.B. We report results from an experiment measuring the semiinclusive reaction 2 H(e,e ' p s ) in which the proton p s is moving at a large angle relative to the momentum transfer. If we assume that the proton was a spectator to the reaction taking place on the neutron in deuterium, the initial state of that neutron can be inferred. This method, known as spectator tagging, can be used to study electron scattering from high-momentum (off-shell) neutrons in deuterium. The data were taken with a 5.765 GeV electron beam on a deuterium target in Jefferson Laboratory's Hall B, using the CEBAF large acceptance spectrometer. A reduced cross section was extracted for different values of final state missing mass W*, backward proton momentum p → s , and momentum transfer Q 2 . The data are compared to a simple plane wave impulse approximation (PWIA) spectator model. A strong enhancement in the data observed at transverse kinematics is not reproduced by the PWIA model. This enhancement can likely be associated with the contribution of final state interactions (FSI) that were not incorporated into the model. Within the framework of the simple spectator model, a 'bound neutron structure function' F 2n eff was extracted as a function of W* and the scaling variable x* at extreme backward kinematics, where the effects of FSI appear to be smaller. For p s >0.4 GeV/c, where the neutron is far off-shell, the model overestimates the value of F 2n eff in the region of x* between 0.25 and 0.6. A dependence of the bound neutron structure function on the neutron's 'off-shell-ness' is one possible effect that can cause the observed deviation A neutron scattering study of triblock copolymer micelles Gerstenberg, M.C. The thesis describes the neutron scattering experiments performed on poly(ethylene oxide)/poly(propylene oxide)/poly(ethylene oxide) triblock copolymer micelles in aqueous solution. The studies concern the non-ionic triblock copolymer P85 which consists of two outer segments of 25 monomers of ethylene oxide attached to a central part of 40 monomers of propylene oxide. The amphiphilic character of P85 leads to formation of various structures in aqueous solution such as spherical micelles, rod-like structures, and a BCC liquid-crystal mesophase of spherical micelles. The present investigations are centered around the micellar structures. In the first part of this thesis a model for the micelle is developed for which an analytical scattering form factor can be calculated. The micelle is modeled as a solid sphere with tethered Gaussian chains. Good agreement was found between small-angle neutron scattering experiments and the form factor of the spherical P85 micelles. Above 60 deg. C some discrepancies were found between the model and the data which is possibly due to an elongation of the micelles. 
The second part focuses on the surface-induced ordering of the various micellar aggregates in the P85 concentration-temperature phase diagram. In the spherical micellar phase, neutron reflection measurements indicated a micellar ordering at the hydrophilic surface of quartz. Extensive modeling was performed based on a hard sphere description of the micellar interaction. By convolution of the distribution of hard spheres at a hard wall, obtained from Monte Carlo simulations, and the projected scattering length density of the micelle, a numerical expression was obtained which made it possible to fit the data. The hard-sphere-hard-wall model gave an excellent agreement in the bulk micellar phase. However, for higher concentrations (25 wt % P85) close to the transition from the micellar liquid into a micellar cubic phase, a discrepancy was found between the model and the A New Measurement of the 1S0 Neutron-Neutron Scattering Length using the Neutron-Proton Scattering Length as a Standard Trotter, D. E. Gonzalez; Salinas, F.; Chen, Q.; Crowell, A. S.; Gloeckle, W.; Howell, C. R.; Roper, C. D.; Schmidt, D.; Slaus, I.; Tang, H.; Tornow, W.; Walter, R. L.; Witala, H.; Zhou, Z. The present paper reports high-accuracy cross-section data for the 2H(n,nnp) reaction in the neutron-proton (np) and neutron-neutron (nn) final-state-interaction (FSI) regions at an incident mean neutron energy of 13.0 MeV. These data were analyzed with rigorous three-nucleon calculations to determine the 1S0 np and nn scattering lengths, a_np and a_nn. Our results are a_nn = -18.7 +/- 0.6 fm and a_np = -23.5 +/- 0.8 fm. Since our value for a_np obtained from neutron-deuteron (nd) breakup agr... Random pulsing of neutron source for inelastic neutron scattering gamma ray spectroscopy Hertzog, R.C. Method and apparatus are described for use in the detection of inelastic neutron scattering gamma ray spectroscopy. Data acquisition efficiency is enhanced by operating a neutron generator such that a resulting output burst of fast neutrons is maintained for as long as practicably possible until a gamma ray is detected. Upon the detection of a gamma ray the generator burst output is terminated. Pulsing of the generator may be accomplished either by controlling the burst period relative to the burst interval to achieve a constant duty cycle for the operation of the generator or by maintaining the burst period constant and controlling the burst interval such that the resulting mean burst interval corresponds to a burst time interval which reduces contributions to the detected radiation of radiation occasioned by other than the fast neutrons Neutron-Proton Scattering Experiments at ANKE-COSY Kacharava, A.; Chiladze, D.; Chiladze, B.; Keshelashvili, I.; Lomidze, N.; Macharashvili, G.; McHedlishvili, D.; Nioradze, M.; Rathmann, F.; Ströher, H.; Wilkin, C. The nucleon-nucleon interaction (NN) is fundamental for the whole of nuclear physics and hence to the composition of matter as we know it. It has been demonstrated that stored, polarised beams and polarised internal targets are experimental tools of choice to probe spin effects in NN-scattering experiments. While the EDDA experiment has dramatically improved the proton-proton date base, information on spin observables in neutron-proton scattering is very incomplete above 800 MeV, resulting in large uncertainties in isoscalar n p phase shifts. 
Experiments at COSY, using a polarised deuteron beam or target, can lead to significant improvements in the situation through the study of quasi-free reactions on the neutron in the deuteron. Such a measurements has already been started at ANKE by using polarised deuterons on an unpolarised target to study the dp → ppn deuteron charge-exchange reaction and the full program with a polarised storage cell target just has been conducted. At low excitation energies of the final pp system, the spin observables are directly related to the spin- dependent parts of the neutron-proton charge-exchange amplitudes. Our measurement of the deuteron-proton spin correlations will allow us to determine the relative phases of these amplitudes in addition to their overall magnitudes. Measurement of the scattering cross section of slow neutrons on liquid parahydrogen from neutron transmission Grammer, K. B.; Alarcon, R.; Barrón-Palos, L.; Blyth, D.; Bowman, J. D.; Calarco, J.; Crawford, C.; Craycraft, K.; Evans, D.; Fomin, N.; Fry, J.; Gericke, M.; Gillis, R. C.; Greene, G. L.; Hamblen, J.; Hayes, C.; Kucuker, S.; Mahurin, R.; Maldonado-Velázquez, M.; Martin, E.; McCrea, M.; Mueller, P. E.; Musgrave, M.; Nann, H.; Penttilä, S. I.; Snow, W. M.; Tang, Z.; Wilburn, W. S. Liquid hydrogen is a dense Bose fluid whose equilibrium properties are both calculable from first principles using various theoretical approaches and of interest for the understanding of a wide range of questions in many-body physics. Unfortunately, the pair correlation function g (r ) inferred from neutron scattering measurements of the differential cross section d/σ d Ω from different measurements reported in the literature are inconsistent. We have measured the energy dependence of the total cross section and the scattering cross section for slow neutrons with energies between 0.43 and 16.1 meV on liquid hydrogen at 15.6 K (which is dominated by the parahydrogen component) using neutron transmission measurements on the hydrogen target of the NPDGamma collaboration at the Spallation Neutron Source at Oak Ridge National Laboratory. The relationship between the neutron transmission measurement we perform and the total cross section is unambiguous, and the energy range accesses length scales where the pair correlation function is rapidly varying. At 1 meV our measurement is a factor of 3 below the data from previous work. We present evidence that these previous measurements of the hydrogen cross section, which assumed that the equilibrium value for the ratio of orthohydrogen and parahydrogen has been reached in the target liquid, were in fact contaminated with an extra nonequilibrium component of orthohydrogen. Liquid parahydrogen is also a widely used neutron moderator medium, and an accurate knowledge of its slow neutron cross section is essential for the design and optimization of intense slow neutron sources. We describe our measurements and compare them with previous work. Inelastic neutron scattering method in hard coal quality monitoring Cywicka-Jakiel, T.; Loskiewicz, J.; Tracz, G. Nuclear methods in mining industry and power generation plants are nowadays very important especially because of the need for optimization of combustion processes and reduction of environmental pollution. On-line analysis of coal quality not only economic benefits but contribute to environmental protection too. 
Neutron methods, especially inelastic scattering and PGNAA, are very useful for the analysis of coal quality, where calorific value, ash and moisture content are the most important parameters. Using Pu-Be or Am-Be isotopic sources and measuring the carbon 4.43 MeV γ-rays from neutron inelastic scattering, 12C(n,n'γ)12C, we can evaluate the calorific value of hard coals with better precision than the PGNAA method. This is mainly because of the large cross-section for inelastic scattering and the strong correlation between carbon content and calorific value, shown in the paper for different coal basins. The influence of moisture on the 4.43 MeV carbon γ-rays is considered in the paper in theoretical and experimental aspects, and an appropriate formula is introduced. The possibilities of determining ash, moisture, Cl, Na and Si in coal are also shown. (author). 11 refs, 15 figs.

Neutron scattering studies of mixed-valence semiconductors
Mignot, J.M. [Laboratoire Leon Brillouin (LLB) - Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]; Alekseev, P.A. [Kurchatov Institute, Moscow (Russian Federation)]
Neutron scattering experiments on the mixed-valence (MV) compound SmB6 are reported. The inelastic magnetic response of SmB6 at T = 2 K, measured on a double-isotope single crystal, displays a strongly damped peak at 35 meV corresponding to the intermultiplet transition of Sm2+. At lower energies (ħω ≈ 14 meV), a narrow magnetic excitation is observed, with remarkable scattering-vector and temperature dependences of its intensity. This novel feature is discussed in terms of recent theoretical works describing the formation of an anisotropic local bound state in semiconducting MV materials. If the average samarium valence is decreased by substituting La for Sm, a peak is found to appear at high energies. The elastic magnetic form factor of SmB6 was determined using polarised neutrons, and no significant difference is observed in its Q-dependence with respect to that of pure divalent samarium. This surprising behaviour is consistent with previous measurements on the gold (high-pressure) phase of SmS. The above results are compared to those already reported for other MV materials.
In particular existing information for TmSe is supplemented by recent inelastic scattering measurements carried out on a large stoichiometric single crystal. (author). 44 refs., 7 figs. Diffuse neutron scattering study of metallic interstitial solid solutions Barberis, P. We studied two interstitial solid solutions (Ni-C(1at%) and Nb-O(2at%) and two stabilized zirconia (ZrO2-CaO(13.6mol%) and ZrO2-Y2O3(9.6mol%) by elastic diffuse neutron scattering. We used polarized neutron scattering in the case of the ferromagnetic Ni-based sample, in order to determine the magnetic perturbation induced by the C atoms. Measurements were made on single crystals in the Laboratoire Leon Brillouin (CEA-CNRS, Saclay, France). An original algorithm to deconvolve time-of-flight spectra improved the separation between elastically and inelastically scattered intensities. In the case of metallic solutions, we used a simple non-linear model, assuming that interstitials are isolated and located in octahedral sites. Results are: - in both compounds, nearest neighbours are widely displaced away from the interstitial, while next nearest neighbours come slightly closer. - the large magnetic perturbation induced by carbon in Nickel decreases with increasing distance on the three first neighbour shells and is in good agreement with the total magnetization variation. - no chemical order between solute atoms could be evidenced. Stabilized zirconia exhibit a strong correlation between chemical order and the large displacements around vacancies and dopants. (Author). 132 refs., 38 figs., 13 tabs Annual report on neutron scattering studies in JAERI, September 1, 1978 - August 31, 1979 Iizumi, Masashi; Endoh, Yasuo Preparation of a bent crystal and its use in neutron scattering Kraxenberger, H. The aim of this thesis was the construction of a horizontally bendable neutron monochromator e.g. analyzator, the application to different measuring problem in neutron scattering, and the development of an exact theory. (HSI) [de Scattering of neutrons by catalase: a study of molecules, subunits, and tubules Randall, J.; Starling, D.; Baldwin, J.P.; Ibel, K. The paper deals with small-angle scattering of neutrons by catalase tetramers, dimers, and monomers and with neutron diffraction by helical assemblies of tetramers in the form of tubules. A preliminary study of catalase is described Magnetic particles studied with neutron depolarization and small-angle neutron scattering Rosman, R. Materials containing magnetic single-domain particles, referred to as 'particulate media', have been studied using neutron depolarization (ND) and small-angle neutron scattering (SANS). In a ND experiment the polarization vector of a polarized neutron beam is analyzed after transmission through a magnetic medium. Such an analysis in general yields the correlation length of variations in magnetic induction along the neutron path (denoted 'magnetic correlation length'), mean orientation of these variations and mean magnetic induction. In a SANS experiment, information about nuclear and magnetic inhomogeneities in the medium is derived from the broadening of a generally unpolarized neutron beam due to scattering by these inhomogeneities. Spatial and magnetic microstructure of a variety of particulate media have been studied using ND and/or SANS, by determination of the magnetic or nuclear correlation length in these media in various magnetic states. This thesis deals with the ND theory and its application to particulate media. 
ND and SANS experiments on a variety of particulate media are discussed. (author). 178 refs., 97 figs., 8 tabs Measurements and applications of neutron multiple scattering in resonance region Ohkubo, Makio Capture yield of neutrons impinging on a thick material is complicated due to self-shielding and multiple scattering, especially in the resonance region. When the incident neutron energy is equal to a resonance energy of the material, capture probability of the neutron increases with sample thickness and reaches a saturation value P sub(CO). There is a simple relation between P sub(CO) and GAMMA sub(n)/GAMMA and the recoil energy by the Monte-Carlo calculation. To examine validity of the relation, P sub(CO) was measured for 19 resonances in 12 nuclides with thick samples, using a JAERI linac time-of-flight spectrometer with Moxon-Rae type gamma ray detector and transmission type neutron flux monitor. Results of the measurements confirmed the validity. With this relation, the GAMMA sub(n)/GAMMA or GAMMA sub(γ)/GAMMA value can be obtained from the measured P sub(CO), and also the level spins be determined by combining the transmission data. Because of the definition of P sub(CO), determination of the resonance parameters is not sensitive to the sample thickness as far as it is sufficiently thick. (auth.) Neutron scattering on liquid He4 at high momentum transfers Parlinski, K. Using the Sears method of expansion of the dynamic structure factor into a series over the inverse powers of the wave vector and five moments of the velocity correlation function, the distribution of neutrons scattered on liquid helium at T=0 K and at the momentum transfer k=14.33 A -1 is calculated. The calculated distribution takes into account the interaction among helium atoms. The distributions are compared with the experimental data. The results show that proper information of the occupation fraction of the zero-momentum state - the condensate - can be obtained by the neutron scatterng method at high-momentum transfers only when the interaction among helium atoms is taken into account. (author) Neutron scatter studies of chromatin structures related to functions Bradbury, E.M. Despite of setbacks in the lack of neutrons for the proposed We have made considerable progress in chromatin reconstitution with the VLR histone H1/H5 and in understanding the dynamics of nucleosomes. A ferromagnetic fluid was developed to align biological molecules for structural studies using small-angle-neutron-scattering. We have also identified and characterized an intrinsically bent DNA region flanking the RNA polymerase I binding site of the ribosomal RNA gene in Physarum Polycephalum. Finally projects in progress are in the areas of studying the interatctions of histone H4 amino-terminus peptide 1-23 and acetylated 1-23 peptide with DNA using thermal denaturation; study of GGAAT repeats found in human centromeres using high resolution Nuclear magnetic Resonance and nuclease sentivity assay; and the role of histones and other sperm specific proteins with sperm chromatin np Elastic-scattering experiments with polarized neutron beams Chalmers, J.S.; Ditzler, W.R.; Hill, D. Measurements of the spin transfer parameters, K/sub NN/ and K/sub LL/, at 500, 650, and 800 MeV are presented for the reaction p-vector d → n-vector pp at 0 0 . 
The data are useful input to the NN database and indicate that the quasi-free charge exchange (CEX) reaction is a useful mechanism for producing neutrons with at least 40% polarization at energies as low as 500 MeV. Measurements of np elastic scattering observables C_LL and C_SL covering 35° to 172° are performed using a polarized neutron beam at 500, 650, and 800 MeV. Preliminary results are presented. 3 refs., 6 figs.

Deconvolution of neutron scattering data: a new computational approach
Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.
In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data, which represent a convolution of the former function with an instrument-dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result, the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error, and down to ∼20 μeV with 10% relative error. (orig.)
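As an aside, the Tikhonov approach described in the abstract above can be illustrated with a minimal sketch. The following Python/NumPy example is purely illustrative and is not the authors' procedure: the Gaussian resolution function, the synthetic Lorentzian "true" spectrum, the energy grid and the regularization parameter are all assumptions chosen for the example, and the non-linear transformation mentioned in the abstract is not reproduced here.

    import numpy as np

    # Energy-transfer grid (meV) and a Gaussian resolution function whose FWHM
    # of 90 micro-eV matches the IN6 figure quoted above; all other choices are
    # illustrative assumptions.
    E = np.linspace(-2.0, 2.0, 401)
    dE = E[1] - E[0]
    sigma = 0.090 / 2.355                      # convert FWHM (meV) to std. dev.

    # Resolution matrix A such that (measured spectrum) = A @ (true spectrum).
    A = np.exp(-0.5 * ((E[:, None] - E[None, :]) / sigma) ** 2)
    A *= dE / (sigma * np.sqrt(2.0 * np.pi))   # each row integrates to ~1

    # Synthetic "true" S_Q(E): a narrow quasi-elastic Lorentzian; add noise
    # after convolution to mimic measured data.
    S_true = (0.05 / np.pi) / (E**2 + 0.05**2)
    rng = np.random.default_rng(0)
    y = A @ S_true + rng.normal(0.0, 0.02, E.size)

    # Tikhonov (ridge) estimate: minimise ||A x - y||^2 + lam * ||x||^2,
    # i.e. solve the normal equations (A^T A + lam * I) x = A^T y.
    lam = 1e-3
    S_est = np.linalg.solve(A.T @ A + lam * np.eye(E.size), A.T @ y)

In practice the regularization parameter would be chosen by a criterion such as the discrepancy principle or cross-validation rather than fixed by hand, and the resolution function would be measured rather than assumed.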
IACR Newsletter
The newsletter of the International Association for Cryptologic Research.
Vol. 25, No. 1, Summer 2010 (Publication date: 15 June 2010).

Registration for Crypto and CHES open
IACR Elections 2009 / Results
Service to members and the cryptographic community
IACR Reading Room
IACR Archive
Reports on past events
List of books for review

Both for Crypto 2010 and CHES 2010, registration is now open. A novum this year: both events are co-located in Santa Barbara, CA, USA.

Crypto 2010, August 15-19
Early bird registration: Thursday, July 15, 2010
Homepage: http://www.iacr.org/conferences/crypto2010/

CHES 2010, August 17-20
Homepage: http://www.cs.ucsb.edu/~koc/ches/ches2010/

I would like to invite you to read the Spring 2010 edition of our Newsletter. Christopher Wolf has done an excellent job in bringing you the news of our association; his initiative to start with book reviews has been an overwhelming success. In spite of the financial crisis and the ash cloud in the European skies, IACR is doing well: we have about 1600 members, and our finances are in good shape. This is only possible through the continuous efforts of the volunteers, who generously donate their time to run the website, update the membership database and archive, manage our finances, and organize our flagship conferences and workshops. I would like to express my sincere thanks to all these volunteers, as well as to those who are involved in the scientific dimension by running program committees, editing the Journal of Cryptology, and reviewing or writing papers.

The Eurocrypt 2010 membership meeting has accepted the proposal from the Board to reduce the membership fee for 2012 (charged during 2011 events) from US$88 to US$70 (and from US$44 to US$35 for students). This reduction is justified because the move towards an electronic infrastructure has resulted in decreasing operating costs. The IACR Board is working on our publication strategy: we are planning to gradually move away from paper as the distribution format (by making it optional), and we are considering evolving towards free access to all our scientific publications.

In 2010, the terms of three Directors and four Officers will expire. If you are interested in contributing to the IACR, please contact the members of the 2010 election committee (Josh Benaloh, Jean-Jacques Quisquater, and Serge Vaudenay).

After a successful trial this winter (see http://www.iacr.org/elections/eVoting/ ), the following resolution was submitted by the IACR Board and approved at the Eurocrypt 2010 membership meeting: "The IACR will adopt the Helios remote e-voting system for future IACR elections, after correcting issues that arose during the recent demo election; this includes the provision of a solution for non-Java clients. At the same time, the IACR will clearly publish a statement that its use of this system does not constitute an endorsement of this or other remote-voting systems for public-sector elections." In the next months, the remaining issues will hopefully be resolved; if so, a second vote will be held at the Crypto 2010 membership meeting. I would like to thank the Helios team (Ben Adida, Olivier de Marneffe, Olivier Pereira) and the IACR e-voting committee (Josh Benaloh, Stuart Haber, Shai Halevi) for their contributions.

Bart Preneel
IACR President

Election of Directors

The elected directors are:
Tom Berson
David Naccache
Serge Vaudenay

Their terms started on 1 January 2010 and will expire on 31 December 2012.
We thank all the candidates, whether they were successful or not, for their significant support of IACR. In total, 325 ballots were cast. The detailed results are also available on the IACR website. The members of the IACR 2009 Election Committee were Josh Benaloh, Ed Dawson, and Christian Cachin.

IACR Conferences

Crypto 2010, August 15-August 19, 2010, Santa Barbara, USA.
Asiacrypt 2010, December 5-December 9, 2010, Singapore, Singapore.
Eurocrypt 2011, May 15-19, 2011, Tallinn, Estonia.
Crypto 2011, August 14-18, 2011, Santa Barbara, USA.
Asiacrypt 2011, December 4-8, 2011, Seoul, Korea.
Eurocrypt 2012, April 15-19, 2012, Cambridge, UK.
Crypto 2012 (tentative), August 19-23, Santa Barbara, USA.

IACR Workshops

Workshop on Cryptographic Hardware and Embedded Systems (CHES 2010), August 17-August 20, 2010, Santa Barbara, USA.
18th International Workshop on Fast Software Encryption (FSE 2011), February 14-February 16, 2011, Lyngby, Denmark.
14th International Conference on Practice and Theory in Public Key Cryptography (PKC 2011), March 6-March 9, 2011, Taormina, Italy.
Theory of Cryptography Conference (TCC 2011), March 27-30, 2011, Providence, RI, USA.

Events in cooperation with IACR

International Conference on Security and Cryptography (SECRYPT 2010), July 26-28, 2010, Athens, Greece.
First International Conference on Cryptology and Information Security in Latin America (Latincrypt 2010), August 8-11, 2010, Puebla, Mexico.
The 17th Annual Workshop on Selected Areas in Cryptography (SAC 2010), August 12-13, 2010, Waterloo, Canada.
6th China International Conference on Information Security and Cryptology (Inscrypt 2010), October 20-23, 2010, Shanghai, China.
Africacrypt 2011, July 4-8, 2011, Dakar, Senegal.

Further events can be found here. You can also add your events or calls for special issues of journals there.

Among others, IACR offers the following benefits:
a. Springer operates the so-called "IACR reading room". You can have online access to the online proceedings of IACR workshops and the Journal of Cryptology. If you don't have access yet, follow the following link.
b. IACR provides a listing of open positions with a focus on cryptology. The listing is available on the Web here and kept up to date on a weekly basis.
c. The Cryptology ePrint Archive provides rapid access to recent research in cryptology. Papers have been placed here by the authors and did not undergo any refereeing process other than verifying that the work seems to be within the scope of cryptology and meets some minimal acceptance criteria and publishing conditions.
d. The proceedings of some past conferences are made available by the IACR in an archive. The copyright for these papers is held by the IACR.
Below are the abstracts of all 42 reviews added since September 2009. You can access the full list via the following link . M.W. Baldoni, C. Ciliberto, and G.M. Piacentini Cattaneo: "Elementary Number Theory, Cryptography and Codes", 2009: The book is an almost classical treatment of number theory and its applications to cryptography and coding theory. It involves more abstract notions than a classical elementary number theory book does and requires the reader to be familiar with certain algebraic structures. A prerequisite to fully benefit from this book would be a course in abstract algebra. I would recommend the book to various readers though the book speaks more to a mathematically mature reader who has a good understanding of abstraction. Review written by Yesem Kurt Peker (Randolph College, Lynchburg, Virginia, USA). (PDF) Publisher: Springer. ISBN: 978-3-540-69199-0 (Date: 2010-06-07) Kim-Kwang R. Choo: "Secure Key Establishment", 2009: This book is targeted for researchers interested in designing secure cryptographic protocols. It begins with analysing and criticising previous security models for protocols and ends with tools to design better protocols. I would recommend this book, since it is a very valuable reference for me. Review written by Lakshmi Kuppusamy (Queensland University of Technology, Brisbane, Australia). (PDF) Noureddine Boudriga: "Security of Mobile Communications", 2005: This book explores security features related to IP-mobility, mobile payments, multimedia applications, VoIP, and SIM-like cards. It includes information about various attacks and architectures capable of providing security features such as authentication, authorization, and access control in mobile communication systems. For this reason I recommend the book as a good resource for those interested in identifying and solving security issues in mobile communication systems and as a starting point for research in secure mobile communication. Review written by S.V. Nagaraj, (Hadhramout University, Yemen). (PDF) Publisher: CRC Press, Taylor & Francis ISBN: 978-0-8493-7941-3 (Date: 2010-06-07) Richard A. Mollin: "Codes: The Guide to Secrecy from Ancient to Modern Times", 2005: This book is an encyclopedic work of a very high standard covering the most widely known and used cryptographic codes throughout history up until 2004 (book published in 2005). As well as describing cryptographic codes, there are pictures and biographies of key personnel in the field, as well as exercises and problems which may be used for creating courses that will reference this book. Review written by Kenneth J. Radke (Queensland University of Technology, Brisbane, Australia). (PDF) Publisher: CRC Press, Taylor & Francis ISBN: 978-1-584-884-705 (Date: 2010-05-27) Hans Delfs and Helmut Knebl: "Introduction to Cryptography, Principles and Applications" (2nd Edition), 2007: I really enjoyed reading this book and I recommend it for students who have very basic understanding of cryptography and want to know more about mathematical basis and deeper concepts underlying cryptography. People who are focused more on topics like security management, system security, and network security are suggested to look for other books for introduction to cryptography. Review written by Hasan Mirjalili, (École Polytechnique Fédérale de Lausanne, Switzerland). 
(PDF) Kerstin Lemke, Christof Paar, Marko Wolf (Eds.): "Embedded Security in Cars", 2006: Although this book was published around four years ago, it remains a very timely summary of security considerations in automotive electronics specification, design and use. Much of the material can be applied generically to embedded electronics, but there are also specific problems in vehicle electronics that need special attention. In any case, this book is an excellent security primer for those working in automotive electronics, and its lessons can be applied to many areas of embedded design beyond that. I commend it. Review written by Andrew Waterhouse, (Pacific Research, Sydney, Australia). (PDF) Lawrence C. Washington: "Elliptic Curves - Number Theory and Cryptography" (2nd Edition), 2008: This book presents the theory of elliptic curves from the ground up leading to advanced topics of that area, including several parts on number theory. It is written in a dense style and is most suited for cryptographers and mathematicians. The book is a very valuable reference and qualifies for self-study. After digesting the book, the reader will have a thorough knowledge on elliptic curves as well as number theory. Half of the book will already be enough for most students and engineers. Review written by Vincent C. Immler (Horst Görtz Institute, Ruhr University Bochum, Germany). (PDF) Richard A. Mollin: "Fundamental Number Theory with Applications" (2nd Edition), 2008: This book, written by a well-known Canadian number theorist, is intended for a one-semester undergraduate introductory course in number theory. Therefore, only undergraduates and the occasional dilettante (which may include professionals from affine branches of science who need this or that elementary result) will find it useful. The presentation flows smoothly and the main results can be perused quickly, although, to gain a deeper understanding, more time has to be devoted to their study. I have found the biographical sketches, with their anecdotical flavor, to be very interesting (it is the lesson I got from this book). Review written by Francesco Sica (University of Calgary, Canada). (PDF) Joachim Biskup: "Security in Computing Systems", 2009: The book tries to focus on the essentials of secure computing and aims to provide a collection of the most promising security mechanisms. To a large extent the book achieves this objective and this is one reason why I recommend this book. It is best suited for readers with a strong background in various aspects of securing computer systems. Jintai Ding, Jason E. Gower, Dieter S. Schmidt: "Multivariate Public Key Cryptosystems", 2006: This book gives an overview of multivariate cryptography. It presents both multivariate schemes and attacks against them in great detail and contains many toy examples for them. The book is suitable both for master students and as a starting point for young researchers who want to start their own work in this new field of cryptography. Unfortunately, some of the more recent developments in multivariate cryptography are not contained in the book. Review written by Albrecht Petzold (TU Darmstadt, Germany). (PDF) Joseph Migga Kizza: "Guide to Computer Network Security", 2009: This book gives a limited overview about ``Computer Network Security''. Although it gives a good historic overview about the topics mentioned it lacks a bit of up-to-dateness. 
Since most of the relevant topics are covered but only reviewed superficially the book is adequate for practitioners or undergraduates but not suitable for researchers. As a nice feature, the author offers additional comprehensive documents on his homepage (like a syllabus and complete set of Powerpoint slides covering a 15-week-course). An additional benefit is also given through the advanced exercises and complex projects at the end of each chapter. Review written by Kilian David (IT Auditor, Germany). (PDF) Publisher: Springer. ISBN: 978-1-84800-916-5 (Date: 2010-03-19) Gildas Avoine, Philippe Oechslin, Pascal Junod: "Computer System Security: Basic Concepts and Solved Exercises", 2007: This book presents about 100 solved exercises on 8 main topics of Computer System Security. Each topic is briefly introduced before proposing the exercises. The exercises test your theoretical knowledge and your ability to solve more pragmatic problems through a few complex exercises. Review written by Eric Diehl (Security Competence Center, Thomson, Rennes, France). (PDF) Publisher: EPFL Press. ISBN: 978-1-420-04620-5 (Date: 2010-03-19) Çetin Kaya Koç: "Cryptographic Engineering", 2009: This book is the first complete introduction to a Cryptographic Engineering. It addresses cryptanalysis of security systems for the purpose of checking their robustness and their strength against attacks, and building countermeasures in order to thwart such attacks by reducing their probability of success. I really recommend Cryptographic Engineering to students and engineers working on implementations of cryptography in real life. As a cryptographic hardware level (ASIC and FPGA) designer, I am going to use this book as a reference in my daily work. Review written by Azzeddine Ramrami (CryptoDisk, France). (PDF) Serge Vaudenay: "A Classical Introduction to Cryptography: Applications for Communications Security", 2006: This book is aimed at bridging the gap between cryptography and its standard applications. Most of the sections are rich in theory and hence, from my point of view, this is more suitable for research than for industry purposes. For me, it is one of the most precious books that I ever had and will always be on my shelf for any quick reference. Review written by Jothi Rangasami (Queensland University of Technology, Brisbane, Australia). (PDF) Frank Nielsen: "A Concise and Practical Introduction to Programming Algorithms in Java", 2005: The book at hand by Frank Nielsen is a textbook mainly targeted to undergraduate students as a very first course in programming. Following the demands of the targeted audience, the book introduces the topics programming and algorithms without requiring prior knowledge. More advanced topics and concepts such as for example object orientation are intentionally omitted in order to stay focused with the book's goal. This book is not only a valuable source for undergraduate students but also for lecturer who can benefit from this book in terms of a source for many programming examples and exercises. Review written by Luigi Lo Iacono (NEC Laboratories Europe, Heidelberg, Germany). (PDF) Jürgen Rothe: "Complexity Theory and Cryptology - An Introduction to Cryptocomplexity", 2005: This book about complexity theory and its application in modern cryptology is interesting and highly valuable for educational purposes, mainly because it yields a new and ingenious way to access modern cryptographic research results. 
The target audience comprises undergraduate and graduate students in computer science, mathematics, and engineering, but the book is also recommended reading (and a valuable source of information) for researchers, university teachers, and practitioners working in the field. Furthermore, it is exceptionally well suited for self-study. This makes the book so unique that it should be part of any library on cryptology or complexity theory. Review written by Rolf Oppliger (eSECURITY Technologies and University of Zurich, Switzerland). (PDF) Jürgen Rothe: "Komplexitätstheorie und Kryptologie - Eine Einführung in die Kryptokomplexität", 2008: The book under discussion is the German-language translation of the book Complexity Theory and Cryptology - An Introduction to Cryptocomplexity (see above). It covers complexity theory and its application in cryptology and is valuable from a didactic point of view, in particular because it provides a new and, in its way, unique access to research results of modern cryptography. The book is aimed at students of computer science, mathematics, and engineering. Naturally, the book can also be recommended to researchers, lecturers, and practitioners. Finally, the book is also well suited for self-study. In terms of topic and structure, the book is so unique that it belongs in every library on cryptology or complexity theory. Dirk Henrici: "RFID Security and Privacy", 2008: This book presents the topic of RFID Security and Privacy in the framework of pervasive computing. Written in a dense style, which requires careful digestion and analysis, this book presents a novel and very useful picture of an outspread RFID system with many tag owners and tags, interacting in a standardised infrastructure. I would strongly recommend this book to anyone interested in an in-depth study of the potential uses and constraints of large-scale RFID authentication. A preferred target would be academic researchers in this field, although the practical considerations included in this work may interest industry research labs as well. Review written by Cristina Onete (CASED - Center for Advanced Security Research Darmstadt, Germany). (PDF) Martin Aigner, Günter M. Ziegler: "Proofs from THE BOOK, 4th Edition", 2010: "The Book", as promulgated by Paul Erdős, is God's collection of the most elegant proofs of any and all mathematical theorems, including those still to be discovered. In "Proofs from THE BOOK" Martin Aigner and Günter M. Ziegler attempt to gather together a collection of proofs which, in their opinion, should be included in "The Book". Browsing through the proofs one gets a sense of the rich creative process involved in proving theorems. "Proofs from THE BOOK" is written in a relaxed style which can best be described as a blend between a university-level textbook and an article from Scientific American. It is highly recommendable, for unlike many popularizations of science and mathematics, it delves into real theorems, not muddy metaphors or inconsistent analogies. Review written by Gregory Kohring (Freelance Analyst, Germany). (PDF) Douglas Jacobson: "Introduction to Network Security", 2009: This book gives a good overview of Network Security. It starts from the lower layer and shows how each other layer can contribute to the overall security of the system.
On the one hand students in Computer Science / Network Security might be interested in this book and on the other hand security professionals can use it as a convenient reference book. It won't get dusty on my shelf, as it contains so much precious information, and is enjoyable to read. Review written by Olivier Blazy (Ecole Normale Supérieure, Paris, France). (PDF) Publisher: CRC Press, Taylor & Francis ISBN: 978-1-58488-543-6 (Date: 2010-02-01) Hsinchun Chen, Edna Reid, Joshua Sinai, Andrew Silke, Boaz Ganor: "Terrorism Informatics", 2009: The book gives a good state of the art of the Terrorism Informatics field, focusing mainly on methodological issues in the first part and on how to handle suspicious data in the second. Its audience is very broad: on the one hand, specialists (scientists, experts, policy makers) can find useful information; on the other hand, the book is really accessible to students. Jonathan Katz, Yehuda Lindell: "Introduction to Modern Cryptography", 2008: This book is a comprehensive, rigorous introduction to what the authors name "Modern" Cryptography. One of the book's best qualities is the remarkably logical and systematic style in which the authors present several cryptographic primitives and constructions. A disadvantage of this book in my opinion is that it does not delve deeper into cryptographic methods such as authentication with limited resources, such as RFID, or PUF-based authentication. The reader must be familiar with some basic mathematical concepts and the science of proving statements, thus this book is not suited for the industry but rather for graduate students. However, even a versed cryptographer will benefit from the rigorous and complete treatment of the mentioned topics. I would heartily recommend this book to anyone who is interested in cryptography. Publisher: CRC Press, Taylor & Francis Group. ISBN: 978-1-58488-551-1 (Date: 2010-01-13) Gregory V. Bard: "Algebraic Cryptanalysis", 2009: This book introduces the predominant topics in multivariate-based cryptanalysis. It can be described as a complementary textbook in the field of algebraic attacks, drawing on the author's experience and knowledge. For a person who does not know much about algebraic cryptanalysis, this book is a good starting point. Review written by Wael Said Abdel mageed Mohamed (Cryptography and Computeralgebra, Informatik, TU Darmstadt, Germany). (PDF) Friederich L. Bauer: "Historische Notizen zur Informatik" (German), 2009: This book is a collection of trivia about the history of computer science and mathematics. You can learn this and that from it, but it is nevertheless a book to enjoy reading. Maybe a nice gift for everybody from this field who likes to read. Review written by Jannik Pewny (Horst Görtz Institute, Ruhr University Bochum, Germany). (PDF) M. Jason Hinek: "Cryptanalysis of RSA and its Variants", 2010: This book sums up traditional attacks on RSA and gives a lot of information about the newer lattice-based attacks. It uses a lot of mathematics, but explains it pretty well. It seems like a very good book to get an overview of attacks on RSA. Publisher: CRC Press, Taylor & Francis Group. ISBN: 978-1-4200-7512-2 (Date: 2010-01-08) Song Y. Yan: "Primality Testing and Integer Factorization in Public-Key Cryptography", 2009: The author knows how to show that "the theory of numbers is one of the most beautiful and pure parts of mathematics" and how to fascinate the reader with this subject. The book can be recommended without any restrictions.
It is suitable as text book and/or reference book for anybody interested in Primality Testing or Integer Factorization being student, researcher or amateur. This book will definitely not get dusty in the reviewer's book shelf! Review written by Joerg Gerschuetz (International School of IT Security, Bochum, Germany). (PDF) Yan Sun, Wade Trappe and K.J.R. Liu: "Network-Aware Security for Group Communications", 2008: This book gives an introduction to group key management protocols in different network settings. It can be recommended to early researchers in the areas of group key management, secure multicast and secure communication in sensor networks. The book discusses various security issues in group communications in a network-aware approach. However, it fails to show how to rigorously analyze group key management protocols with respect to these identified security issues. Review written by Choudary Gorantla (Information Security Institute, Queensland University of Technology, Australia). (PDF) Darel W. Hardy, Fred Richman, and Carol L. Walker: "Applied Algebra - Codes, Ciphers, And Discrete Algorithms", 2009: The book introduces algebraical concepts which are used in cryptography and coding and shows their applications in these fields. The strength of the book is clearly the number of examples which on the other side in some case unfortunately leads to a lack of general definitions and theorems. Therefore this book is suitable for student who prefer learning by doing (the book provides many exercise) but is not suitable as a handbook. I would also not recommend the book for mathematics student or students which already have a good mathematical background or a strong background in cryptography or coding as they would know already large parts of the book. Review written by Julia Borghoff (DTU Mathematics, Technical University of Denmark). (PDF) Johannes Buchmann: "Introduction to Cryptography", 2004: As the title states the book by Johannes Buchmann provides an introduction to cryptography. It gives a general mathematical background in the beginning and particular mathematical preliminaries are provided at the time they are needed to understand some specific cryptographic method. This text is recommended for undergraduate students or readers who want to get an overview of some modern cryptographic methods and their mathematical preliminaries, like for example RSA and DES. Review written by Mohamed Saied Emam Mohamed (Technical University of Darmstadt, Germany). (PDF) Publisher: Springer. ISBN: 0-387-21156-X (Date: 2010-01-05) Michael Hafner and Ruth Breu: "Security Engineering for Service-Oriented Architectures", 2009: The book by Hafner and Breu gives an overview on how to systematically design and realize security-critical service-based applications following the model-driven development methodology. Whenever the book talks about SOA or services, it is talking about the technical realisation of SOA using SOAP and related technologies and standards. Currently the audience mainly benefiting from this book is regarded students and researchers. Peter H. Cole and Damith C. Ranasinghe: "Networked RFID Systems and Lightweight Cryptography", 2008: This book is a comprehensive guide to networks of Radio Frequency Identification (RFID) based Electronic Product Codes (EPCs) in supply chains. 
Written in a fluent, but not overworded fashion, this work represents both a good starting point for students beginning to work in the area of RFID, and a reference for those who are rather more advanced in this field. It provides a great background for those interested in the topic of RFID in general and supply-chain RFID in particular. A preferred target audience would be researchers in this field, rather than those working in the industry. Further study of the various references quoted in the book is not only recommendable, but necessary, as the authors present the topics of other papers or books only succinctly. F.L. Bauer: "Decrypted Secrets - Methods and Maxims of Cryptology", 2009: As the subtitle reveals, the book discusses different methods and maxims of cryptology. This book can be recommended to everyone who has mathematical, computer science, historical or linguistic interests in cryptography. There are different ways of approaching this book. Due to its vivid style, it can be read linearly as a novel, but it can also be used as a reference work for specific topics. Review written by Denise Reinert (ISEB - Institute for Security in E-Business, Ruhr University, Bochum, Germany). (PDF) F.L. Bauer: "Entzifferte Geheimnisse - Methoden und Maximen der Kryptologie" (German), 2009: As the subtitle already reveals, the book Entzifferte Geheimnisse discusses various methods and maxims of cryptology. This book can be recommended to anyone who is interested in cryptography from a mathematical, computer science, historical or linguistic point of view. There are different ways of approaching the book. Thanks to its lively style it can certainly be read linearly like a novel, but it is also suitable as a reference work for individual topics. Massimiliano Sala, Teo Mora, Ludovic Perret, Shojiro Sakata, and Carlo Traverso (Editors): "Gröbner Bases, Coding, and Cryptography", 2009: The book edited by Max Sala and other renowned experts is a collection of chapters and small notes devoted to the application of Gröbner bases in coding and cryptography. Gröbner bases appeared in the 1960s and are nowadays an established tool in computational algebra. Quite recently, applications of this technique have been found in coding theory (decoding, finding the minimum distance) and cryptology (multivariate-based cryptography, algebraic cryptanalysis). This book has all the material needed to get an overview of this exciting area. Review written by Stanislav Bulygin (CASED - Center for Advanced Security Research Darmstadt, Germany). (PDF) Keith Mayes and Konstantinos Markantonakis (editors): "Smart Cards, Tokens, Security and Applications", 2008: This book is an introduction to the world of smart cards and secure components. It describes some of the main applications using smart cards: mobile phones, banking, Pay TV and ID cards. It briefly explores advanced topics such as life cycle management, development environments (Java Card, MultOS, SIM toolkit, ...) or Common Criteria. If you're looking for a quick tour of smart cards, then this may be your book. Abhishek Singh and Baibhav Singh: "Identifying Malicious Code Through Reverse Engineering", 2009: This book gives a brief introduction to assembly, shows what a PE looks like and what vulnerabilities look like in assembly code, and points out some stumbling blocks when reverse-engineering code. It is full of spelling mistakes and does not cover the topic the title promises.
Carlos Cid, Sean Murphy, and Matthew Robshaw: "Algebraic Aspects of the Advanced Encryption Standard", 2006: In their book the authors give an algebraic perspective of the Advanced Encryption Standard (AES). The way the book is written is overall pleasant. The reader who is ok with mathematical language should have no problem reading it. The material is not overwhelmed with heavy mathematical results/proofs/notions. Considering that the book contains also necessary mathematical background overview, it is readable for engineers and cryptographers without a particular pre-knowledge of algebra. Publisher: Springer. ISBN: 0-387-24363-1, 978-0-387-24363-4 (Date: 2009-11-02) Song Y. Yan: "Cryptanalytic Attacks on RSA", 2008: The book is the state of the art encyclopedia of RSA encryption algorithm. It is well-structured and can be used as lecture notes for any university cryptographic course or student research project. It is the most relevant and self-explanatory book about RSA and is very helpful for students and teachers. Review written by Yuriy R. Aydarov (Perm State University, Russia). (PDF) Adam J. Elbirt: "Understanding and Applying Cryptography and Data Security", 2009: And now how do I implement that? If you have some day wondered how to implement your cryptographic result, this book is here to help you... From symmetric-key to public-key cryptography, from signatures to MAC, you'll may find the answer you are looking for in there. Karl de Leeuw and Jan Bergstra (editors): "The History of Information Security - A Comprehensive Handbook", 2007: This magisterial book, of almost 900 pages, has joined Kahn, Yardley and Welchmann on my shelf of serious reference works. Yet it contains much that I found new, surprising and even delightful, despite a quarter century of working in the field. Review written by Ross Anderson (University of Cambridge, Computer Laboratory). (PDF) Publisher: Elsevier. ISBN: 0444516085, 978-0444516084 (Date: 2009-10-27) Shiguo Lian: "Multimedia Content Encryption: Techniques and Applications", 2009: This book gives a good starting point for research concerning the special requirements multimedia content has of cryptography. It takes various types of encryption, compression, watermarking and fingerprinting into account. Readers with background in cryptography and interest in the topic of multimedia encryption should be satisfied. Publisher: CRC Press, Taylor & Francis Group. ISBN: 1-4200-6527-0, 978-1-4200-6527-5 (Date: 2009-10-20) Gabriel Valiente: "Combinatorial Pattern Matching Algorithms in Computational Biology using Perl and R", 2009: The book holds what its cover promises: It is a well-sorted collection of pattern matching algorithms that are used to work with problems in computational biology. Only shortcoming is the missing runtime-analysis. All in all, it is recommended, in particular for students of computational biology or bioinformatics. Publisher: CRC Press, Taylor & Francis Group. ISBN: 1-4200-6973-X, 978-1-4200-6973-0 (Date: 2009-10-07) Crypto 2009 , August 16-20th, 2009, Santa Barabara, USA CRYPTO 2009 was held, as always, in Santa Barbara, California, from August 16 to 20 under mostly-clear skies. The schedule was standard, with catered dinners Sunday and Monday along with a beach bbq on Wednesday. In attendance were 352 delegates from 37 countries, 113 being students. Delegates enjoyed a schedule of 38 regular conference presentations and two invites talks, one by Ed Felten and the other by Ueli Maurer. Dan Bernstein was Rump Session chair. 
Video and slides for most talks and the rump session can be found on the conference site at http://www.iacr.org/conferences/crypto2009/program.html. Providing a small-font copy of the conference schedule as a badge insert turned out to be a hit, as were the black fleece vests. Any delegates who paid full registration and did not receive a vest should have been mailed one; if not, contact CRYPTO 2009 General Chair John Black. The conference puzzle was $\sum_{n=0}^\infty (2n^7+n^6+n^5)/n!$, which turned out to be 2009e (a short verification is sketched below). Eight correct answers were received, with the winner being randomly drawn from them to earn his locally-produced bottle of wine. Many thanks to all who worked hard to produce this successful conference, especially to Program Chair Shai Halevi and his Program Committee along with Sally Vito and her staff as well as the tireless IACR board.

Asiacrypt 2009, December 6-10, 2009, Tokyo, Japan

Asiacrypt 2009 was held in Tokyo, Japan, from December 6 to 10. There were more than 300 participants from over 35 countries. The program included 41 papers selected out of 300 submissions, a rump session in the evening of December 7 and an IACR distinguished lecture given by Dr. Tatsuaki Okamoto in the afternoon session of December 9. All the technical sessions were held at Hitotsubashi Memorial Hall, and the banquet was held at Meiji Kinenkan in the evening of December 9 with Japanese food and Mochitsuki (rice cake pounding) and Koto (traditional Japanese instrument) performances. General chair was Eiji Okamoto and program chair was Mitsuru Matsui.

Eurocrypt 2010, May 30-June 3, 2010, French Riviera, France

Eurocrypt 2010 was held May 30-June 3, 2010 on the French Riviera: the technical sessions were held at the Grimaldi Forum in Monaco while the social event took place on the beach in Nice. The conference was attended by 389 participants, 88 of whom were students. The participants came from 40 countries. After an intensive reviewing process (606 reports were produced) of the 188 valid submissions, 33 papers were eventually selected (a 17.6% acceptance rate). The conference program covered a very large range of topics, among them lattice-based designs, cryptanalyses, and cryptographic protocols. The best paper award was given to David Cash, Dennis Hofheinz, Eike Kiltz, and Chris Peikert for their paper "Bonsai Trees, or How to Delegate a Lattice Basis". Moti Yung gave the 2010 IACR Distinguished Lecture entitled "Cryptography between Wonderland and Underland". Dan Bernstein and Tanja Lange served as rump session chairs and their four vuvuzela holders enforced the time schedule for the 23 presentations. The full program can be seen at http://crypto.rd.francetelecom.com/events/eurocrypt2010/program. The conference organizers are grateful to the sponsors I3S, Ingenico, Microsoft, Nagravision, Oberthur, Orange Labs, Sagem Sécurité, Technicolor, and Qualcomm for their generous support in a difficult economic setting. Qualcomm's support allowed the fees of 7 student attendees to be waived. Program chair was Henri Gilbert, general chairs were Olivier Billet and Matt Robshaw.

CHES 2009, September 6-9, 2009, Lausanne, Switzerland

The 11th International Workshop on Cryptographic Hardware and Embedded Systems (CHES 2009) was held in Lausanne, Switzerland, from September 6 to 9. With 312 registered participants from 32 countries (including 70 students), it was not only the largest CHES ever, but also the largest IACR International Workshop.
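For readers who want to check the CRYPTO 2009 puzzle quoted above: by Dobinski's formula, $\sum_{n=0}^{\infty} n^k/n! = B_k\,e$, where $B_k$ is the $k$-th Bell number, and with $B_5 = 52$, $B_6 = 203$, $B_7 = 877$ one gets
$$\sum_{n=0}^{\infty}\frac{2n^7+n^6+n^5}{n!} = (2\cdot 877 + 203 + 52)\,e = 2009\,e.$$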
The local organization was in the hands of the Laboratory for Cryptologic Algorithms (LACAL) at the École Polytechnique Fédérale de Lausanne (EPFL). The conference venue was at the EPFL. On Monday evening, the participants enjoyed a dinner cruise on Lake Geneva, appreciated the Lavaux vineyards (a designated part of the UNESCO World Heritage) and the Chillon Castle, and experienced the sunset in the Alpine scenery. The Rump Session took place at the Casino Montbenon, a centre for cultural and social events, surrounded by magnificent gardens with an unparalleled view of the mountains and the lake. The CHES program committee received 148 submissions (the largest number of submissions ever for CHES). After an intensive review and discussion process, 29 regular papers (19.8%) were accepted. The program was complemented by three invited talks: Srini Devadas spoke on "Physical Unclonable Functions and Secure Processors"; Christof Paar on "Crypto Engineering: Some History and Some Case Studies" and Randy Torrance on "The State-of-the-Art in IC Reverse Engineering". The program included two special sessions, "DPA Contest" chaired by Elisabeth Oswald and "Benchmarking of Cryptographic Hardware" chaired by Patrick Schaumont. Three best papers were awarded: "Faster and Timing-Attack Resistant AES-GCM" by Emilia Käsper and Peter Schwabe, "Hardware Accelerator for the Tate Pairing in Characteristic Three Based on Karatsuba-Ofman Multipliers" by Jean-Luc Beuchat, Jérémie Detrey, Nicolas Estibals, Eiji Okamoto, and Francisco Rodríguez-Henríquez, and "A New Side-Channel Attack on RSA Prime Generation" by Thomas Finke, Max Gebhardt and Werner Schindler. The rump session, chaired by Guido Bertoni, consisted of 9 scientific presentations and 7 humorous talks and announcements. Program co-chairs were Kris Gaj and Christophe Clavier, General chair was Marcelo E. Kaihara.

FSE 2010, February 7-10, 2010, Seoul, Korea

FSE 2010 was held in Seoul, Korea, from February 7 to 10, 2010. The program included 21 papers covering a wide range of aspects of symmetric cryptography and two invited talks: "The Survey of Cryptanalysis on Hash Functions" by Xiaoyun Wang and "A Provable-Security Perspective on Hash Function Design" by Thomas Shrimpton. Also, the program included the rump session, which was organized and chaired by Orr Dunkelman. The Program Committee selected the paper "Attacking the Knudsen-Preneel Compression Functions" by Onur Özen, Thomas Shrimpton, and Martijn Stam to receive the best paper award. The workshop took place at the Koreana Hotel, and there were 118 participants from 25 countries. On Monday evening, participants enjoyed NANTA, a funny, non-verbal, and rhythmical Korean performance. The gala dinner was held at the traditional Korean restaurant SamcheongGak on Tuesday evening. The workshop organizers gratefully acknowledge CIST, Korea University and the Korea Institute of Information Security and Cryptology (KIISC) for their support in organizing the workshop. The financial support by the Electronics and Telecommunications Research Institute (ETRI), Ellipsis, Korea University, LG CNS, and the National Institute for Mathematical Science (NIMS) is also gratefully acknowledged. Program Co-chairs were Seokhie Hong and Tetsu Iwata, and General Co-chairs were Jongin Lim and Jongsung Kim.

TCC 2010, February 9-11, 2010, Zurich, Switzerland

TCC 2010 took place at ETH Zurich from February 9 to 11. More than 100 researchers from 19 countries attended the conference.
The technical program included 33 talks on accepted papers and two invited talks, given by Jan Camenisch ("Privacy-enhancing cryptography: From theory into practice") and Yuval Ishai ("Secure computation and its diverse applications"). Furthermore, there were 18 short talks at the rump session, which was chaired by Nelly Fazio. The social program included, besides the usual lunches, coffee breaks, and rump session dinner, a fondue cooked by the participants and a farewell reception with fresh juices and fruits. The conference was sponsored by Credit Suisse, Microsoft Research, Omnisec, and Google. Furthermore, we had sponsored chocolate from Laederach and sponsored coffee from Nespresso. Program chair was Daniele Micciancio, General chairs were Martin Hirt and Ueli Maurer.

Public Key Cryptography (PKC 2010), May 26-28, 2010, ENS Paris, France

The 13th International Conference on Practice and Theory in Public Key Cryptography (PKC 2010) was held at the École Normale Supérieure (ENS) in Paris, France from May 26 to 28, 2010. With 162 registered participants (including 41 students) from 26 countries, it was the biggest PKC ever. Though most of the participants came from France (49), several countries such as Japan (18), the United States (17), the United Kingdom (13), Germany (11), and China (7) were also well represented. The local organization was led by the ENS Crypto Team and the Office for Courses and Colloquiums (Bureau des Cours-Colloques) of the French National Institute for Research in Computer Science and Control (INRIA). The conference received a record number of 145 submissions. After an intensive review and discussion process, 29 submissions were selected for publication and presentation at the conference. The final program was well balanced, covering various aspects of public key cryptography. The best paper was awarded to Petros Mol and Scott Yilek for their paper "Chosen-Ciphertext Security from Slightly Lossy Trapdoor Functions". The full program also included two invited talks. Jacques Stern from ENS gave a talk titled "Mathematics, Cryptography, Security" and Daniele Micciancio from UCSD spoke about "Duality in Lattice Based Cryptography". The social program involved a banquet at La Maison Des Polytechniciens on the second evening and a cocktail party at ENS on the last day of the conference. The conference organizers would like to thank our sponsors Google, Ingenico, and Technicolor for their financial support as well as ENS for hosting the conference. Program Co-chairs were Phong Q. Nguyen and David Pointcheval, and General Co-chairs were Michel Abdalla and Pierre-Alain Fouque.

International Conference on Information Theoretic Security (ICITS 2009), December 3-6, 2009, Shizuoka, Japan

The 4th International Conference on Information Theoretic Security (ICITS 2009) was held in Shizuoka, Japan from December 3 to 6. There were 75 participants from over 13 countries. The meeting took place at the Shizuoka Convention & Arts Center "GRANSHIP". The banquet was held at the Fugetsuro Restaurant and featured live performances highlighting traditional Japanese culture, to the delight of all those attending. The ICITS program committee received 50 submissions. After an intensive review and discussion process, 13 papers were accepted. The program was complemented by 6 invited talks.
Yevgeniy Dodis "Leakage-Resilience and The Bounded Retrieval Model", Masato Koashi "Security of Key Distribution and Complementarity in Quantum Mechanics", Kazukuni Kobara "Code-Based Public-Key Cryptosystems And Their Applications", Prakash Narayan "Multiterminal Secrecy Generation and Tree Packing", Adi Shamir "Random Graphs in Security and Privacy" and Adam Smith "What Can Cryptography Do for Coding Theory?". The full program and slides of the invited speakers are available from the following URL. The conference was received financial support from Support Center for Advance Telecommunications Technology Research and Kayamori Foundation of Informational Science Advancement. We also received local support from Shizuoka Convention & Visitors Bureau. Program Chair was Kaoru Kurosawa, General Chair was Akira Otsuka. Inscrypt 2009 , December 12-15, 2009, Beijing, China Inscrypt 2009 was held in Beijing China from December 12 to 15, there were nearly 130 participants from 17 countries. This conference was held in Beijing Friendship Hotel. The banquet was held at the Ju Xiu Yuan Friendship Palace on the evening of December 14. The conference organizers are State Key Laboratory of Information Security and Chinese Association for Cryptologic Research. Programme chairs are Feng Bao and Moti Yung, General chair was Dengguo Feng. The books below are available for review. If you are interested or have any other question regarding the IACR book reviewing system, please contact Axel Poschmann (Nanyang Technological University, Singapore) via books at iacr.org . New book reviews are posted continiously. If you are interested in reviewing any other books from Taylor & Francis or Springer, please send me an eMail, too. I am pretty sure that I can organize this book. I did not try yet for other publishers, but the process is pretty straight forward, i.e. if you want to review a book from any other publisher, send me an eMail, too. However, it may take a while. Reviewing Guidelines So, what should a review look like? Keep in mind that your review should be helpful for the reader. So summarize its content and then give examples for very good and very bad parts. Give an overall conclusion (e.g. this book could be particular helpful for the following group, is over the top / too easy for...). If your review is longer than the book or shorter than the text on its back, something went wrong. Apart from that, there are not guidelines. Just start reviewing and assume you would be reading your review. Would you like it? So the key questions are: What is this book about (summary)? What is the book like (style)? Would you recommend this book (if yes: for whom?)? Would your review be helpful for yourself ? Prefered format is PDF, see previous reviews or our LaTeX-Template . In addition, I need a 3-10 line "teaser" which more or less summarizes the whole review. In addition, you can also look at other reviews to get an idea what to cover. When requesting a book, please do also include your surface address! After receiving the book, you have 2 month to complete the review. If you have any further questions, please contact Axel Poschmann via books at iacr.org . Please note that every book is only reviewed once and books currently under review are marked in the list below as follows: [Date Name] . Go to titles from: Below you find a selection of books from Springer. Further titles are available via Springer's website . 
Adjeroh: The Burrows-Wheeler Transform [done Gregory Kohring] Aigner: Proofs from THE BOOK [done Gregory Kohring] Aigner: Das BUCH der Beweise [German] [2009-12-17 Abdelhak Azhari] Baigneres: A Classical Introduction to Cryptography Exercise Book [done Yesem Kurt Peker] Baldoni: Elementary Number Theory, Cryptography and Codes [done Wael Said Abd Elmageed Mohamed] Bard: Algebraic Cryptanalysis [done Denise Reinert] Bauer: Decrypted Secrets [done Denise Reinert] Bauer: Entzifferte Geheimnisse [German] [done Jannik Pewny] Bauer: Historische Notizen zur Informatik [German] [!2010-05-01 Sebastian Gajek] Bella: Formal Correctness of Security Protocols [!2010-02-28 Ludovic Perret] Bernstein: Post-Quantum Cryptography Biggs: Codes: An Introduction to Information Communication and Cryptography [done S.V.Nagaraj] Biskup: Security in Computing Systems Buchmann: Binary Quadratic Forms [done Mohamed Saied Emam Mohamed] Buchmann: Introduction to Cryptography Calmet: Mathematical Methods in Computer Science [2009-08-19 Joerg Schwenk] Camp: Economics of Identity Theft [done Olivier Blazy] Chen: Terrorism Informatics [!!2010-06-15 Lakshmi Kuppusamy] Choo: Secure Key Establishment [done Stanislav Bulygin] Cid: Algebraic Aspects of the Advanced Encryption Standard [done Cristina Onete] Cole: Networked RFID Systems and Lightweight Cryptography [!2010-06-15 Meiko Jensen] Damiani: Open Source Systems Security Certification [done Seyyd Hasan Mirjalili] Delfs: Introduction to Cryptography [not yet published Safuat Hamdy] Desmedt: Secure Public Key Infrastructure Dietzfelbinger: Primality Testing in Polynomial Time [done Albrecht Petzold] Ding: Multivariate Public Key Cryptosystems Di Pietro: Intrusion Detection Systems Fine: Number Theory Gomes: Implicit Curves and Surfaces: Mathematics, Data Structures, and Algorithms [done Luigi Lo Iacono] Hafner: Security Engineering for Service-Oriented Architectures [done Cristina Onete] Henrici: RFID Security and Privacy [!2010-04-09 Paolo Palmieri] Higgins: Number Story Hoffstein: An Introduction to Mathematical Cryptography Hromkovic: Algorithmic Adventures [not yet published Marc Joye] Katz: Digital Signatures [done Kilian David] Kizza: Guide to Computer Network Security Koblitz: Random Curves [done Azzeddine Ramrami] Koç: Cryptographic Engineering Kuo: Precoding Techniques for Digital Communication Systems [2010-01-07 Joerg Gerschuetz] Lee: Botnet Detection [done Andrew Waterhouse] Lemke: Embedded Security in Cars Li: An Introduction to Kolmogorov Complexity and Its Applications [!2010-05-30 Arnaud Tisserand] Mangard: Power Analysis Attacks [done Eric Diehl] Mayes: Smart Cards, Tokens, Security and Applications Mehlhorn: Algorithms and Data Structures [2010-01-27 Ulrich Dürholz] Micheloni: Error Correction Codes for Non-Volatile Memories [done Luigi Lo Iacono] Nielsen: A Concise and Practical Introduction to Programming Algorithms in Java Onieva: Secure Multi-Party Non-Repudiation Protocols and Applications [2010-03-10 Luigi Lo Iacono] Paar: Understanding Cryptography - A Textbook for Students and Practioners Portnoy: Global Initiatives to Secure Cyberspace Robshaw: New Stream Cipher Designs Rodríguez-Henríquez: Cryptographic Algorithms on Reconfigurable Hardware Rosen: Concurrent Zero-Knowledge [done Rolf Oppliger] Rothe: Komplexitätstheorie und Kryptologie [German] [2010-03-31 Eric Diehl] Rousseau: Mathematics and Technology Salomon: A Concise Introduction to Data Compression [done Stas Bulygin] Sala: Gröbner Bases, Coding, and Cryptography Sammes: Forensic Computing 
Schellekens: A Modular Calculus for the Average Cost of Data Structuring [!2010-01-30 Erik Tews] Schneier: Beyond Fear Schroeder: Number Theory in Science and Communication Shi: Transactions on Data Hiding and Multimedia Security III [done Jannik Pewny] Singh: Identifying Malicious Code Through Reverse Engineering [2010-02-28 Steven Galbraith] Stichtenoth: Algebraic Function Fields and Codes Stolfo: Insider Attack and Cyber Security [done Choudary Gorantla] Sun: Network-Aware Security for Group Communications Traynor: Security for Telecommunications Networks Tuyls: Security with Noisy Data Vadhan: A Study of Statistical Zero-Knowledge Proofs [done Jothi Rangasamy] Vaudenay: A Classical Introduction to Cryptography Vöcking: Taschenbuch der Algorithmen [German] [!2010-07-15 Mario Strefler] Wang: Computer Network Security [done Joerg Gerschuetz] Yan: Primality Testing and Integer Factorization in Public-Key Cryptography [done Yuriy Aydarov] Yan: Cryptanalytic Attacks on RSA Yeung: Information Theory and Network Coding Below you find a selection of books from Taylor & Francis. Titles added on 12 February 2010 are marked with New! at the beginning. Further titles are available via Taylor & Francis's website . Acquisti, A.: Digital Privacy: Theory, Technologies, and Practices [done Eric Diehl] Avoine, Gildas: Computer System Security: Basic Concepts and Solved Exercises Blanchet-Sadri, Francine: Algorithmic Combinatorics on Partial Words [done S.V. Nagaraj] Boudriga, N.: Security of Mobile Communications Brualdi, Richard A.: A Combinatorial Approach to Matrix Theory and Its Applications Chartrand, Gary: Chromatic Graph Theory Cohen, H.: Handbook of Elliptic and Hyperelliptic Curve Cryptography Elaydi, Saber N.: Discrete Chaos, Second Edition: With Applications in Science and Engineering [done Olivier Blazy] Elbirt, Adam J.: Understanding and Applying Cryptography and Data Security Erickson, Martin: Introduction to Number Theory Gross, Jonathan L.: Combinatorial Methods with Computer Applications Gould, Ronald J: Mathematics in Games, Sports, and Gambling [done Julia Borghoff] Hardy, Darel W.: Applied Algebra: Codes, Ciphers and Discrete Algorithms, Second Edition Heubach, Silvia: Combinatorics of Compositions and Words [done Jannik Pewny] Hinek, M. Jason: Cryptanalysis of RSA and Its Variants Hsu, Lih-Hsing: Graph Theory and Interconnection Networks [done Olivier Blazy] Jacobson, Douglas: Introduction to Network Security Johnson, Norman: Handbook of Finite Translation Planes [2009-08-18 David M'Raihi / !2010-01-31 Julia Borghoff] Joux, Antoine: Algorithmic Cryptanalysis [done Cristina Onete] Katz, Jonathan: Introduction to Modern Cryptography: Principles and Protocols [!2010-02-22 Ladan Mahabadi] Katz, Jonathan: Introduction to Modern Cryptography: Principles and Protocols Kirovski, D.: Multimedia Watermarking Techniques and Applications [2010-03-08 Cristina Onete] Kitsos, P.: Security in RFID and Sensor Networks Koolen, Jack: Applications of Group Theory to Combinatorics [done Jannik Pewny] Lian, Shiguo: Multimedia Content Encryption: Techniques and Applications [? 
2009-08-24 Landan Mahabadi] Lian, Shiguo: Multimedia Content Encryption: Techniques and Applications Lindner, Charles C.: Design Theory, Second Edition Macaulay, T.: Critical Infrastructure: Understanding Its Component Parts, Vulnerabilities, Operating Risks, and Interdependencies Moldovyan, Nikolai: Data-driven Block Ciphers for Fast Telecommunication Systems [done Francesco Sica] Mollin, Richard A.: Fundamental Number Theory with Applications, Second Edition Mollin, Richard A.: Advanced Number Theory with Applications [done Ken Radke] Mollin, Richard A.: Codes: The Guide to Secrecy From Ancient to Modern Times Newman, Robert C.: Computer Forensics: Evidence Collection and Management Paulsen, William: Abstract Algebra. An interactive Approach Peeva, Irena: Syzygies and Hilbert Functions Roberts, Fred: Applied Combinatorics, Second Edition Sklavos, N.: Wireless Security and Cryptography: Specifications and Implementations [!2010-07-31 Aka Bile Frederic Edoukou] Smith, Jonathan D. H.: Introduction to Abstract Algebra available from May 2010 Stanoyevitch, A.: Introduction to Cryptography with Mathematical Foundations and Computer Implementations Szabo, Sandor: Factoring Groups into Subsets [2010-04-13 Vincent Immler] Talukder, Asoke K.: Architecting Secure Software Systems [done Jannik Pewny] Valiente, Gabriel: Combinatorial Pattern Matching Algorithms in Computational Biology Using Perl and R Wallis, W.D.: Introduction to Combinatorial Designs, Second Edition [done Vincent Immler] Washington, Lawrence C.: Elliptic Curves: Number Theory and Cryptography, Second Edition Xiao, Y.: Security in Distributed, Grid, Mobile, and Pervasive Computing Young, S.: The Hacker's Handbook: The Strategy Behind Breaking into and Defending Networks Zhang, Y.: Security in Wireless Mesh Networks Wiley and Sons Below you find a selection of books from Wiley and Sons. Further titles are available via Wiley and Sons' website . [2009-12-16 Safuat Hamdy] Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems, 2nd Edition You may opt out of the newsletter either by editing your contact information and preferences here . Contributions are most welcome! Please include a URL and/or e-mail addresses for any item submitted (if possible). For things that are not on the Web, please submit a one-page ASCII version. Send your contributions to newsletter (at) iacr.org . IACR contact information . Current newsletter editor is Christopher Wolf.
CommonCrawl
What is meant by *sampling* in terms of the *sampling theorem*? Let $y:\left(-\frac T2,\frac T2\right)\to\mathbb{C}$ be a square integrable function. The Fourier coefficients of $y$ are $$\underline{Y}(k):=\frac 1T\int_{-T/2}^{T/2}y(t)e^{-i\omega_kt}\;dt\;\;\;\text{with }\omega_k:=k\frac{2\pi}T$$ for $k\in\mathbb{Z}$. The Fourier polynomial of degree $n\in\mathbb{N}$ of $y$ is $$\mathcal{F}^{-1}_n[y](t):=\sum_{k=-n}^n\underline{Y}(k)e^{i\omega_kt}$$ and $$\mathcal{F}^{-1}[y]:=\lim_{n\to\infty}\mathcal{F}_n^{-1}[y]$$ is called the inverse Fourier transformation of $y$. Now, I've got two questions: What is meant by sampling (in terms of the sampling theorem)? From my understanding, if we know the period $T$, all we need to "store" are the values $\underline{Y}(k)$. We cannot store all values, so we need to choose a "large enough" $n$ and store only the values $\underline{Y}(-n),\ldots,\underline{Y}(n)$. So, where does "sampling" come into play? The only thing I could imagine is numerical integration: We consider an equidistant grid $$x_j=\left(\frac jN-1\right)\frac T2\;\;\;\text{for }j=0,\ldots,2N$$ and approximate $\underline{Y}(k)$ using the composite trapezoidal rule, i.e. $$\underline{Y}(k)\approx\frac{1}{2N}\sum_{j=0}^{2N-1}y\left(x_j\right)e^{-i\omega_kx_j}$$ By doing so, we didn't take the whole "signal" $y$ into account, but only the "sample points" $\left(x_j,y\left(x_j\right)\right)$. Is this what is meant by "sampling"? Does the sampling theorem make a statement about $n$ or $N$ or something else?

discrete-signals signal-analysis frequency-spectrum sampling

0xbadf00d

$\begingroup$ badf00d, i am working on this with a slightly different notational convention. like my $N$ will be the same as your $2N$. and i am not using "$x$" to depict "time", like $t$. note that my "$T_\text{s}$" is the same as your $\frac{T}{2N}$ or my $\frac{T}{N}$. and i am not dealing with any "composite trapezoidal rule". integrating in the continuous-time domain ($\int x(t) \ dt$) will be equivalent to summing in the discrete-time domain ($\sum x[n]$) due to the nature of the mathematics in this bandlimited and sampled situation. $\endgroup$ – robert bristow-johnson Feb 15 '15 at 21:36
which means the obvious, your function period $T$ has to be the same as $N$ times your sampling period $T_\text{s}$. $$ T = N T_\text{s} $$ now, what we know about this sampled periodic function is that $N$ samples of $x[n]$ are sufficient to tell us all about $x[n]$, and since $x(t)$ is bandlimited, $x[n]$ and the $N$ samples that fully define it, are sufficient to fully describe $x(t)$. if $x(t)$ is sufficiently bandlimited (as above), then $$ \begin{align} x(t) & = \sum\limits_{n=\infty}^{+\infty} x[n] \operatorname{sinc}\left(\frac{t-nT_\text{s}}{T_\text{s}}\right) \\ & = \sum\limits_{n=\infty}^{+\infty} x[n] \frac{\sin\left(\pi\frac{t-nT_\text{s}}{T_\text{s}}\right)}{\pi\frac{t-nT_\text{s}}{T_\text{s}}} \\ & = \sum\limits_{m=\infty}^{+\infty} \sum\limits_{n=0}^{N-1} x[n+mN] \frac{\sin\left(\pi\frac{t-(n+mN)T_\text{s}}{T_\text{s}}\right)}{\pi\frac{t-(n+mN)T_\text{s}}{T_\text{s}}} \\ & = \sum\limits_{n=0}^{N-1} \sum\limits_{m=\infty}^{+\infty} x[n] \frac{(-1)^{mN}\sin\left(\pi\frac{t-nT_\text{s}}{T_\text{s}}\right)}{\pi\frac{t-(n+mN)T_\text{s}}{T_\text{s}}} \\ & = \sum\limits_{n=0}^{N-1} x[n] \sin\left(\pi\frac{t-nT_\text{s}}{T_\text{s}}\right) \sum\limits_{m=\infty}^{+\infty} \frac{\frac{T_\text{s}}{\pi}(-1)^{mN}}{t-(n+mN)T_\text{s}} \\ \end{align} $$ $$ x(t) = \begin{cases} \sum\limits_{n=0}^{N-1} x[n] \frac{\sin\left(\pi\frac{t-nT_\text{s}}{T_\text{s}}\right)}{N\tan\left(\frac{\pi}{N}\frac{t-nT_\text{s}}{T_\text{s}}\right)}, & \text{if }N\text{ is even} \\ \sum\limits_{n=0}^{N-1} x[n] \frac{\sin\left(\pi\frac{t-nT_\text{s}}{T_\text{s}}\right)}{N\sin\left(\frac{\pi}{N}\frac{t-nT_\text{s}}{T_\text{s}}\right)}, & \text{if }N\text{ is odd} \end{cases} $$ proving the latter takes a little bit. you might recognize the $N$ odd case as the Dirichlet_kernel. the $N$ even case looks a teeny bit different. but both show exactly how the $N$ samples that fully define the sampled and periodic $x(t)$ are combined to get $x(t)$. now, since $x(t)$ is also periodic with period $T$, then $$ \begin{align} x(t) & = x\left(t+T \right) \\ & = x\left(t+N T_\text{s} \right) \\ & = \sum\limits_{k=-\infty}^{+\infty} X[k] e^{i 2 \pi \frac{k}{T} t} \\ & = \sum\limits_{k=-\infty}^{+\infty} X[k] e^{i 2 \pi \frac{k}{N T_\text{s}} t} \\ & = \sum\limits_{k=-\lfloor \frac{N}{2} \rfloor}^{+\lfloor \frac{N}{2} \rfloor} X[k] e^{i 2 \pi \frac{k}{N} \frac{t}{T_\text{s}}} \end{align} $$ where $\lfloor\cdot\rfloor$ is the $\operatorname{floor}(\cdot)$ operator and $$ \begin{align} X[k] & = \frac{1}{N} \sum\limits_{n=0}^{N-1} x[n] e^{-i 2 \pi \frac{nk}{N}} \\ & = \mathcal{DFT}\{x[n]\} \end{align} $$ there's actually something to fudge (a factor of $\frac{1}{2}$) about $X\left[\frac{N}{2}\right]$ for the $N$ even case: $$ \begin{align} X\left[-\frac{N}{2}\right] = X\left[\frac{N}{2}\right] & = \frac{1}{2} \ \frac{1}{N} \sum\limits_{n=0}^{N-1} x[n] e^{-i 2 \pi n\frac{n(N/2)}{N}} \\ & = \frac{1}{2N} \sum\limits_{n=0}^{N-1} x[n] (-1)^n \\ \end{align} $$ note that: $$ \begin{align} x[n] = x(n T_\text{s}) & = \sum\limits_{k=-\lfloor \frac{N}{2} \rfloor}^{+\lfloor \frac{N}{2} \rfloor} X[k] e^{i 2 \pi \frac{k}{N} \frac{t}{T_\text{s}}}\bigg|_{t=n T_\text{s}} \\ & = \sum\limits_{k=-\lfloor \frac{N}{2} \rfloor}^{+\lfloor \frac{N}{2} \rfloor} X[k] e^{i 2 \pi \frac{k}{N}n} \\ & = \sum\limits_{k=0}^{N-1} X[k] e^{i 2 \pi \frac{nk}{N}} \\ & = \mathcal{iDFT}\{X[k]\} \end{align} $$ if you deal with that $\frac{1}{2}$ fudging for $X\left[\frac{N}{2}\right]$ for the $N$ even case. this is because $X[k+N]=X[k]$ for all $k$. 
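A quick numerical sanity check of the odd-$N$ interpolation formula above can be done in a few lines of MATLAB; the test signal, its harmonic coefficients, and the grid below are arbitrary choices made only for illustration.

```matlab
% Numerical check of the odd-N periodic (Dirichlet-kernel) interpolation above.
% The test signal and all coefficient values are arbitrary illustrations.
Ts = 1;                 % sampling period (normalized)
N  = 9;                 % odd number of samples per period
T  = N*Ts;              % period of the signal
n  = 0:N-1;             % sample indices within one period

% real, periodic, bandlimited test signal (highest harmonic 3 <= (N-1)/2)
x  = @(t) 1 + 0.7*cos(2*pi*1*t/T) + 0.4*sin(2*pi*2*t/T) - 0.2*cos(2*pi*3*t/T);
xn = x(n*Ts);           % the N samples that should fully describe x(t)

% dense evaluation grid, offset slightly so t never coincides with a sample time
t  = linspace(0, 2*T, 1000) + 1e-6;

xhat = zeros(size(t));
for k = 1:N
    u    = (t - n(k)*Ts)/Ts;
    xhat = xhat + xn(k) * sin(pi*u) ./ (N*sin(pi*u/N));
end

max(abs(xhat - x(t)))   % should be at machine-precision level
```

With these settings the reconstruction error comes out at round-off level, consistent with the claim that the $N$ samples fully determine the bandlimited periodic $x(t)$.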
a periodic continuous-time function can be described with a countable set of Fourier coefficients. a bandlimited periodic continuous function can be described with a finite set of $N$ Fourier coefficients, just as well as it can be described with a finite set of $N$ samples. robert bristow-johnsonrobert bristow-johnson $\begingroup$ Even though I haven't checked that final formula, it's obviously true that sampling one period of a periodic band-limited signal suffices. This also leads to the remarkable result that the minimum required sampling rate for sampling a periodic band-limited signal is $f_s=0$. $\endgroup$ – Matt L. Feb 14 '15 at 9:33 $\begingroup$ Robert, I will look more closely to your argument later, but let me say this first. It seems to me that you're assuming knowledge about the signal, like its exact period, and its time delay. Given that knowledge, it may well be that you only need a finite number of samples. However, the sampling theorem (which is what the question was about) does not assume any knowledge about the signal except its bandwidth. Of course, the sampling theorem gives sufficient, not necessary conditions; in many cases you can do with much less samples and/or slower sampling rate. $\endgroup$ – MBaz Feb 14 '15 at 17:18 $\begingroup$ Robert, yes, assuming you can adjust the sampling frequency to have $T/T_s$ an integer, then the samples are periodic and you only need to keep $N$ of them. $\endgroup$ – MBaz Feb 14 '15 at 18:12 $\begingroup$ i'm starting to crap out on this, guys. so if @MBaz or MattL want to prove that Dirichlet thingie, the way to do it is, first assume without loss of generality that $T_\text{s}=1$, and start with the periodic $x[n+N]=x[n]$ and $x(t+N)=x(t)$ and make this asssumption: $$ x[n] = \begin{cases} 1, & \text{if }n=mN \quad m\in \mathbb{Z} \\ 0, & \text{otherwise} \end{cases}$$ and that $x(t)$ is real. then remember that the frequency components $X[k]$ where $k>\frac{N}{2}$ are reflected to negative frequency. for $k=\frac{N}{2}$, you have to split $X[k]$ in two. for positive and negative $f$. $\endgroup$ – robert bristow-johnson Feb 15 '15 at 21:54 $\begingroup$ @robertbristow-johnson, what Matt is doing (as I understand it) is define an "average $f_s$", which is the number of samples taken divided by the time you spend sampling. Since we take a finite number of samples, but the signal's duration is infinite, then your average sampling rate is zero. $\endgroup$ – MBaz Feb 16 '15 at 23:38 Sampling and the Fourier series are only indirectly related. Both are orthonormal expansions of a function, but they use different basis. In the Fourier series, the orthonormal basis are exponential functions. The Fourier coefficents are the magnitude and phase of the exponentials. As you say, having the coefficients is equivalent, in a certain sense, to having the actual function. However: As you have noticed, the Fourier series in general requires an infinite number of coefficients, so it may not be practical. However, the coefficients tend to zero (there is a proof of this), so you can keep a finite number of them and neglect the rest. The difference between the original function and the one re-created from the stored coefficients can be so small as to be ignored. Note that the Fourier transformation is not unique. It is unique if you limit the functions to voltages that can be created in practice. When sampling, the orthonormal basis are sinc (cardinal sine) functions. 
The coefficients of the sinc functions are the signal samples (that is, the signal's amplitude at specific instants). The sampling theorem gives sufficient conditions for a function to be expressed in terms of this basis. A nice thing about the proof is that it is constructive; that is, it recreates the function from its samples. Notes: Again, you need an infinite number of coefficients (samples). If you only keep a finite number, there will be a difference between the original function and its reconstruction. The sampling theorem assumes that the function is band-limited, which implies that it is infinite in duration. So, the theorem requires $n$ to be infinite. It makes no statement on $N$. In engineering and signal processing, we just design our systems so that the difference between the original signal and that reconstructed from its samples is too small to notice. I don't know of any results that quantify the error in terms of $N$ for general functions.

MBaz

$\begingroup$ What exactly do you mean by "the Fourier transform is not unique"? $\endgroup$ – Matt L. Feb 14 '15 at 9:35

$\begingroup$ @MattL. The Fourier transform of arbitrary time functions is not unique, in this sense: the FT of functions that differ only at individual time instants is the same. (Mathematicians say that the two functions' difference is not Lebesgue-measurable.) The same argument applies to the IFT. An example is a function that is always zero, except at time $t=1$, when its value is 5. Its FT is zero, and its IFT is zero too. Of course, the FT is unique when you consider the subset of functions that can be generated in practice. $\endgroup$ – MBaz Feb 14 '15 at 17:06

$\begingroup$ OK, so you're referring to functions that are equal "almost everywhere" having the same FT. That's clear, thanks. $\endgroup$ – Matt L. Feb 14 '15 at 17:17

$\begingroup$ Mathematicians say not that the difference is not Lebesgue measurable—non-measurability is a different, and unpleasant, kettle of fish—but rather, as @MattL. says, that the original functions are equal almost everywhere, or that the difference equals 0 almost everywhere. I would prefer to phrase this as saying that the inverse Fourier transform is not unique: there's only one Fourier transform for each function, but there are multiple different functions that have that Fourier transform. $\endgroup$ – LSpice Apr 12 '15 at 23:02

$\begingroup$ @LSpice I should have said that their difference has Lebesgue measure zero, instead of being not measurable; thanks for the correction. See definitions 2.5.2 and Theorem 6.2.12 from this book. $\endgroup$ – MBaz Apr 12 '15 at 23:34
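Returning to the numerical-integration idea in the original question: on that equidistant grid, the approximate Fourier coefficients are just a scaled DFT of the samples, up to a $(-1)^k$ factor that accounts for the half-period offset of the grid (and, because the integrand is $T$-periodic, the "composite trapezoidal rule" coincides with the plain rectangle rule there). The MATLAB sketch below, with an arbitrary test function and arbitrary sizes, illustrates this.

```matlab
% Sketch: on the equidistant grid of the question, the approximate Fourier
% coefficients are a scaled DFT of the samples (the (-1)^k undoes the grid's
% half-period offset). Test function and sizes are arbitrary.
T = 2;  N = 8;  M = 2*N;
j  = 0:M-1;
xj = (j/N - 1)*T/2;                                  % grid on [-T/2, T/2)
y  = @(t) exp(cos(2*pi*t/T)) + 0.3*sin(4*pi*t/T);    % any smooth T-periodic function
k  = 0:N-1;                                          % coefficient indices to compare

Yk_sum = zeros(size(k));                             % direct evaluation of the sum
for m = 1:numel(k)
    Yk_sum(m) = (1/M) * sum( y(xj) .* exp(-1i*2*pi*k(m)*xj/T) );
end

Yfft   = fft(y(xj))/M;                               % same samples through the FFT
Yk_fft = ((-1).^k) .* Yfft(1:N);

max(abs(Yk_sum - Yk_fft))                            % agreement to machine precision
```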
CommonCrawl
# Introduction to MATLAB and numerical methods

To get started with MATLAB, you can download the software from the MathWorks website (https://www.mathworks.com/products/get-matlab.html) and follow the installation instructions provided. Once installed, you can launch the MATLAB environment and start writing your first lines of code.

MATLAB uses a simple syntax for writing mathematical expressions and functions. For example, you can define a function to compute the square of a number as follows:

```matlab
function y = square(x)
    y = x^2;
end
```

Numerical methods are essential in the analysis of dynamical systems because they allow us to approximate the behavior of complex systems using mathematical models. In this textbook, we will cover several numerical methods for stability analysis, including the Jacobian method, the Lyapunov method, and the simulation method.

## Exercise

Write a MATLAB function that computes the factorial of a given number.

```matlab
function nfact = factorial(n)
    % Your code here
end
```

### Solution

```matlab
function nfact = factorial(n)
    % Recursive definition; note that this shadows MATLAB's built-in factorial
    if n == 0
        nfact = 1;
    else
        nfact = n * factorial(n-1);
    end
end
```

# Discrete-time dynamical systems

A discrete-time dynamical system is a mathematical model that describes the evolution of a system over discrete time intervals. Such systems are often used to model the behavior of physical systems, such as mechanical systems, electrical circuits, and biological processes.

In a discrete-time dynamical system, the state of the system at time $t$ is determined by its state at the previous time step $t-1$. The state update equation for a discrete-time system is given by:

$$ x_{t+1} = f(x_t, u_t, t) $$

where $x_t$ is the state vector at time $t$, $u_t$ is the input vector at time $t$, and $t$ is the time variable.

# Continuous-time dynamical systems

A continuous-time dynamical system is a mathematical model that describes the evolution of a system over continuous time intervals. Such systems are often used to model the behavior of physical systems, such as fluid dynamics, heat transfer, and control systems.

In a continuous-time dynamical system, the state of the system at time $t$ is determined by its state at time $t$ and the time derivative of the state. The state update equation for a continuous-time system is given by:

$$ \dot{x} = f(x, u, t) $$

where $\dot{x}$ is the time derivative of the state vector $x$, $u$ is the input vector, and $t$ is the time variable.

# Stability analysis of discrete-time systems

The Jacobian method involves computing the Jacobian matrix of the state update function and analyzing its eigenvalues and eigenvectors. If the system has a stable equilibrium point, the Jacobian matrix evaluated there will have eigenvalues whose magnitudes are less than one (i.e., they lie inside the unit circle).

The Lyapunov method involves constructing a Lyapunov function that describes the distance of the system from the equilibrium point. If the system has a stable equilibrium point, the Lyapunov function will be positive definite and decrease along trajectories.

The simulation method involves simulating the behavior of the system over a long time interval and observing its behavior. If the system converges to a stable equilibrium point, the simulation will show that the state vector converges to a constant value.

## Exercise

Use MATLAB to simulate the behavior of a discrete-time dynamical system and analyze its stability.
```matlab
% Your code here
```

### Solution

```matlab
% Simulate the behavior of the system
t = 0:0.1:100;
x = zeros(size(t));
x(1) = 1;                                 % Initial state
for i = 2:length(t)
    x(i) = discrete_time_update(x(i-1), t(i));
end

% Plot the simulation results
plot(t, x);
xlabel('Time');
ylabel('State');
title('Simulation of Discrete-Time Dynamical System');

% State update function (placed at the end of the script; in older MATLAB
% releases, save it in its own file discrete_time_update.m)
function x = discrete_time_update(x, u)
    x = 0.5*x;                            % example: a simple stable linear update (u is unused)
end
```

# Stability analysis of continuous-time systems

The Jacobian method involves computing the Jacobian matrix of the state update function and analyzing its eigenvalues and eigenvectors. If the system has a stable equilibrium point, the Jacobian matrix will have eigenvalues with negative real parts.

The Lyapunov method involves constructing a Lyapunov function that describes the distance of the system from the equilibrium point. If the system has a stable equilibrium point, the Lyapunov function will be positive definite and decrease along trajectories.

The simulation method involves simulating the behavior of the system over a long time interval and observing its behavior. If the system converges to a stable equilibrium point, the simulation will show that the state vector converges to a constant value.

## Exercise

Use MATLAB to simulate the behavior of a continuous-time dynamical system and analyze its stability.

```matlab
% Your code here
```

### Solution

```matlab
% Simulate the behavior of the system with a forward-Euler scheme
dt = 0.1;                                 % time step
t = 0:dt:100;
x = zeros(size(t));
x(1) = 1;                                 % Initial state
for i = 2:length(t)
    % Euler step: x(t+dt) = x(t) + dt * f(x(t), u(t))
    x(i) = x(i-1) + dt*continuous_time_update(x(i-1), t(i));
end

% Plot the simulation results
plot(t, x);
xlabel('Time');
ylabel('State');
title('Simulation of Continuous-Time Dynamical System');

% Time-derivative function (placed at the end of the script; in older MATLAB
% releases, save it in its own file continuous_time_update.m)
function dx = continuous_time_update(x, u)
    dx = -0.5*x;                          % example: a simple stable linear system (u is unused)
end
```

# Numerical methods for stability analysis

The Jacobian method involves computing the Jacobian matrix of the state update function and analyzing its eigenvalues and eigenvectors. If the system has a stable equilibrium point, the Jacobian matrix will have eigenvalues with negative real parts (for continuous-time systems) or eigenvalues with magnitude less than one (for discrete-time systems).

The Lyapunov method involves constructing a Lyapunov function that describes the distance of the system from the equilibrium point. If the system has a stable equilibrium point, the Lyapunov function will be positive definite and decrease along trajectories.

The simulation method involves simulating the behavior of the system over a long time interval and observing its behavior. If the system converges to a stable equilibrium point, the simulation will show that the state vector converges to a constant value.

# Simulation of dynamical systems in MATLAB

To simulate a discrete-time dynamical system in MATLAB, you can define a function that computes the state update and then use a loop to simulate the system over a time interval. For example:

```matlab
% Simulate the behavior of the system
t = 0:0.1:100;
x = zeros(size(t));
x(1) = 1;                                 % Initial state
for i = 2:length(t)
    x(i) = discrete_time_update(x(i-1), t(i));
end

% Plot the simulation results
plot(t, x);
xlabel('Time');
ylabel('State');
title('Simulation of Discrete-Time Dynamical System');

% State update function (placed at the end of the script; in older MATLAB
% releases, save it in its own file discrete_time_update.m)
function x = discrete_time_update(x, u)
    x = 0.5*x;                            % example: a simple stable linear update (u is unused)
end
```

To simulate a continuous-time dynamical system in MATLAB, you can define a function that computes the time derivative of the state and then use a loop to simulate the system over a time interval.
For example:

```matlab
% Simulate the behavior of the system with a forward-Euler scheme
dt = 0.1;                                 % time step
t = 0:dt:100;
x = zeros(size(t));
x(1) = 1;                                 % Initial state
for i = 2:length(t)
    % Euler step: x(t+dt) = x(t) + dt * f(x(t), u(t))
    x(i) = x(i-1) + dt*continuous_time_update(x(i-1), t(i));
end

% Plot the simulation results
plot(t, x);
xlabel('Time');
ylabel('State');
title('Simulation of Continuous-Time Dynamical System');

% Time-derivative function (placed at the end of the script; in older MATLAB
% releases, save it in its own file continuous_time_update.m)
function dx = continuous_time_update(x, u)
    dx = -0.5*x;                          % example: a simple stable linear system (u is unused)
end
```

# Applications of stability analysis in engineering

Stability analysis is a fundamental concept in engineering that is used to ensure the desired long-term behavior of various systems, such as mechanical systems, electrical circuits, and control systems. Typical applications include:

- The design of mechanical systems to ensure their stability under various operating conditions.
- The analysis of electrical circuits to ensure their stability and prevent oscillations or failure.
- The design of control systems to ensure the stability of the closed-loop system.

# Case studies and examples

In this section, we consider case studies such as:

- The stability analysis of a mechanical system, such as a vibrating structure or a suspension bridge.
- The stability analysis of an electrical circuit, such as a power grid or a control system.
- The stability analysis of a control system, such as a robotic arm or a vehicle guidance system.

These case studies will demonstrate the importance of stability analysis in engineering and its role in ensuring the reliability and safety of various systems.
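To make the Jacobian (linearization) test described in the stability-analysis sections above concrete, here is a minimal MATLAB sketch; both example systems and every numerical value in it are invented purely for illustration.

```matlab
% Minimal sketch of the Jacobian (linearization) stability test.
% Both example systems and all numbers are illustrative only.

% Discrete-time example: x(t+1) = f(x(t)) with
%   f(x) = [0.5*x(1) + 0.1*x(2)^2;  0.3*x(1)*x(2) + 0.2*x(2)]
% The origin is an equilibrium point; the Jacobian df/dx evaluated there is:
A_d = [0.5 0; 0 0.2];
stable_discrete = all(abs(eig(A_d)) < 1)      % discrete test: inside the unit circle

% Continuous-time example: dx/dt = A_c*x (e.g., a damped oscillator)
A_c = [0 1; -2 -3];
stable_continuous = all(real(eig(A_c)) < 0)   % continuous test: negative real parts
```

With these made-up matrices both tests return true; changing, say, the entry 0.5 in A_d to 1.2 would place an eigenvalue outside the unit circle and make the discrete-time equilibrium unstable.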
Textbooks
MCRF Home Global controllability and stabilizability of Kawahara equation on a periodic domain June 2015, 5(2): 359-376. doi: 10.3934/mcrf.2015.5.359 Feedback controls to ensure global solutions and asymptotic stability of Markovian switching diffusion systems Guangliang Zhao 1, , Fuke Wu 2, and George Yin 3, GE Global Research, 1 Research Circle, Niskayuna, NY 12309, United States School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, Hubei 430074 Department of Mathematics, Wayne State University, Detroit, Michigan 48202 Received January 2014 Revised February 2014 Published April 2015 To treat networked systems involving uncertainty due to randomness with both continuous dynamics and discrete events, this paper focuses on diffusions modulated by a continuous-time Markov chain. In our paper [19], we considered ordinary differential equations with Markovian switching. This paper further treats more complex cases, namely, stochastic differential equations with Markovian switching. Our goal is to stabilize the systems under consideration. One of the difficulties is that the systems grow much faster than the allowable rates in the literature of stochastic differential equations. As a result, the underlying systems have finite explosion time. To overcome the difficulties, we develop feedback controls to extend the local solutions to global solutions and to stabilize the resulting systems. The feedback controls are Brownian type of perturbations. We establish the existence of global solution, prove the stability of the resulting systems, obtain boundedness in probability as $t\to\infty$, and provide sufficient conditions for almost sure stability. Then we present numerical examples to illustrate the main results. Keywords: stabilization, feedback control., regularity, stability, Regime-switching diffusion. Mathematics Subject Classification: Primary: 60J60, 60J27, 93E15; Secondary: 93E0. Citation: Guangliang Zhao, Fuke Wu, George Yin. Feedback controls to ensure global solutions and asymptotic stability of Markovian switching diffusion systems. Mathematical Control & Related Fields, 2015, 5 (2) : 359-376. doi: 10.3934/mcrf.2015.5.359 J. A. D. Appleby and X. Mao, Stochastic stabilisation of functional differential equations, Systems & Control Letters, 54 (2005), 1069-1081. doi: 10.1016/j.sysconle.2005.03.003. Google Scholar J. A. D. Appleby, X. Mao and A. Rodkina, Stabilization and destabilization of nonlinear differential equations by noise, IEEE Trans. Automat. Control, 53 (2008), 683-691. doi: 10.1109/TAC.2008.919255. Google Scholar L. Arnold, H. Crauel and V. Wihstusz, Stabilization of linear system by noise, SIAM J. Control Optim., 21 (1983), 451-461. doi: 10.1137/0321027. Google Scholar A. Bahar and X. Mao, Stochastic delay Lotka-Volterra model, Journal of Mathematical Analysis and Applications, 292 (2004), 364-380. doi: 10.1016/j.jmaa.2003.12.004. Google Scholar F. Deng, Q. Luo, X. Mao and S. Pang, Noise suppresses or expresses exponential growth, Systems & Control Letters, 57 (2008), 262-270. doi: 10.1016/j.sysconle.2007.09.002. Google Scholar M. K. Ghosh, A. Arapostathis and S. I. Marcus, Ergodic control of switching diffusions, SIAM J. Control Optim., 35 (1997), 1952-1988. doi: 10.1137/S0363012996299302. Google Scholar R. Z. Khasminskii, Stochastic Stability of Differential Equations, $2^{nd}$ edition, Springer, Berlin, 2012. doi: 10.1007/978-3-642-23280-0. Google Scholar R. Z. Khasminskii and G. 
Yin, Asymptotic behavior of parabolic equations arising from null-recurrent diffusions, J. Differential Eqs., 161 (2000), 154-173. doi: 10.1006/jdeq.1999.3647. Google Scholar H. J. Kushner and G. Yin, Stochastic Approximation and Recursive Algorithms and Applications, $2^{nd}$ edition, Springer-Verlag, New York, NY, 2003. doi: 10.1007/b97441. Google Scholar B. Lian and S. Hu, Asymptotic behaviour of the stochastic Gilpin-Ayala competition models, J. Math. Anal. Appl., 339 (2008), 419-428. doi: 10.1016/j.jmaa.2007.06.058. Google Scholar R. Liptser and A. N. Shiryaev, Theory of Martingale, Kluwer Academic Publishers, Dordrecht, 1989. doi: 10.1007/978-94-009-2438-3. Google Scholar X. Mao, Stability of stochastic differential equations with Markovian switching, Stochastic Process. Appl., 79 (1999), 45-67. doi: 10.1016/S0304-4149(98)00070-2. Google Scholar X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, UK, 2006. doi: 10.1142/9781860948848_fmatter. Google Scholar X. Mao, Stochastic Differential Equations and Applications, $2^{nd}$ edition, Horwood, Chichester, UK, 2008. doi: 10.1533/9780857099402. Google Scholar A. V. Skorokhod, Asymptotic Methods in the Theory of Stochastic Differential Equations, Amer. Math. Soc., Providence, RI, 1989. Google Scholar F. Wu and S. Hu, Suppression and stabilisation of noise, Internat. J. Control, 82 (2009), 2150-2157. doi: 10.1080/00207170902968108. Google Scholar G. Yin, X. R. Mao, C. Yuan and D. Cao, Approximation methods for hybrid diffusion systems with state-dependent switching processes: Numerical algorithms and existence and uniqueness of solutions, SIAM J. Math. Anal., 41 (2010), 2335-2352. doi: 10.1137/080727191. Google Scholar G. Yin and C. Zhu, Hybrid Switching Diffusions: Properties and Applications, Springer, New York, 2010. doi: 10.1007/978-1-4419-1105-6. Google Scholar G. Yin, G. Zhao and F. Wu, Regularization and stabilization of randomly switching dynamic systems, SIAM J. Appl. Math., 72 (2012), 1361-1382. doi: 10.1137/110851171. Google Scholar C. Zhu and G. Yin, On competitive Lotka-Volterra model in random environments, J. Math. Anal. Appl., 357 (2009), 154-170. doi: 10.1016/j.jmaa.2009.03.066. Google Scholar Jiaqin Wei. Time-inconsistent optimal control problems with regime-switching. Mathematical Control & Related Fields, 2017, 7 (4) : 585-622. doi: 10.3934/mcrf.2017022 Wensheng Yin, Jinde Cao, Yong Ren. Inverse optimal control of regime-switching jump diffusions. Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021034 Fuke Wu, George Yin, Zhuo Jin. Kolmogorov-type systems with regime-switching jump diffusion perturbations. Discrete & Continuous Dynamical Systems - B, 2016, 21 (7) : 2293-2319. doi: 10.3934/dcdsb.2016048 Zhuo Jin, Linyi Qian. Lookback option pricing for regime-switching jump diffusion models. Mathematical Control & Related Fields, 2015, 5 (2) : 237-258. doi: 10.3934/mcrf.2015.5.237 Jun Li, Fubao Xi. Exponential ergodicity for regime-switching diffusion processes in total variation norm. Discrete & Continuous Dynamical Systems - B, 2022 doi: 10.3934/dcdsb.2021309 Zhuo Jin, George Yin, Hailiang Yang. Numerical methods for dividend optimization using regime-switching jump-diffusion models. Mathematical Control & Related Fields, 2011, 1 (1) : 21-40. doi: 10.3934/mcrf.2011.1.21 Chao Xu, Yinghui Dong, Zhaolu Tian, Guojing Wang. Pricing dynamic fund protection under a Regime-switching Jump-diffusion model with stochastic protection level. 
Journal of Industrial & Management Optimization, 2020, 16 (6) : 2603-2623. doi: 10.3934/jimo.2019072 Kehan Si, Zhenda Xu, Ka Fai Cedric Yiu, Xun Li. Open-loop solvability for mean-field stochastic linear quadratic optimal control problems of Markov regime-switching system. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021074 Tak Kuen Siu, Yang Shen. Risk-minimizing pricing and Esscher transform in a general non-Markovian regime-switching jump-diffusion model. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2595-2626. doi: 10.3934/dcdsb.2017100 Christoforidou Amalia, Christian-Oliver Ewald. A lattice method for option evaluation with regime-switching asset correlation structure. Journal of Industrial & Management Optimization, 2021, 17 (4) : 1729-1752. doi: 10.3934/jimo.2020042 Mourad Bellassoued, Raymond Brummelhuis, Michel Cristofol, Éric Soccorsi. Stable reconstruction of the volatility in a regime-switching local-volatility model. Mathematical Control & Related Fields, 2020, 10 (1) : 189-215. doi: 10.3934/mcrf.2019036 Engel John C Dela Vega, Robert J Elliott. Conditional coherent risk measures and regime-switching conic pricing. Probability, Uncertainty and Quantitative Risk, 2021, 6 (4) : 267-300. doi: 10.3934/puqr.2021014 Martin Gugat, Mario Sigalotti. Stars of vibrating strings: Switching boundary feedback stabilization. Networks & Heterogeneous Media, 2010, 5 (2) : 299-314. doi: 10.3934/nhm.2010.5.299 Ping Chen, Haixiang Yao. Continuous-time mean-variance portfolio selection with no-shorting constraints and regime-switching. Journal of Industrial & Management Optimization, 2020, 16 (2) : 531-551. doi: 10.3934/jimo.2018166 Yinghui Dong, Kam Chuen Yuen, Guojing Wang. Pricing credit derivatives under a correlated regime-switching hazard processes model. Journal of Industrial & Management Optimization, 2017, 13 (3) : 1395-1415. doi: 10.3934/jimo.2016079 Jiaqin Wei, Zhuo Jin, Hailiang Yang. Optimal dividend policy with liability constraint under a hidden Markov regime-switching model. Journal of Industrial & Management Optimization, 2019, 15 (4) : 1965-1993. doi: 10.3934/jimo.2018132 Jiapeng Liu, Ruihua Liu, Dan Ren. Investment and consumption in regime-switching models with proportional transaction costs and log utility. Mathematical Control & Related Fields, 2017, 7 (3) : 465-491. doi: 10.3934/mcrf.2017017 Daniel Franco, Chris Guiver, Phoebe Smith, Stuart Townley. A switching feedback control approach for persistence of managed resources. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021109 Rohit Gupta, Farhad Jafari, Robert J. Kipka, Boris S. Mordukhovich. Linear openness and feedback stabilization of nonlinear control systems. Discrete & Continuous Dynamical Systems - S, 2018, 11 (6) : 1103-1119. doi: 10.3934/dcdss.2018063 Elena Braverman, Alexandra Rodkina. Stabilization of difference equations with noisy proportional feedback control. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2067-2088. doi: 10.3934/dcdsb.2017085 Guangliang Zhao Fuke Wu George Yin
CommonCrawl
Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals Culture modulates face scanning during dyadic social interactions Jennifer X. Haensel, Matthew Danvers, … Atsushi Senju A review of theories and methods in the science of face-to-face social interaction Lauren V. Hadley, Graham Naylor & Antonia F. de C. Hamilton Individual differences and the multidimensional nature of face perception David White & A. Mike Burton Subjectivity and complexity of facial attractiveness Miguel Ibáñez-Berganza, Ambra Amico & Vittorio Loreto Facial recognition technology can expose political orientation from naturalistic facial images Michal Kosinski Different colour predictions of facial preference by Caucasian and Chinese observers Yan Lu, Kaida Xiao, … Sophie Wuerger Africans and Europeans differ in their facial perception of dominance and sex-typicality: a multidimensional Bayesian approach Vojtěch Fiala, Petr Tureček, … Karel Kleisner Pervasive influence of idiosyncratic associative biases during facial emotion recognition Marwa El Zein, Valentin Wyart & Julie Grèzes Conceptual knowledge predicts the representational structure of facial emotion perception Jeffrey A. Brooks & Jonathan B. Freeman Soheil Keshmiri ORCID: orcid.org/0000-0003-0854-03541, Masahiro Shiomi1, Kodai Shatani1,2, Takashi Minato1 & Hiroshi Ishiguro1,2 Social and cognitive psychology provide a rich map of our personality landscape. What appears to be unexplored is the correspondence between these findings and our behavioural responses during day-to-day life interaction. In this article, we utilize cluster analysis to show that the individuals' facial pre-touch space can be divided into three well-defined subspaces and that within the first two immediate clusters around the face area such distance information significantly correlate with their openness in the five-factor model (FFM). In these two clusters, we also identify that the individuals' facial pre-touch space can predict their level of openness that are further categorized into six distinct levels with a highly above chance accuracy. Our results suggest that such personality factors as openness are not only reflected in individuals' behavioural responses but also these responses allow for a fine-grained categorization of individuals' personality. Personality, with its signatures already etched on our brain1, is what defines us as individuals and determines our responses to psychological stressors2. Recent findings on its traits3, types4, and neural correlates5 have substantially advanced our understanding about individuality6 that can be reliably identified across different languages and cultures7. For instance, the big-5 or five-factor-model (FFM)8 has been shown to provide a good predictor for such patterns of behaviour as well-being and mental health, job performance and marital relations9, as well as the clinical assessments of personality disorders10. In this respect, there is ample evidence that point at the effect of personality on our social development11,12 and embodied interactions13,14,15 that is not affected by the nature of interacting agency16. These observations beg the question of whether personality also influences such behavioural responses as personal space17 and interpersonal distance18,19. The significance of such a scrutiny is clarified by considering the findings that emphasize the positive socioemtional effect of physical interaction on our wellbeing20,21,22,23,24. 
However, unlike the findings that identify the correspondence between the body and such internal states as emotions25,26,27, the lack of consensus on the interplay between personality and personal space28,29 does not permit an informed conclusion on the influence of personality traits on our behavioural responses. In this article, we address this shortcoming through cluster analysis of individuals' facial pre-touch distance. We consider facial-area touch interaction, as opposed to other body parts that are more openly shared during social interactions (e.g., shoulder patting), due to the higher sensitivity of people around their face, which makes the facial boundary play a substantial role in understanding people's behavioural responses within the context of touch interaction. We show that individuals' facial pre-touch space can be divided into three well-defined subspaces. Within the first two immediate clusters around the face area, we identify that such distance information significantly correlates with individuals' openness in FFM. We also show that individuals' facial pre-touch space can predict their level of openness, further categorized into six distinct levels, with accuracy well above chance. Our results suggest that such personality factors as openness are not only reflected in individuals' behavioural responses but that these responses also allow for a fine-grained categorization of individuals' personality.

Fifty younger adults (M = 21.83, SD = 1.53) participated in our experiment. These individuals were paired into four distinct categories: female touchers and female evaluators (FF), female touchers and male evaluators (FM), male touchers and female evaluators (MF), and male touchers and male evaluators (MM). Data from three participants were not usable and therefore we excluded their corresponding two pairs from further analyses. This experiment was carried out with written informed consent from all subjects. We recruited the participants through a local commercial recruiting website. Our participants were not limited to university students and came from different occupational backgrounds. This study was carried out in accordance with the recommendations of the ethical committee of the Advanced Telecommunications Research Institute International (ATR) with written informed consent from all subjects in accordance with the Declaration of Helsinki. The protocol was approved by the ATR ethical committee (approval code: 17-601-4).

We conducted a facial pre-touch distance experiment to study whether individuals' facial-area pre-touch space can predict their personality traits in FFM. For this purpose, we acquired the facial pre-touch distances measured between the hand of a toucher and the face of a person who was about to be touched (the evaluator). Figure 1 shows an instance of the experiment. The evaluator was seated on a chair in the middle of the experimental room and the toucher stood close to the evaluator at a distance adjusted to the arm's length of each toucher in our experiment. The nine approaching positions from which the toucher reached for the face area of the evaluator are shown in this figure (positions 0 through 8). In our experimental setup, the touchers slowly stretched their hand toward the evaluators' face. While doing so, they freely decided their initial hand position and their approaching angle.
When the evaluators felt that the touchers' hand was exceeding their comfort zone and wanted them to stop, they clicked a mouse button whose clicking sound was audible to the touchers. We instructed the touchers to immediately stop moving their hand any closer to the evaluators' face once they heard the mouse-click sound. We then measured the distance between the touchers' hand and the evaluators' face and used these measured distances as the minimum comfortable pre-touch distance of the individuals (i.e., their behaviour-based facial pre-touch boundary). We did not fix the number of pre-touch interactions and allowed the participants to continue as long as their allocated time permitted. Each pair of toucher and evaluator participated in a two-hour trial during which one of them played the role of the evaluator for the first hour and the other was the toucher (i.e., approximately 6.67 minutes per touch-interaction spot in Fig. 1); they then switched their roles during the second hour. While interacting, we asked the participants to look at the center of the approaching hand from their own perspective (i.e., the palm of the hand for the evaluator and the back of the hand for the toucher), to keep a neutral facial expression, and to suppress reactions toward the touch during their interaction. The average number of trials per participant was M = 288.02 (SD = 78.02, CI = [265.11 310.93]).

Predetermined toucher-evaluator interaction positions. In this setting, the toucher (i.e., T) moves along the positions 0 through 8 and stretches his hand toward the face of the evaluator (i.e., E) who is seated in the middle. The two Kinect V2 sensors mounted behind the evaluator collect the joint and the head positions of the toucher and the evaluator. The locations of the two Kinect V2 sensors that were mounted behind the evaluators' seat to automatically track the touchers' hand and the evaluator's face positions are visible in this figure.

We used two Kinect V2 sensors that were mounted behind the evaluators' seat (Fig. 1) to track the touchers' hand and the evaluators' face positions. We collected the 3D positions of each joint of the touchers (including the center of their hands) and the 3D head position of the evaluators. We also recorded the timing of the evaluators' mouse clicks that signalled the touchers to stop moving their hand any closer to the evaluators' face. In order to calculate the evaluators' facial pre-touch distances (in cm), we subtracted the size of the touchers' hand (measured prior to the commencement of the experiment) from the average Japanese face size (i.e., 9.0 cm for females and 10.0 cm for males30). Giancola et al.31 suggested that Kinect sensors are suitable for applications that do not require joint-position accuracy finer than a few cm. However, their study focused on the accuracy of a whole-body tracking algorithm in an upper-limb rehabilitation scenario. Our experiment differed from their setting in that we considered the interaction space between the touchers' hand and the evaluators' face. Therefore, we employed (unlike Giancola et al.31) two Kinect sensors for data acquisition, thereby bypassing the use of markers on the touchers' hand and the evaluators' face to prevent their potential confounding effect on participants' pre-touch feelings. To increase the accuracy of the detected joint positions, we further calibrated the relative positions of these two sensors and used their absolute positions to integrate their joint-position data.
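To make the distance measurement concrete, the following minimal sketch computes a single pre-touch distance from one pair of tracked 3D positions. The array values, the hand size, and the way the size correction is split between hand and face are illustrative assumptions only; they are not the study's actual processing pipeline.

```python
import numpy as np

# Hypothetical merged Kinect measurements (cm) at the moment of the mouse click:
# the toucher's hand-centre joint and the evaluator's head joint in one frame.
hand_xyz = np.array([35.0, 12.0, 80.0])
face_xyz = np.array([10.0, 15.0, 75.0])

# Centre-to-centre Euclidean distance between the two tracked joints.
raw_distance = float(np.linalg.norm(hand_xyz - face_xyz))

# Placeholder size correction: the study adjusts the joint-to-joint measurement
# using each toucher's measured hand size and an average face size
# (9.0 cm female / 10.0 cm male); the exact offsets used here are assumptions.
hand_size_cm = 18.0
face_size_cm = 10.0
pre_touch_distance_cm = raw_distance - hand_size_cm / 2.0 - face_size_cm / 2.0

print(round(pre_touch_distance_cm, 1))
```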
In the event of one Kinect sensor's failure, we used the other sensor if its estimates were continuous and stable. To test for the instrument's reliability, we used a Japanese version of the Ten-Item Personality Inventory (TIPI-J)32. Since the TIPI-J has only two items for each domain, the authors in32 used within-scale inter-item correlations, rather than Cronbach's alpha coefficients33, for evaluating the internal consistency of each scale. Therefore, we did not evaluate Cronbach's alpha, but we believe that the validity of the TIPI-J was already established by its original authors.

We first utilized the Kruskal-Wallis test to verify that there was no effect of the four paired gender groups (i.e., FF, FM, MF, and MM) on participants' facial pre-touch distances. Another factor that needed further verification was the potential effect of the familiarity between the pairs of interacting participants. Specifically, it was important to determine whether the facial space between these individuals shrank as they interacted throughout their session. For this purpose, we used the averages of the first and the last 10 facial pre-touch distance measurements of each participant and applied the Wilcoxon rank-sum test on these two sets of average distances. We found that the effects of gender and of the familiarity between interacting pairs were non-significant (for details, see supplementary material (SM)).

Our analysis of the potential correspondence between facial pre-touch distance and the FFM personality traits included three steps: (1) cluster analysis of the participants' facial pre-touch distances to determine their potential spatial clusters around the face area, (2) Spearman correlation between the pre-touch distances of these clusters and the individuals' FFM personality scores, and (3) classification of the individuals' personality traits based on the results of the correlation analysis in step (2).

Cluster analysis of the facial pre-touch distances

To determine whether the individuals' facial pre-touch distances had a potential spatial pattern around the face area, we applied cluster analysis on these pre-touch distances. To choose between parametric (e.g., Gaussian mixture model (GMM)) and non-parametric (e.g., K-means algorithm) clusterings, we first applied the Lilliefors test with Monte Carlo approximation to determine whether individuals' facial pre-touch distances (both the actual values and their log-transformation) followed a normal distribution. The test rejected normality at the 5.0% significance level. Therefore, we adopted non-parametric analyses in the present study. We used the K-means algorithm34 for cluster analysis of the participants' facial-area pre-touch distances. We applied this clustering step on the entire pre-touch distance data (i.e., all the participants combined). The basic principle underlying this algorithm is to group the data points into a specified number of clusters in such a way that the Euclidean distance between the members of each cluster and their corresponding cluster center is minimized. We used participants' pre-touch distances (in cm) along with the azimuth and elevation angles (in degrees) associated with these distances as inputs to the K-means algorithm. In order to determine the number of clusters, we utilized the Akaike and Bayesian information criteria (AIC and BIC) and checked cluster numbers K = 1, …, 5. Both AIC and BIC indicated that K = 3 best suited our data. Therefore, we used this value for clustering the participants' facial pre-touch distances.
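The clustering step just described can be sketched as follows. The synthetic data, the choice of scikit-learn, and the spherical-Gaussian form of the AIC/BIC scores are assumptions for illustration; the paper does not state which AIC/BIC formulation was used for K-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_aic_bic(X, k, random_state=0):
    """Fit K-means and score it with a simple spherical-Gaussian AIC/BIC.

    This is one common approximation for model selection with K-means,
    not a reproduction of the authors' code.
    """
    n, d = X.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
    sse = km.inertia_                        # within-cluster sum of squares
    sigma2 = sse / (n * d)                   # pooled spherical variance
    loglik = -0.5 * n * d * (np.log(2 * np.pi * sigma2) + 1.0)
    n_params = k * d + 1                     # centroids plus one shared variance
    aic = 2 * n_params - 2 * loglik
    bic = n_params * np.log(n) - 2 * loglik
    return km, aic, bic

# Hypothetical feature matrix: one row per pre-touch trial with
# [distance (cm), azimuth (deg), elevation (deg)]; the three distance bands
# are loosely inspired by the clusters reported in the Results.
rng = np.random.default_rng(0)
bands = [(34.4, 7.2), (15.9, 6.0), (5.9, 4.0)]
dist = np.concatenate([rng.normal(m, s, 400) for m, s in bands])
azim = rng.uniform(-51, 45, dist.size)
elev = rng.uniform(-64, 48, dist.size)
X = np.column_stack([dist, azim, elev])

scores = {k: kmeans_aic_bic(X, k)[1:] for k in range(1, 6)}
best_k = min(scores, key=lambda k: scores[k][1])   # choose K by the BIC score
print(scores, best_k)
```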
Correlation analysis of the facial pre-touch distances and FFM scores

We used the resulting three clusters and performed Spearman correlation between the pre-touch distances assigned to each of these clusters and the corresponding FFM scores (i.e., extraversion, agreeableness, conscientiousness, openness, and neuroticism) of the participants. Specifically, we first computed the average facial pre-touch distance of each individual in each cluster and then used these average distances along with the FFM scores, which take real values in the interval [1, 7] (e.g., openness = 3.78), for the correlation analysis. We found that the participants' openness scores and their pre-touch distances showed significant anti-correlation in the first two immediate clusters around the face area. To further verify the observed anti-correlations in these two clusters, we computed their 95.0% bootstrap (10,000 rounds) confidence intervals. For the bootstrap test, we considered the null hypothesis H0: there is no correlation between the individuals' facial pre-touch distances and their openness scores and tested it against the alternative hypothesis H1: there is a significant correlation between the individuals' facial pre-touch distances and their openness scores. We reported the mean, standard deviation, and the 95.0% confidence interval for these tests. We also computed the p-value of these tests as the fraction of the distribution that was more extreme than the actually observed anti-correlation values. For this purpose, we performed a two-tailed test in which we used the absolute values so that both the positive and the negative correlations were accounted for.
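A minimal sketch of the percentile-bootstrap confidence interval for the Spearman correlation described above is given below. The per-participant values are hypothetical placeholders, and the authors' exact implementation may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman_ci(x, y, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the Spearman correlation between two
    paired samples; a sketch of the resampling test described in the text."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))   # resample pairs with replacement
        rho, _ = spearmanr(x[idx], y[idx])
        boot[b] = rho
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return boot.mean(), boot.std(), (lo, hi)

# Hypothetical per-participant values for one cluster: mean pre-touch
# distance (cm) and FFM openness score in [1, 7].
mean_distance = np.array([28.1, 22.4, 30.5, 19.8, 25.2, 27.3, 21.0, 24.6])
openness      = np.array([ 3.5,  5.0,  3.0,  6.5,  4.5,  4.0,  5.5,  4.8])

print(bootstrap_spearman_ci(mean_distance, openness, n_boot=2000))
```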
Classification of the individuals' personality traits

Since we found that the participants' openness scores and their pre-touch distances showed significant anti-correlation in the first two immediate clusters around the face area, we excluded the outermost cluster around the face area and primarily used the other two clusters (for the correlation results associated with the third cluster, as well as the FFM scores other than openness, see SM). Since we wanted to determine whether it was possible to determine the level of openness of an individual based on their measured facial pre-touch distances, we first grouped the participants into six openness levels based on their openness scores, which take real values in the interval [1, 7] (e.g., openness = 3.78). We calculated these groups using the following boundaries:

$$openness=\begin{cases}1, & \text{if } score \le 2.0,\\ 2, & \text{if } 2.0 < score \le 3.0,\\ 3, & \text{if } 3.0 < score \le 4.0,\\ 4, & \text{if } 4.0 < score \le 5.0,\\ 5, & \text{if } 5.0 < score \le 6.0,\\ 6, & \text{if } score > 6.0.\end{cases}$$

We then used these six groups and applied nine different classification methods to determine the utility of the participants' pre-touch distance information in predicting their openness level in the first two immediate clusters around the face area. They were the support vector classifier (SVC), quadratic discriminant analysis (QDA), AdaBoost, logistic regression (LR), naive Bayes (NB), random forest (RF), decision tree (DT), k-nearest-neighbour (KNN), and linear discriminant analysis (LDA).

We used the participants' pre-touch distances (in cm) along with their azimuth and elevation angles (in degrees) as input features to these algorithms. The preprocessing of the models' input features included the column-wise scaling of these features within [0, 1] using \(\frac{f-min({C}_{i})}{max({C}_{i})-min({C}_{i})}\), where \(C_i\), i = 1, …, 3, refers to the ith column in the feature vector (i.e., C1 and C2 for the azimuth and elevation angles and C3 for the pre-touch distance) and f identifies a specific feature value that is scaled. The output from these classifiers was the predicted openness level of the participants (i.e., levels 1 through 6). Given the six levels of openness, the chance-level accuracy was ≈16.67%.

For comparison of the classifiers' accuracy, we performed 200 simulation runs in which we randomly split the pre-touch distances (per cluster) into 70.0% train and 30.0% test sets. We also ensured that a balanced proportion of each of the six labels was split between these train and test sets. In each run, we used the same split of train and test sets and applied the above nine classifiers. We used the train set for training these classifiers and the test set to compute their prediction accuracy, precision, recall, and F1-score. We then used the 200 predictions by each of these algorithms and applied Friedman's test, followed by post hoc Wilcoxon signed-rank tests, to determine the classifier with the highest accuracy. Our results indicated that KNN significantly outperformed the other classifiers, which we further verified by computing the 99.0% bootstrap (10,000 rounds) confidence intervals of the accuracies of these models. For the bootstrap test, we considered the null hypothesis H0: there is no difference between the average accuracy of KNN and the other models and tested it against the alternative hypothesis H1: KNN's average accuracy is significantly higher than those of the other models. We reported the mean, standard deviation, and the 99.0% confidence interval for these tests. Therefore, we adopted KNN for our main analysis (for details of this comparative analysis, see SM).

We used the KNN predictions during the 200 simulation runs (per cluster) and applied the Kruskal-Wallis test to determine whether the KNN accuracy was affected by the different levels of participants' openness. This was followed by post hoc Wilcoxon rank-sum tests. We also computed their 99.0% bootstrap (10,000 rounds) confidence intervals. For the bootstrap test, we considered the null hypothesis H0: KNN's average accuracy is the same between different openness levels and tested it against the alternative hypothesis H1: KNN's average accuracy significantly differs between different openness levels. We reported the mean, standard deviation, and the 99.0% confidence interval for these tests.

For the Kruskal-Wallis and Friedman's tests, we reported the effect size \(r=\sqrt{\frac{{\chi }^{2}}{N}}\)35, with N denoting the sample size and χ2 the respective test statistic. In the case of the Wilcoxon tests, we used \(r=\frac{W}{\sqrt{N}}\)36 as the effect size, with W denoting the Wilcoxon statistic and N the sample size. All results reported were Bonferroni corrected. All analyses were carried out in Python 2.7 and Matlab 2016a. We used Raincloud plots37 for visualization of the classification accuracies.
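The classification pipeline described above can be sketched as follows, with KNN as the final model. The synthetic data, the choice of scikit-learn, the number of neighbours, and the reduced number of runs are illustrative assumptions; this is not a reproduction of the study's own code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def openness_level(score):
    """Map a real-valued openness score in [1, 7] onto the six levels above."""
    edges = [2.0, 3.0, 4.0, 5.0, 6.0]
    return 1 + sum(score > e for e in edges)

def min_max_scale(X):
    """Column-wise (f - min) / (max - min) scaling, as in the text."""
    X = np.asarray(X, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

# Hypothetical trial-level features [azimuth (deg), elevation (deg), distance (cm)]
# and the openness scores of the corresponding evaluators.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(-50, 45, 600),
                     rng.uniform(-60, 48, 600),
                     rng.uniform(2, 45, 600)])
scores = rng.uniform(1.0, 7.0, 600)
y = np.array([openness_level(s) for s in scores])

accs = []
for run in range(20):                      # the paper uses 200 runs; 20 here for brevity
    X_tr, X_te, y_tr, y_te = train_test_split(
        min_max_scale(X), y, test_size=0.30, stratify=y, random_state=run)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    accs.append(accuracy_score(y_te, knn.predict(X_te)))

print(np.mean(accs), np.std(accs))         # compare against the ~16.67% chance level
```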
Facial pre-touch clusters

We found that the actual (Fig. 2(A)) and log-transformed (Fig. 2(B)) facial pre-touch distances were not normally distributed (at the 5.0% significance level; actual: p < 0.001, test-statistics = 0.07, Mactual = 20.55, SDactual = 10.24, CIactual = [20.40 20.69]; log-transformed: p < 0.001, test-statistics = 0.05, Mlog−transformed = 2.88, SDlog−transformed = 0.57, CIlog−transformed = [2.88 2.89]). Figure 2(C) shows the 3D grid of the individuals' facial personal space, mapped along the azimuth and elevation angles associated with these distances around the face area. These angles were within the intervals (in degrees) azimuth ∈ [−51.30 44.47] and elevation ∈ [−63.84 48.30].

Facial pre-touch data of all the participants. (A) Distribution of actual facial pre-touch distances (in cm). (B) Distribution of log-transformed facial pre-touch distances. (C) 3D map of facial pre-touch distances in which the individuals' preferential facial personal space is shown along the z-axis. The schematic diagram of the face direction is shown under this subplot. (D) Akaike (AIC in red) and Bayesian (BIC in blue) information criteria unanimously identify K = 3 as the best number of clusters for the facial personal space. Their values are: AIC = [12.034, 12.034, 11.986, 11.991, 11.993] and BIC = [12.034, 12.034, 11.987, 11.991, 11.993]. (E) 3D facial pre-touch distance clusters: C1 (red), C2 (green), and C3 (blue). The schematic diagram of the face direction is shown under this subplot. (F) 2D facial pre-touch distance clusters that map these distances against their corresponding azimuth angle.

We applied K-means clustering on this grid to determine the grouping and used AIC and BIC (Fig. 2(D)) to identify the best number of clusters (k). Both these measures indicated that k = 3 (AICk=3 = 11.986 and BICk=3 = 11.987). In Fig. 2(D), the values associated with k = 1, …, 5 are: AICk=1 = 12.034, AICk=2 = 12.034, AICk=3 = 11.986, AICk=4 = 11.991, AICk=5 = 11.993 and BICk=1 = 12.034, BICk=2 = 12.034, BICk=3 = 11.987, BICk=4 = 11.991, BICk=5 = 11.993. Figure 2(E) shows the resulting three clusters. We found that there were 1814, 5202, and 6440 facial pre-touch distance data points in C1 (MDistance = 34.38, SDDistance = 7.22, CIDistance = [34.11 34.66], azimuth ∈ [−51.30 44.47], elevation ∈ [−63.84 48.30]), C2 (MDistance = 15.87, SDDistance = 6.01, CIDistance = [15.73 16.01], azimuth ∈ [−42.78 40.52], elevation ∈ [−55.93 41.92]), and C3 (MDistance = 5.89, SDDistance = 3.95, CIDistance = [5.81 5.97], azimuth ∈ [−25.32 26.73], elevation ∈ [−27.30 25.62]), respectively. These data points corresponded to twenty-seven, forty-seven, and forty-four participants. Figure 2(F) illustrates these clusters in 2D, mapping these distances against their corresponding azimuth angle.

Facial pre-touch distance and openness correlation

We found that the participants' openness scores showed a significant anti-correlation with their facial pre-touch distances in C2 (Fig. 3A) (r = −0.33, p < 0.03, MDistance = 25.25, SDDistance = 5.23, MO = 4.67, SDO = 1.20) and C3 (Fig. 3B) (r = −0.40, p < 0.01, MDistance = 13.61, SDDistance = 3.10, MO = 4.72, SDO = 1.23).

Openness (O) versus pre-touch distance Spearman correlations. (A) Cluster C2 (B) Cluster C3 (C) Bootstrap (10,000 simulation runs) 95.0% confidence intervals (CI) of the Spearman correlation between participants' facial pre-touch distances and their FFM openness scores.
The mean of the bootstrapped correlation coefficients is shown with the yellow line, the 95.0% confidence intervals are the two red lines, and the null hypothesis H0 (i.e., no correlation) is the blue line. Table 1 summarizes the results of the bootstrap (Fig. 3C, 10,000 simulation runs) 95.0% confidence interval of these clusters' correlation analysis. This table confirms the observed significant anti-correlation between the participants' facial pre-touch distances and their FFM openness scores. Table 1 Bootstrap (10,000 simulation runs) 95.0% confidence intervals (CI) associated with the correlation analysis of the facial pre-touch distance and the participants' FFM openness scores. Openness prediction Overall prediction accuracy Kruskal-Wallis indicated (Fig. 4(A)) significant difference in KNN's prediction accuracy on different openness level (p < 0.001, H(5, 1211) = 153.14, r = 0.36). Posthoc Wilcoxon tests (Fig. 4(B) and Table 2) revealed that KNN overall accuracy (i.e., C2 and C3 combined) in the case of openness level 1 was only higher than openness level 3 and below all the other openness levels. We also observed that KNN overall accuracy in the case of openness level 2 was higher than all the other openness levels. These results also indicated that whereas the overall accuracy in the case of openness level 4 was non-significant in comparison with the openness level 5, it was significantly lower than the overall accuracy in the case of level 6. We also observed that KNN overall accuracy in the case of openness level 6 was significantly higher than 5. KNN accuracy. (A) Overall performance (i.e., six openness levels combined) and without considering the clusters. (B) Comparison of the accuracy between different openness levels and without considering the clusters. This figure illustrates the distribution of 200 simulation rounds in which we randomly assigned 30.0% of entire data to test set and used the remainder of data for training these models. While splitting the data, we also ensured that a proper proportion of each labels (i.e., 30.0% per label) was assigned to the test set. In this figure, the asterisks mark the significant differences between openness level prediction accuracies. Table 2 Pair-wise Wilcoxon rank sum p-value, test-statistics, effect size, and the mean and standard deviation of the openness levels' prediction accuracy by KNN (chance level accuracy ≈16.67%). Figure 5 and Table 3 show the results of the bootstrap (10,000 simulation runs) confidence intervals for KNN's overall accuracy (i.e., clusters C2 and C3 combined) paired openness levels. These results confirmed that KNN accuracy was significantly higher in the case of openness level 2 than all the other levels. They also indicated that its accuracy for the case of openness level 1 was only higher than openness level 3 (Fig. 5(B)) and lower than all the other labels. We also observed that whereas the accuracy for the openness level 4 showed no difference with respect to the level 5 (Fig. 5(M)) it was lower than that of the openness level 6 (Fig. 5(N)). Bootstrap (10,000 simulation runs) 99.0% confidence intervals (CI) for comparative analysis of the overall (i.e., clusters C2 and C3 combined) KNN accuracy. These subplots correspond to the difference between openness levels (A) 1 vs. 2 (B) 1 vs. 3 (C) 1 vs. 4 (D) 1 vs. 5 (E) 1 vs. 6 (F) 2 vs. 3 (G) 2 vs. 4 (H) 2 vs. 5 (I) 2 vs. 6 (J) 3 vs. 4 (K) 3 vs. 5 (L) 3 vs. 6 (M) 4 vs. 5 (N) 4 vs. 6 (O) 5 vs. 6. 
For each paired comparison the sample mean difference (i.e., μi−μj, i = 1, …, 6, j = 1, …, 6) is shown with the yellow line, the 99.0% confidence intervals are the two red lines, and the null hypothesis H0 (i.e., mean difference is zero) is the blue line. Subplot (M) indicates that the comparative overall KNN performance (i.e., combined C2 and C3) between openness levels 4 and 5 is non-significant. Table 3 Bootstrap (10,000 simulation runs) confidence intervals (CI) for comparative analysis of the overall (i.e., clusters C2 and C3 combined) KNN accuracy. comparative overall KNN performance (i.e., combined C2 and C3) between openness levels 4 and 5 is non-significant. C2 versus C3 predictions accuracy Kruskal-Wallis indicated a significant difference between the accuracies in C2 and C3 (p < 0.001, H(1, 1211) = 64.30, r = 0.23). Posthoc tests identified (Fig. 6(A) and Table 4) that whereas KNN accuracy in the case of openness levels 2 and 6 were higher for the cluster C3 than cluster C2, it performed significantly better in C2 than C3 in the case of openness levels 1, 3, 4, and 5. Figure 6(B) shows the overlaid KNN accuracies for openness levels 1 through 6 in C2 and C3 for better visualization of the effect. Figure 6(C) shows the precision, recall, and F1-score associated with KNN while predicting different openness levels in C2 and C3. Column "Support" refers to the number of each openness levels that were included in each of these clusters' test sets while testing the KNN predictions. The row "average" indicates the average precision, recall, and F1-score when all levels combined in their respective clusters. KNN accuracy. (A) C3 versus C2 in the case of within openness level. The asterisks mark the significant differences between openness level prediction accuracies. (B) Overlaid KNN accuracies for better visualization of the effect in clusters C3 and C2. (C) Precision, recall, and F1-score associated with KNN while predicting different openness level in C3 and C2. Column "Support" refers to the number of each openness levels that were included in each of these clusters' test sets while testing the KNN predictions. The row "average" indicates the average precision, recall, and F1-score when all levels combined in their respective clusters. Table 4 Level-wise prediction of the openness by KNN in C2 and C3: Wilcoxon rank sum p-value, test-statistics, effect size, and the mean and standard deviation of the openness levels' prediction accuracy by KNN (chance level accuracy ≈ 16.67%). Figure 7 and Table 5 show the results of the bootstrap (10,000 simulation runs) 99.0% confidence intervals for KNN performance on openness levels 1 through 6 in clusters C2 and C3. Entries of Table 5 confirm that while KNN achieved higher accuracies in C3 than C2 in the case of openness levels 2 (Fig. 7(B)) and 6 (Fig. 7(F)), its performance was significantly higher in C2 than C3 in the case of openness levels 1 (Fig. 7(A)), 3 (Fig. 7(C)), 4 (Fig. 7(D)), and 5 (Fig. 7(E)). However, we note that such a paired-wise difference was weaker in the case of openness level 4 (i.e., Fig. 7(D)) than the other five levels. Bootstrap (10,000 simulation runs) 99.0% confidence intervals (CI) for KNN performance on clusters C2 and C3 for paired openness (A) level 1 (B) level 2 (C) level 3 (D) level 4 (E) level 5 (F) level 6. 
For each paired comparison the sample mean difference (i.e., μC2−μC3) is shown with the yellow line, the 99.0% confidence intervals are the two red lines, and the null hypothesis H0 (i.e., mean difference is zero) is the blue line. Whereas KNN performed significantly better in C3 for openness levels 2 (subplot (B)) and 6 (subplot (F)), its accuracy was significantly higher in C2 for openness levels 1 (subplot (A)), 3 (subplot (C)), 4 (subplot (D)), and 5 (subplot (E)).

Table 5 Bootstrap (10,000 simulation runs) 99.0% confidence intervals (CI) for KNN performance on paired openness levels in C2 and C3. KNN accuracy was higher in C3 than in C2 in the case of openness levels 2 and 6. On the other hand, it achieved significantly higher performance in C2 than in C3 in the case of openness levels 1, 3, 4, and 5.

In this article we sought an answer to the question of whether individuals' personality traits are reflected in such tacit behavioural cues as preferred personal space. To examine this possibility, we considered a naturalistic scenario in which paired individuals signalled their preferred facial pre-touch distances. We considered facial-area touch interaction, as opposed to other body parts that are more openly shared during social interactions (e.g., shoulder patting), due to the higher sensitivity of people around their face, which makes the facial boundary play a substantial role in understanding people's behavioural responses within the context of touch interaction. The results of the cluster analysis of these facial pre-touch distances indicated potential patterns in individuals' facial personal space in the form of three distinct subspaces. They also specified that, within the first two immediate clusters around the face area, this distance information significantly anti-correlated with individuals' openness in FFM8. These results, which were in line with the previous findings on peripersonal space representation12 and the effect of anxiety on such a space14, complemented the observations on the bodily maps of subjective feelings25 by providing evidence that such internal states are also present in our embodied interaction and its associated personal space17,18,19. They also extended the previous research that pointed at the connection between individuals' personality and their brain functioning1,5, which can be traced throughout individuals' development38, to the case of such immediate and observable behavioural responses as preferred personal space.

Our results also indicated that individuals' sense of facial pre-touch space can significantly predict their personality trait of openness, which was further categorized into six distinct groups. These results complemented the previous research showing that the personality traits8 can further be divided into four personality types4 by providing evidence on the correspondence between individuals' preferred personal space and the level of openness in their personality at a finer grain. Although previous research pointed at the relation between individuals' psychological characteristics and such behavioural responses as personal space29, these results suffered from a lack of consensus on the interplay between personality and personal space28. Our results contributed to these previous findings by providing evidence that identified the role of individuals' personality in shaping their personal space, thereby allowing for a more informed conclusion on the influence of the personality traits on our behavioural responses and psychological capacities2.
From a broader perspective, our results are potentially useful in such applied contexts as clinical settings related to psychopathological scenarios in which the patients' acute psychological conditions directly affect their prospects about their inter/personal space and its boundary39,40. Considering the fast emergence of embodied agents in our society41,42, our findings can also benefit the research in human-robot interaction (HRI) in which researchers urge for more robust evaluations that are founded on theoretical than sheer empirical approaches43,44 to enable these agents to better meet the grand social challenges45 [p.9] that these agents may encounter during their interaction with individuals46. For instance, an embodied agent that can determine the individuals' level of openness using their preferred personal space during their interactions can better serve them when deployed for health-related interventions47,48,49. There are several limitations in our study that require future consideration. Although our data included a moderately large number of samples, the small number of participants that only included younger adults does not allow for extension of our findings to all age groups (children, adolescents, and older people). In addition, our data did not include individuals from different geographical and cultural background. The absence of such a diversity that potentially plays a significant role in defining such trends as personal space and interpersonal distance does not allow for our results to be readily extended to all cultures and populations. Moreover, limiting the individuals' behavioural responses to their facial area does not warrant the applicability of our results to overall embodied interaction of human subjects. Our findings also highlight a challenge for future studies. Specifically, our results identified a significant correspondence between individuals' openness and their personal space that predicted these individuals openness personality trait in six distinct categories. However, they left the utility of personal space and interpersonal distance in determination of other personality factors (e.g., neuroticism, agreeableness, etc.) unanswered. In this regard, we found a significant anti-correlation between individuals' responses to questionnaires on openness and their degree of neuroticism (for details, see SM). Despite the possibility that such an observation might lead to expecting a relation between neuroticism and the personal space (e.g., the higher the neurotic feeling the larger the personal distance which opposes the results in the case of openness), we did not observe such a correspondence in our results. Future research can shed light on such potentially counterintuitive observations. Liu, W., Kohn, N. & Fernández, G. Intersubject similarity of personality is associated with intersubject similarity of brain connectivity patterns. NeuroImage 186, 56–69 (2019). Xin, Y. et al. The relationship between personality and the response to acute psychological stress. Scientific Reports 7, 16906 (2017). Widiger, T. A. Oxford handbook of the five factor model of personality. Oxford University Press, Oxford (2015). Gerlach, M., Farb, B., Revelle, W. & Amaral, L. A. N. A robust data-driven approach identifies four personality types across four large data sets. Nature Human Behaviour 2, 735–742 (2018). Allen, T. A. & DeYoung, C. G. Personality neuroscience and the five factor model. Oxford handbook of the five factor model, 319–352 (2017). Revelle, W., Wilt, J. 
& Condon, D. M. Wiley-Blackwell handbook of individual differences. (eds Chamorro-Premuzic, T. et al. .) Wiley- Blackwell, Oxford, 1–38 (2013). McCrae, R. R. & Costa, P. T. SAGE handbook of personality theory and assessment: volume 1 personality theories and models. (eds Boyle, G. J. et al. .) SAGE, London, 273–294 (2008). Goldberg, L. R. An alternative"description of personality": the big-five factor structure. Journal of Personality and Social Psychology 59, 1216–1229 (1990). Ozer, D. J. & Benet-Martinez, V. Personality and the prediction of consequential outcomes. Annual Review of Psychology 57, 401–421 (2006). Widiger, T. A. & Costa, P. T., Jr. Personality disorders and the five-factor model of personality. 3rd edn American Psychological Association, Washington DC (2013). DeYoung, C. G. & Allen, T. A. Personality neuroscience: a developmental perspective. Guilford Handbook of Personality Development 79–105 (2019). Serino, A. et al. Body part-centered and full body-centered peripersonal space representations. Scientific Reports 5, 18603 (2015). Argyle, M. Bodily communication. Methuen, NY, (1975). Sambo, C. F. & Iannetti, G. D. Better safe than sorry? The safety margin surrounding the body is increased by anxiety. Journal of Neuroscience 33, 14225–14230 (2013). Lourenco, S. F., Longo, M. R. & Pathman, T. Near space and its relation to claustrophobic fear. Cognition 119, 448–453 (2011). Reeves, B. & Nass, C. I. The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press, 37–51 (1996). Sommer, R. Personal Space: The Behavioral Basis of Design. Prentice-Hall, Inc., Englewood Cliffs, NJ (1969). Hall, E. T. The silent language. Anchor Books, NY (1959). Hall, E. T. The hidden dimension. Anchor Books, NY (1963). Gallace, A. & Spence, C. The science of interpersonal touch: an overview. Neuroscience & Biobehavioral Reviews 34, 246–259 (2010). Field, T. Touch for socioemotional and physical well-being: A review. Developmental Review 30, 367–383 (2010). Chatel-Goldman, J., Congedo, M., Jutten, C. & Schwartz, J. L. Touch increases autonomic coupling between romantic partners. Frontiers in Behavioral Neuroscience 8, 95 (2014). Yun, K., Watanabe, K. & Shimojo, S. Interpersonal body and neural synchronization as a marker of implicit social interaction. Scientific Reports 2, 959 (2012). Singh, H. et al. The brain's response to pleasant touch: An EEG investigation of tactile caressing. Frontiers in Human. Neuroscience 8, 893 (2014). Nummenmaa, L., Glerean, E., Hari, R. & Hietanen, J. K. Bodily maps of emotions. Proceedings of the National Academy of Sciences 111, 646–651 (2014). Nummenmaa, L., Hari, R., Hietanen, J. K. & Glerean, E. Maps of subjective feelings. Proceedings of the National Academy of Sciences 115, 9198–9203 (2018). Kövecses, Z. Metaphor and emotion: Language, culture, and body in human feeling. Cambridge University Press (2003). Hayduk, L. A. Personal space: Where we now stand. Psychological Bulletin 94, 293–335 (1983). Ickinger, W. J. & Morris, S. Psychological characteristics and interpersonal distance. Tulane University (2001). Makiko, K. & Mochimaru, M. Japanese Head Size Database 2001 (In Japanese). AIST, H16PRO-212 (2008). Giancola, S., Corti, A., Molteni, F. & Sala, R. Motion Capture: An Evaluation of Kinect V2 Body Tracking for Upper Limb Motion Analysis. International Conference on Wireless Mobile Communication and Healthcare, 302–309 (2016). Oshio, A., Abe Shingo, S. & Cutrone, P. 
Development, reliability, and validity of the Japanese version of Ten Item Personality Inventory (TIPI-J). Japanese Journal of Personality 21, 40–52 (2012). Oshio, A., Abe, S., Cutrone, P. & Gosling, S. D. Further validity of the Japanese version of the Ten Item Personality Inventory (TIPI-J). Journal of Individual Differences (2014). Liao, T. W. Clustering of time series data - a survey. Pattern Recognition 39, 1857–1874 (2005). Rosenthal, R. & DiMatteo, M. R. Meta-analysis: recent developments n quantitative methods for literature reviews. Annual Review of Psychology 52, 59–82 (2001). Tomczak, M. & Tomczak, E. The need to report effect size estimates revisited. an overview of some recommended measures of effect size. Trends in Sport Sciences 1, 19–25 (2014). Allen, M., Poggiali, D., Whitaker, K., Marshall, T. R. & Kievit, R. Raincloud plots: a multi-platform tool for robust data visualization. PeerJ Preprints 6, e27137v1 (2018). DeYoung, C. G. & Allen, T. A. Personality neuroscience: A developmental perspective. The Handbook of Personality Development, 79–105 (2018). Noel, J. P., Cascio, C. J., Wallace, M. T. & Park, S. The spatial self in schizophrenia and autism spectrum disorder. Schizophre nia Research 179, 8–12 (2017). Mul, C. L. et al. Altered bodily self-consciousness and peripersonal space in autism. Autism (2019). Matarić, M. Socially assistive robotics: Human augmentation versus automation. Science Robotics 2, eaam5410 (2017). Tanaka, F., Cicourel, A. & Movellan, J. R. Socialization between toddlers and robots at an early childhood education center. Proceedings of the National Academy of Sciences 104, 17954–17958 (2007). Scassellati, B. Theory of mind for a humanoid robot. Autonomous Robots 12, 13–24 (2002). Jung, M. & Hinds, P. Robots in the wild: A time for more robust theories of human-robot interaction. ACM Transactions on Human-Robot Interaction (THRI), 7 (2018). Yang, G. Z. et al. The grand challenges of Science Robotics. Science Robotics 3, eaar7650 (2018). Clabaugh, C. & Matarić, M. Robots for the people, by the people: Personalizing human-machine interaction. Science Robotics, 3 (2018). Scassellati, B. et al. Improving social skills in children with ASD using a long-term, in-home social robot. Science Robotics 3, eaat7544 (2018). Valenti Soler, M. et al. Social robots in advanced dementia. Frontiers in Aging. Neuroscience 7, 133 (2015). Robinson, H., MacDonald, B., Kerse, N. & Broadbent, E. The psychosocial effects of a companion robot: a randomized controlled trial. Journal of the American Medical Directors Association 14, 661–667 (2013). This research work was supported in part by JST CREST Grant Number JPMJCR18A1, Japan, the JST ERATO Ishiguro Symbiotic Human Robot Interaction Project (Grant Number: JPMJER1401), and JSPS KAKENHI Grant Numbers JP17K00293 and JP19K20746. Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan Soheil Keshmiri, Masahiro Shiomi, Kodai Shatani, Takashi Minato & Hiroshi Ishiguro Graduate School of Engineering Science, Osaka University, Osaka, Japan Kodai Shatani & Hiroshi Ishiguro Soheil Keshmiri Masahiro Shiomi Kodai Shatani Takashi Minato S.K., M.S. and T.M. contributed equally. M.S. was the group lead. He designed and supervised the experiment. K.S. conducted the experiment. S.K. performed the analyses. As the head of Hiroshi Ishiguro Laboratories (HIL), H.I. oversees the entire activity of all research teams and themes, ensuring the soundness of all proposals, quality of results, and their validity. S.K., M.S. 
and T.M. prepared the manuscript. Correspondence to Soheil Keshmiri. Keshmiri, S., Shiomi, M., Shatani, K. et al. Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals. Sci Rep 9, 11924 (2019). https://doi.org/10.1038/s41598-019-48481-x
14.4: Double Integrals in Polar Form

Recognize the format of a double integral over a polar rectangular region.
Evaluate a double integral in polar coordinates by using an iterated integral.
Recognize the format of a double integral over a general polar region.
Use double integrals in polar coordinates to calculate areas and volumes.

Double integrals are sometimes much easier to evaluate if we change rectangular coordinates to polar coordinates. However, before we describe how to make this change, we need to establish the concept of a double integral in a polar rectangular region.

When we defined the double integral for a continuous function in rectangular coordinates—say, \(g\) over a region \(R\) in the \(xy\)-plane—we divided \(R\) into subrectangles with sides parallel to the coordinate axes. These sides have either constant \(x\)-values and/or constant \(y\)-values. In polar coordinates, the shape we work with is a polar rectangle, whose sides have constant \(r\)-values and/or constant \(\theta\)-values. This means we can describe a polar rectangle as in Figure \(\PageIndex{1a}\), with \(R = \{(r,\theta)\,|\, a \leq r \leq b, \, \alpha \leq \theta \leq \beta\}\).

Figure \(\PageIndex{1}\): (a) A polar rectangle \(R\) (b) divided into subrectangles \(R_{ij}\) (c) Close-up of a subrectangle.

In this section, we are looking to integrate over polar rectangles. Consider a function \(f(r,\theta)\) over a polar rectangle \(R\). We divide the interval \([a,b]\) into \(m\) subintervals \([r_{i-1}, r_i]\) of length \(\Delta r = (b - a)/m\) and divide the interval \([\alpha, \beta]\) into \(n\) subintervals \([\theta_{j-1}, \theta_j]\) of width \(\Delta \theta = (\beta - \alpha)/n\). This means that the circles \(r = r_i\) and rays \(\theta = \theta_j\) for \(1 \leq i \leq m\) and \(1 \leq j \leq n\) divide the polar rectangle \(R\) into smaller polar subrectangles \(R_{ij}\) (Figure \(\PageIndex{1b}\)).

As before, we need to find the area \(\Delta A\) of the polar subrectangle \(R_{ij}\) and the "polar" volume of the thin box above \(R_{ij}\). Recall that, in a circle of radius \(r\), the length \(s\) of an arc subtended by a central angle of \(\theta\) radians is \(s = r\theta\). Notice that the polar rectangle \(R_{ij}\) looks a lot like a trapezoid with parallel sides \(r_{i-1}\Delta \theta\) and \(r_i\Delta \theta\) and with a width \(\Delta r\). Hence the area of the polar subrectangle \(R_{ij}\) is

\[\Delta A = \frac{1}{2} \Delta r (r_{i-1} \Delta \theta + r_i \Delta \theta ). \nonumber\]

Simplifying and letting

\[r_{ij}^* = \frac{1}{2}(r_{i-1}+r_i) \nonumber\]

we have \(\Delta A = r_{ij}^* \Delta r \Delta \theta\). Therefore, the polar volume of the thin box above \(R_{ij}\) (Figure \(\PageIndex{2}\)) is

\[f(r_{ij}^*, \theta_{ij}^*) \Delta A = f(r_{ij}^*, \theta_{ij}^*)r_{ij}^* \Delta r \Delta \theta. \nonumber\]

Figure \(\PageIndex{2}\): Finding the volume of the thin box above polar rectangle \(R_{ij}\).

Using the same idea for all the subrectangles and summing the volumes of the rectangular boxes, we obtain a double Riemann sum as

\[\sum_{i=1}^m \sum_{j=1}^n f(r_{ij}^*, \theta_{ij}^*) r_{ij}^* \Delta r \Delta \theta.
\nonumber\] As we have seen before, we obtain a better approximation to the polar volume of the solid above the region \(R\) when we let \(m\) and \(n\) become larger. Hence, we define the polar volume as the limit of the double Riemann sum, \[V = \lim_{m,n\rightarrow\infty}\sum_{i=1}^m \sum_{j=1}^n f(r_{ij}^*, \theta_{ij}^*) r_{ij}^* \Delta r \Delta \theta. \nonumber\] This becomes the expression for the double integral. Definition: The double integral in polar coordinates The double integral of the function \(f(r, \theta)\) over the polar rectangular region \(R\) in the \(r\theta\)-plane is defined as \[\begin{align} \iint_R f(r, \theta)dA &= \lim_{m,n\rightarrow\infty}\sum_{i=1}^m \sum_{j=1}^n f(r_{ij}^*, \theta_{ij}^*) \Delta A \\[4pt] &= \lim_{m,n\rightarrow \infty} \sum_{i=1}^m \sum_{j=1}^n f(r_{ij}^*,\theta_{ij}^*)r_{ij}^* \Delta r \Delta \theta. \end{align}\] Again, just as in section on Double Integrals over Rectangular Regions, the double integral over a polar rectangular region can be expressed as an iterated integral in polar coordinates. Hence, \[\iint_R f(r, \theta)\,dA = \iint_R f(r, \theta) \,r \, dr \, d\theta = \int_{\theta=\alpha}^{\theta=\beta} \int_{r=a}^{r=b} f(r,\theta) \,r \, dr \, d\theta.\] Notice that the expression for \(dA\) is replaced by \(r \, dr \, d\theta\) when working in polar coordinates. Another way to look at the polar double integral is to change the double integral in rectangular coordinates by substitution. When the function \(f\) is given in terms of \(x\) and \(y\) using \(x = r \, \cos \, \theta, \, y = r \, \sin \, \theta\), and \(dA = r \, dr \, d\theta\) changes it to \[\iint_R f(x,y) \,dA = \iint_R f(r \, \cos \, \theta, \, r \, \sin \, \theta ) \,r \, dr \, d\theta.\] Note that all the properties listed in section on Double Integrals over Rectangular Regions for the double integral in rectangular coordinates hold true for the double integral in polar coordinates as well, so we can use them without hesitation. Example \(\PageIndex{1A}\): Sketching a Polar Rectangular Region Sketch the polar rectangular region \[R = \{(r, \theta)\,|\,1 \leq r \leq 3, 0 \leq \theta \leq \pi \}. \nonumber\] As we can see from Figure \(\PageIndex{3}\), \(r = 1\) and \(r = 3\) are circles of radius 1 and 3 and \(0 \leq \theta \leq \pi\) covers the entire top half of the plane. Hence the region \(R\) looks like a semicircular band. Figure \(\PageIndex{3}\): The polar region \(R\) lies between two semicircles. Now that we have sketched a polar rectangular region, let us demonstrate how to evaluate a double integral over this region by using polar coordinates. Example \(\PageIndex{1B}\): Evaluating a Double Integral over a Polar Rectangular Region Evaluate the integral \(\displaystyle \iint_R 3x \, dA\) over the region \(R = \{(r, \theta)\,|\,1 \leq r \leq 2, \, 0 \leq \theta \leq \pi \}.\) First we sketch a figure similar to Figure \(\PageIndex{3}\), but with outer radius \(r=2\). From the figure we can see that we have \[\begin{align*} \iint_R 3x \, dA &= \int_{\theta=0}^{\theta=\pi} \int_{r=1}^{r=2} 3r \, \cos \, \theta \,r \, dr \, d\theta \text{Use an integral with correct limits of integration.} \\ &= \int_{\theta+0}^{\theta=\pi} \cos \, \theta \left[\left. r^3\right|_{r=1}^{r=2}\right] d\theta \text{Integrate first with respect to $r$.} \\ &=\int_{\theta=0}^{\theta=\pi} 7 \, \cos \, \theta \, d\theta \\ &= 7 \, \sin \, \theta \bigg|_{\theta=0}^{\theta=\pi} = 0. 
\end{align*}\] Exercise \(\PageIndex{1}\) Sketch the region \(D = \{ (r,\theta) \vert 1\leq r \leq 2, \, -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2} \}\), and evaluate \(\displaystyle \iint_R x \, dA\). Follow the steps in Example \(\PageIndex{1A}\). \(\frac{14}{3}\) Example \(\PageIndex{2A}\): Evaluating a Double Integral by Converting from Rectangular Coordinates Evaluate the integral \[\iint_R (1 - x^2 - y^2) \,dA \nonumber\] where \(R\) is the unit circle on the \(xy\)-plane. The region \(R\) is a unit circle, so we can describe it as \(R = \{(r, \theta )\,|\,0 \leq r \leq 1, \, 0 \leq \theta \leq 2\pi \}\). Using the conversion \(x = r \, \cos \, \theta, \, y = r \, \sin \, \theta\), and \(dA = r \, dr \, d\theta\), we have \[\begin{align*} \iint_R (1 - x^2 - y^2) \,dA &= \int_0^{2\pi} \int_0^1 (1 - r^2) \,r \, dr \, d\theta \\[4pt] &= \int_0^{2\pi} \int_0^1 (r - r^3) \,dr \, d\theta \\ &= \int_0^{2\pi} \left[\frac{r^2}{2} - \frac{r^4}{4}\right]_0^1 \,d\theta \\&= \int_0^{2\pi} \frac{1}{4}\,d\theta = \frac{\pi}{2}. \end{align*}\] Example \(\PageIndex{2B}\): Evaluating a Double Integral by Converting from Rectangular Coordinates Evaluate the integral \[\displaystyle \iint_R (x + y) \,dA \nonumber\] where \(R = \big\{(x,y)\,|\,1 \leq x^2 + y^2 \leq 4, \, x \leq 0 \big\}.\) We can see that \(R\) is an annular region that can be converted to polar coordinates and described as \(R = \left\{(r, \theta)\,|\,1 \leq r \leq 2, \, \frac{\pi}{2} \leq \theta \leq \frac{3\pi}{2} \right\}\) (see the following graph). Figure \(\PageIndex{4}\): The annular region of integration \(R\). Hence, using the conversion \(x = r \, \cos \, \theta, \, y = r \, \sin \, \theta\), and \(dA = r \, dr \, d\theta\), we have \[\begin{align*} \iint_R (x + y)\,dA &= \int_{\theta=\pi/2}^{\theta=3\pi/2} \int_{r=1}^{r=2} (r \, \cos \, \theta + r \, \sin \, \theta) r \, dr \, d\theta \\ &= \left(\int_{r=1}^{r=2} r^2 \, dr\right)\left(\int_{\pi/2}^{3\pi/2} (\cos \, \theta + \sin \, \theta)\,d\theta\right) \\ &= \left. \left[\frac{r^3}{3}\right]_1^2 [\sin \, \theta - \cos \, \theta] \right|_{\pi/2}^{3\pi/2} \\ &= - \frac{14}{3}. \end{align*}\] Evaluate the integral \[ \displaystyle \iint_R (4 - x^2 - y^2)\,dA \nonumber\] where \(R\) is the circle of radius 2 on the \(xy\)-plane. Follow the steps in the previous example. \(8\pi\) To evaluate the double integral of a continuous function by iterated integrals over general polar regions, we consider two types of regions, analogous to Type I and Type II as discussed for rectangular coordinates in section on Double Integrals over General Regions. It is more common to write polar equations as \(r = f(\theta)\) than \(\theta = f(r)\), so we describe a general polar region as \(R = \{(r, \theta)\,|\,\alpha \leq \theta \leq \beta, \, h_1 (\theta) \leq r \leq h_2(\theta)\}\) (Figure \(\PageIndex{5}\)). Figure \(\PageIndex{5}\): A general polar region between \(\alpha < \theta < \beta\) and \(h_1(\theta) < r < h_2(\theta)\). Theorem: Double Integrals over General Polar Regions If \(f(r, \theta)\) is continuous on a general polar region \(D\) as described above, then \[\iint_D f(r, \theta ) \,r \, dr \, d\theta = \int_{\theta=\alpha}^{\theta=\beta} \int_{r=h_1(\theta)}^{r=h_2(\theta)} f(r,\theta) \, r \, dr \, d\theta.\] Example \(\PageIndex{3}\): Evaluating a Double Integral over a General Polar Region \[\iint_D r^2 \sin \theta \, r \, dr \, d\theta \nonumber\] where \(D\) is the region bounded by the polar axis and the upper half of the cardioid \(r = 1 + \cos \, \theta\). 
We can describe the region \(D\) as \(\{(r, \theta)\,|\,0 \leq \theta \leq \pi, \, 0 \leq r \leq 1 + \cos \, \theta\} \) as shown in Figure \(\PageIndex{6}\). Figure \(\PageIndex{6}\): The region \(D\) is the top half of a cardioid. Hence, we have \[\begin{align*} \iint_D r^2 \sin \, \theta \, r \, dr \, d\theta &= \int_{\theta=0}^{\theta=\pi} \int_{r=0}^{r=1+\cos \theta} (r^2 \sin \, \theta) \,r \, dr \, d\theta \\ &= \frac{1}{4}\left.\int_{\theta=0}^{\theta=\pi}[r^4] \right|_{r=0}^{r=1+\cos \, \theta} \sin \, \theta \, d\theta \\ &= \frac{1}{4} \int_{\theta=0}^{\theta=\pi} (1 + \cos \, \theta )^4 \sin \, \theta \, d\theta \\ &= - \frac{1}{4} \left[ \frac{(1 + \cos \, \theta)^5}{5}\right]_0^{\pi} = \frac{8}{5}.\end{align*}\] \[\iint_D r^2 \sin^2 2\theta \,r \, dr \, d\theta \nonumber\] where \(D = \left\{ (r,\theta)\,|\,0 \leq \theta \leq \pi, \, 0 \leq r \leq 2 \sqrt{\cos \, 2\theta} \right\}\). Graph the region and follow the steps in the previous example. \(\frac{\pi}{8}\) As in rectangular coordinates, if a solid \(S\) is bounded by the surface \(z = f(r, \theta)\), as well as by the surfaces \(r = a, \, r = b, \, \theta = \alpha\), and \(\theta = \beta\), we can find the volume \(V\) of \(S\) by double integration, as \[V = \iint_R f(r, \theta) \,r \, dr \, d\theta = \int_{\theta=\alpha}^{\theta=\beta} \int_{r=a}^{r=b} f(r,\theta)\, r \, dr \, d\theta.\] If the base of the solid can be described as \(D = {(r, \theta)|\alpha \leq \theta \leq \beta, \, h_1 (\theta) \leq r \leq h_2(\theta)}\), then the double integral for the volume becomes \[V = \iint_D f(r, \theta) \,r \, dr \, d\theta = \int_{\theta=\alpha}^{\theta=\beta} \int_{r=h_1(\theta)}^{r=h_2(\theta)} f(r,\theta) \,r \, dr \, d\theta.\] We illustrate this idea with some examples. Example \(\PageIndex{4A}\): Finding a Volume Using a Double Integral Find the volume of the solid that lies under the paraboloid \(z = 1 - x^2 - y^2\) and above the unit circle on the \(xy\)-plane (Figure \(\PageIndex{7}\)). Figure \(\PageIndex{7}\): Finding the volume of a solid under a paraboloid and above the unit circle. By the method of double integration, we can see that the volume is the iterated integral of the form \[\displaystyle \iint_R (1 - x^2 - y^2)\,dA \nonumber\] where \(R = \big\{(r, \theta)\,|\,0 \leq r \leq 1, \, 0 \leq \theta \leq 2\pi\big\}\). This integration was shown before in Example \(\PageIndex{2A}\), so the volume is \(\frac{\pi}{2}\) cubic units. Example \(\PageIndex{4B}\): Finding a Volume Using Double Integration Find the volume of the solid that lies under the paraboloid \(z = 4 - x^2 - y^2\) and above the disk \((x - 1)^2 + y^2 = 1\) on the \(xy\)-plane. See the paraboloid in Figure \(\PageIndex{8}\) intersecting the cylinder \((x - 1)^2 + y^2 = 1\) above the \(xy\)-plane. Figure \(\PageIndex{8}\): Finding the volume of a solid with a paraboloid cap and a circular base. First change the disk \((x - 1)^2 + y^2 = 1\) to polar coordinates. Expanding the square term, we have \(x^2 - 2x + 1 + y^2 = 1\). Then simplify to get \(x^2 + y^2 = 2x\), which in polar coordinates becomes \(r^2 = 2r \, \cos \, \theta\) and then either \(r = 0\) or \(r = 2 \, \cos \, \theta\). Similarly, the equation of the paraboloid changes to \(z = 4 - r^2\). Therefore we can describe the disk \((x - 1)^2 + y^2 = 1\) on the \(xy\) -plane as the region \[D = \{(r,\theta)\,|\,0 \leq \theta \leq \pi, \, 0 \leq r \leq 2 \, \cos \theta\}. 
\nonumber\] Hence the volume of the solid bounded above by the paraboloid \(z = 4 - x^2 - y^2\) and below by \(r = 2 \, \cos \theta\) is

\[\begin{align*} V &= \iint_D f(r, \theta) \,r \, dr \, d\theta \\&= \int_{\theta=0}^{\theta=\pi} \int_{r=0}^{r=2 \, \cos \, \theta} (4 - r^2) \,r \, dr \, d\theta\\ &= \int_{\theta=0}^{\theta=\pi}\left.\left[4\frac{r^2}{2} - \frac{r^4}{4}\right|_0^{2 \, \cos \, \theta}\right]d\theta \\ &= \int_0^{\pi} [8 \, \cos^2\theta - 4 \, \cos^4\theta]\,d\theta \\&= \left[\frac{5}{2}\theta + \frac{5}{2} \sin \, \theta \, \cos \, \theta - \sin \, \theta \cos^3\theta \right]_0^{\pi} = \frac{5}{2}\pi\; \text{units}^3. \end{align*}\]

Notice in the next example that integration is not always easy with polar coordinates. Complexity of integration depends on the function and also on the region over which we need to perform the integration. If the region has a more natural expression in polar coordinates or if \(f\) has a simpler antiderivative in polar coordinates, then the change to polar coordinates is appropriate; otherwise, use rectangular coordinates.

Example \(\PageIndex{5A}\): Finding a Volume Using a Double Integral

Find the volume of the region that lies under the paraboloid \(z = x^2 + y^2\) and above the triangle enclosed by the lines \(y = x, \, x = 0\), and \(x + y = 2\) in the \(xy\)-plane.

First examine the region over which we need to set up the double integral and the accompanying paraboloid.

Figure \(\PageIndex{9}\): Finding the volume of a solid under a paraboloid and above a given triangle.

The region \(D\) is \(\{(x,y)\,|\,0 \leq x \leq 1, \, x \leq y \leq 2 - x\}\). Converting the lines \(y = x, \, x = 0\), and \(x + y = 2\) in the \(xy\)-plane to functions of \(r\) and \(\theta\), we have \(\theta = \pi/4, \, \theta = \pi/2\), and \(r = 2 / (\cos \, \theta + \sin \, \theta)\), respectively. Graphing the region on the \(xy\)-plane, we see that it looks like \(D = \{(r, \theta)\,|\,\pi/4 \leq \theta \leq \pi/2, \, 0 \leq r \leq 2/(\cos \, \theta + \sin \, \theta)\}\). Now converting the equation of the surface gives \(z = x^2 + y^2 = r^2\). Therefore, the volume of the solid is given by the double integral

\[\begin{align*} V &= \iint_D f(r, \theta)\,r \, dr \, d\theta \\&= \int_{\theta=\pi/4}^{\theta=\pi/2} \int_{r=0}^{r=2/ (\cos \, \theta + \sin \, \theta)} r^2 \, r \, dr \, d\theta \\ &= \int_{\pi/4}^{\pi/2}\left[\frac{r^4}{4}\right]_0^{2/(\cos \, \theta + \sin \, \theta)} d\theta \\ &=\frac{1}{4}\int_{\pi/4}^{\pi/2} \left(\frac{2}{\cos \, \theta + \sin \, \theta}\right)^4 d\theta \\ &= \frac{16}{4} \int_{\pi/4}^{\pi/2} \left(\frac{1}{\cos \, \theta + \sin \, \theta} \right)^4 d\theta \\&= 4\int_{\pi/4}^{\pi/2} \left(\frac{1}{\cos \, \theta + \sin \, \theta}\right)^4 d\theta. \end{align*}\]

As you can see, this integral is very complicated. So, we can instead evaluate this double integral in rectangular coordinates as

\[V = \int_0^1 \int_x^{2-x} (x^2 + y^2) \,dy \, dx. \nonumber\]

Evaluating gives

\[\begin{align*} V &= \int_0^1 \int_x^{2-x} (x^2 + y^2) \,dy \, dx \\&= \int_0^1 \left.\left[x^2y + \frac{y^3}{3}\right]\right|_x^{2-x} dx\\ &= \int_0^1 \left(\frac{8}{3} - 4x + 4x^2 - \frac{8x^3}{3}\right) dx \\ &= \left.\left[\frac{8x}{3} - 2x^2 + \frac{4x^3}{3} - \frac{2x^4}{3}\right]\right|_0^1 \\&= \frac{4}{3} \; \text{units}^3. \end{align*}\]
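As a quick cross-check of the example above, the sketch below evaluates the rectangular form exactly and the polar form numerically; both give 4/3. Using sympy here is an assumption for illustration, since the text itself relies on hand computation.

```python
import sympy as sp

x, y, r, t = sp.symbols('x y r theta')

# Rectangular form: integrate x^2 + y^2 over the triangle 0 <= x <= 1, x <= y <= 2 - x.
V_rect = sp.integrate(x**2 + y**2, (y, x, 2 - x), (x, 0, 1))

# Polar form: z = r^2 and dA = r dr dtheta, with r from 0 to 2/(cos(theta) + sin(theta))
# and theta from pi/4 to pi/2; the outer integral is evaluated numerically.
inner = sp.integrate(r**2 * r, (r, 0, 2 / (sp.cos(t) + sp.sin(t))))
V_polar = sp.Integral(inner, (t, sp.pi / 4, sp.pi / 2)).evalf()

print(V_rect, V_polar)   # 4/3 and approximately 1.3333
```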
Example \(\PageIndex{5B}\): Finding a Volume Using a Double Integral Use polar coordinates to find the volume inside the cone \(z = 2 - \sqrt{x^2 + y^2}\) and above the \(xy\)-plane. The region \(D\) for the integration is the base of the cone, which appears to be a circle on the \(xy\)-plane (Figure \(\PageIndex{10}\)). Figure \(\PageIndex{10}\): Finding the volume of a solid inside the cone and above the \(xy\)-plane. We find the equation of the circle by setting \(z = 0\): \[\begin{align*} 0 &= 2 - \sqrt{x^2 + y^2} \\ 2 &= \sqrt{x^2 + y^2} \\ x^2 + y^2 &= 4. \end{align*}\] This means the radius of the circle is \(2\) so for the integration we have \(0 \leq \theta \leq 2\pi\) and \(0 \leq r \leq 2\). Substituting \(x = r \, \cos \theta\) and \(y = r \, \sin \, \theta\) in the equation \(z = 2 - \sqrt{x^2 + y^2}\) we have \(z = 2 - r\). Therefore, the volume of the cone is \[\int_{\theta=0}^{\theta=2\pi} \int_{r=0}^{r=2} (2 - r)\,r \, dr \, d\theta = 2 \pi \frac{4}{3} = \frac{8\pi}{3}\; \text{cubic units.} \nonumber\] Note that if we were to find the volume of an arbitrary cone with radius \(a\) units and height \(h\) units, then the equation of the cone would be \(z = h - \frac{h}{a}\sqrt{x^2 + y^2}\). We can still use Figure \(\PageIndex{10}\) and set up the integral as \[\int_{\theta=0}^{\theta=2\pi} \int_{r=0}^{r=a} \left(h - \frac{h}{a}r\right) r \, dr \, d\theta. \nonumber\] Evaluating the integral, we get \(\frac{1}{3} \pi a^2 h\). Use polar coordinates to find an iterated integral for finding the volume of the solid enclosed by the paraboloids \(z = x^2 + y^2\) and \(z = 16 - x^2 - y^2\). Hint: Sketching the graphs can help. Answer: \[V = \int_0^{2\pi} \int_0^{2\sqrt{2}} (16 - 2r^2) \,r \, dr \, d\theta = 64 \pi \; \text{cubic units.} \nonumber\] As with rectangular coordinates, we can also use polar coordinates to find areas of certain regions using a double integral. As before, we need to understand the region whose area we want to compute. Sketching a graph and identifying the region can be helpful to realize the limits of integration. Generally, the area formula in double integration will look like \[\text{Area of} \, A = \int_{\alpha}^{\beta} \int_{h_1(\theta)}^{h_2(\theta)} 1 \,r \, dr \, d\theta.\] Example \(\PageIndex{6A}\): Finding an Area Using a Double Integral in Polar Coordinates Evaluate the area bounded by the curve \(r = \cos \, 4\theta\). Sketching the graph of the function \(r = \cos \, 4\theta\) reveals that it is a polar rose with eight petals (see the following figure). Figure \(\PageIndex{11}\): Finding the area of a polar rose with eight petals. Using symmetry, we can see that we need to find the area of one petal and then multiply it by 8. Notice that the values of \(\theta\) for which the graph passes through the origin are the zeros of the function \(\cos \, 4\theta\), and these are odd multiples of \(\pi/8\). Thus, one of the petals corresponds to the values of \(\theta\) in the interval \([-\pi/8, \pi/8]\). Therefore, the area bounded by the curve \(r = \cos \, 4\theta\) is \[\begin{align*} A &= 8 \int_{\theta=-\pi/8}^{\theta=\pi/8} \int_{r=0}^{r=\cos \, 4\theta} 1\,r \, dr \, d\theta \\ &= 8 \int_{\theta=-\pi/8}^{\theta=\pi/8}\left.\left[\frac{1}{2}r^2\right|_0^{\cos \, 4\theta}\right] d\theta \\ &= 8 \int_{-\pi/8}^{\pi/8} \frac{1}{2} \cos^24\theta \, d\theta \\&= 8\left. \left[\frac{1}{4} \theta + \frac{1}{16} \sin \, 4\theta \, \cos \, 4\theta \right|_{-\pi/8}^{\pi/8}\right] \\&= 8 \left[\frac{\pi}{16}\right] = \frac{\pi}{2}\; \text{units}^2.
\end{align*}\] Example \(\PageIndex{6B}\): Finding Area Between Two Polar Curves Find the area enclosed by the circle \(r = 3 \, \cos \, \theta\) and the cardioid \(r = 1 + \cos \, \theta\). First and foremost, sketch the graph of the region (Figure \(\PageIndex{12}\)). Figure \(\PageIndex{12}\): Finding the area enclosed by both a circle and a cardioid. We can see from the symmetry of the graph that we need to find the points of intersection. Setting the two equations equal to each other gives \[3 \, \cos \, \theta = 1 + \cos \, \theta. \nonumber\] One of the points of intersection is \(\theta = \pi/3\). The area above the polar axis consists of two parts, with one part defined by the cardioid from \(\theta = 0\) to \(\theta = \pi/3\) and the other part defined by the circle from \(\theta = \pi/3\) to \(\theta = \pi/2\). By symmetry, the total area is twice the area above the polar axis. Thus, we have \[A = 2 \left[\int_{\theta=0}^{\theta=\pi/3} \int_{r=0}^{r=1+\cos \, \theta} 1 \,r \, dr \, d\theta + \int_{\theta=\pi/3}^{\theta=\pi/2} \int_{r=0}^{r=3 \, \cos \, \theta} 1\,r \, dr \, d\theta \right]. \nonumber\] Evaluating each piece separately, we find that the area is \[A = 2 \left(\frac{1}{4}\pi + \frac{9}{16} \sqrt{3} + \frac{3}{8} \pi - \frac{9}{16} \sqrt{3} \right) = 2 \left(\frac{5}{8}\pi\right) = \frac{5}{4}\pi \, \text{square units.} \nonumber\] Find the area enclosed inside the cardioid \(r = 3 - 3 \, \sin \theta\) and outside the cardioid \(r = 1 + \sin \theta\). Hint: Sketch the graph, and solve for the points of intersection. Answer: \[A = 2 \int_{-\pi/2}^{\pi/6} \int_{1+\sin \, \theta}^{3-3\sin \, \theta} \,r \, dr \, d\theta = \left(8 \pi + 9 \sqrt{3}\right) \; \text{units}^2 \nonumber\] Example \(\PageIndex{7}\): Evaluating an Improper Double Integral in Polar Coordinates Evaluate the integral \[\iint_{R^2} e^{-10(x^2+y^2)} \,dx \, dy. \nonumber\] This is an improper integral because we are integrating over an unbounded region \(R^2\). In polar coordinates, the entire plane \(R^2\) can be seen as \(0 \leq \theta \leq 2\pi, \, 0 \leq r \leq \infty\). Using the changes of variables from rectangular coordinates to polar coordinates, we have \[\begin{align*} \iint_{R^2} e^{-10(x^2+y^2)}\,dx \, dy &= \int_{\theta=0}^{\theta=2\pi} \int_{r=0}^{r=\infty} e^{-10r^2}\,r \, dr \, d\theta = \int_{\theta=0}^{\theta=2\pi} \left(\lim_{a\rightarrow\infty} \int_{r=0}^{r=a} e^{-10r^2}r \, dr \right) d\theta \\ &=\left(\int_{\theta=0}^{\theta=2\pi} d\theta \right) \left(\lim_{a\rightarrow\infty} \int_{r=0}^{r=a} e^{-10r^2}r \, dr \right) \\ &=2\pi \left(\lim_{a\rightarrow\infty} \int_{r=0}^{r=a} e^{-10r^2}r \, dr \right) \\ &=2\pi \lim_{a\rightarrow\infty}\left(-\frac{1}{20}\right)\left(\left. e^{-10r^2}\right|_0^a\right) \\ &=2\pi \left(-\frac{1}{20}\right)\lim_{a\rightarrow\infty}\left(e^{-10a^2} - 1\right) \\ &= \frac{\pi}{10}. \end{align*}\] Evaluate the integral \[\iint_{R^2} e^{-4(x^2+y^2)}dx \, dy. \nonumber\] Hint: Convert to the polar coordinate system. Answer: \(\frac{\pi}{4}\). To apply a double integral to a situation with circular symmetry, it is often convenient to use a double integral in polar coordinates. We can apply these double integrals over a polar rectangular region or a general polar region, using an iterated integral similar to those used with rectangular double integrals. The area \(dA\) in polar coordinates becomes \(r \, dr \, d\theta\). Use \(x = r \, \cos \, \theta, \, y = r \, \sin \, \theta\), and \(dA = r \, dr \, d\theta\) to convert an integral in rectangular coordinates to an integral in polar coordinates.
Use \(r^2 = x^2 + y^2\) and \(\theta = \tan^{-1} \left(\frac{y}{x}\right)\) to convert an integral in polar coordinates to an integral in rectangular coordinates, if needed. To find the volume in polar coordinates bounded above by a surface \(z = f(r, \theta)\) over a region on the \(xy\)-plane, use a double integral in polar coordinates. Double integral over a polar rectangular region \(R\) \[\iint_R f(r, \theta) dA = \lim_{m,n\rightarrow\infty}\sum_{i=1}^m \sum_{j=1}^n f(r_{ij}^*, \theta_{ij}^*) \Delta A = \lim_{m,n\rightarrow\infty}\sum_{i=1}^m \sum_{j=1}^nf(r_{ij}^*,\theta_{ij}^*)r_{ij}^*\Delta r \Delta \theta \nonumber \] Double integral over a general polar region \[\iint_D f(r, \theta)\,r \, dr \, d\theta = \int_{\theta=\alpha}^{\theta=\beta} \int_{r=h_1(\theta)}^{r=h_2(\theta)} f (r,\theta) \,r \, dr \, d\theta \nonumber\] polar rectangle: the region enclosed between the circles \(r = a\) and \(r = b\) and the angles \(\theta = \alpha\) and \(\theta = \beta\); it is described as \(R = \{(r, \theta)\,|\,a \leq r \leq b, \, \alpha \leq \theta \leq \beta\}\) Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
CommonCrawl
Shortest path on unit cube in $\mathbb{R}^n$ The unit cube in $\mathbb{R}^n$ is the set of points $(x_1,...,x_n)$ such that $0 \leq x_i \leq 1$ for all $1 \leq i \leq n$. The surface of this cube is the set of points of the cube such that at least one of the $x_i$ equals either $0$ or $1$. What is the shortest path to travel from $(0,0,...,0)$ to $(1,1,...,1)$ only along points on the surface of the cube? In $\mathbb{R}^3$ the shortest path is $\sqrt{5}$, obtained by flattening the cube and drawing a diagonal. However, what would the shortest path in $\mathbb{R}^n$ be?
You can still flatten the hypercube and draw a diagonal that goes straight through two cubes (two hyperfaces). The shortest path is then the straight path from $(0,0,\ldots,0,0)$ to $(0,\frac 12,\ldots, \frac 12,1)$ and from there the straight path to $(1,1,\ldots,1,1)$. Both segments have length $\sqrt {1+\frac{n-2}4} = \frac 12\sqrt{n+2}$, so the length of the shortest path is $\sqrt{n+2}$.
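A quick numerical check of this formula (a minimal Python sketch, assuming NumPy is available; it only verifies the length of the path described above, not its optimality, which rests on the flattening argument):

```python
# Verify that the two-segment surface path has total length sqrt(n + 2).
import numpy as np

def surface_path_length(n):
    start = np.zeros(n)
    mid = np.array([0.0] + [0.5] * (n - 2) + [1.0])  # lies on the faces x_1 = 0 and x_n = 1
    end = np.ones(n)
    return np.linalg.norm(mid - start) + np.linalg.norm(end - mid)

for n in range(3, 8):
    print(n, surface_path_length(n), np.sqrt(n + 2))
# n = 3 gives approximately 2.236 = sqrt(5), matching the flattened-cube answer.
```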
CommonCrawl
Characterization, optimization and kinetics study of acetaminophen degradation by Bacillus drentensis strain S1 and waste water degradation analysis Sunil Chopra1 & Dharmender Kumar1 Bioresources and Bioprocessing volume 7, Article number: 9 (2020) Cite this article In this study, the biodegradation of N-acetyl-para-aminophenol, also known as acetaminophen (APAP, paracetamol), by the bacterial strain Bacillus drentensis strain S1 (accession no. KY623719) isolated from a sewage sample was studied. The Bacillus drentensis strain S1 was isolated from the sewage sample using the enrichment culture method. To our knowledge, this is the first Bacillus drentensis strain reported to degrade APAP. A 20-L batch reactor was employed for degradation of APAP. The maximum specific growth rate (μmax) was observed at an APAP concentration of 400 mg/L. The pilot-scale anaerobic batch reactor was stable and self-buffered. Degradation in the pilot-scale reactor was slower than in the batch experiments due to fluctuations in pH and exhaustion of nutrients. Design-Expert® software was used to optimize the conditions for APAP degradation: temperature (40 °C), pH (7.0), APAP concentration (300 mg/L) and agitation speed (165 rpm). FTIR and GC–MS were used to identify the degradation metabolites. Degradation intermediates such as 2-isopropyl-5-methylcyclohexanone and phenothiazine were observed, and based on these results the metabolic pathway has been predicted. The optimization, kinetic, batch and pilot studies indicate the potential of Bacillus drentensis strain S1 for degradation of acetaminophen. The experimental design, optimization and statistical analysis were performed with Design-Expert® software. The optimal growth conditions for Bacillus drentensis strain S1 were found to be a temperature of 40 °C, pH 7, an acetaminophen concentration of 300 mg/L and an agitation speed of 165 rpm. GC–MS and FTIR were used for identification of metabolites produced during acetaminophen degradation, and a partial metabolic pathway for degradation of acetaminophen was also proposed. Pharmaceutical pollutants, viz. analgesics, anti-inflammatory drugs and non-steroidal anti-inflammatory drugs (NSAIDs), psychiatric drugs, β-blockers, other illicit drugs, etc., are classified as emerging organic contaminants (EOCs). They are detected in rivers, lakes, ground water, marine and various other aquatic ecosystems (Wu et al. 2012). These compounds are designed to cure diseases in humans and animals. Presently, many of these compounds are detected worldwide at concentrations ranging from ng/L to μg/L. The concentration of these compounds is increasing day by day due to high rates of production and consumption. This has led to adverse effects on ecosystems because of their biologically active nature (Onesios et al. 2009). One of the most frequently detected pharmaceutical and personal care products (PPCPs) in treated and untreated waste water is acetaminophen (APAP), commonly known as paracetamol and chemically N-acetyl-para-aminophenol. It is one of the most widely sold over-the-counter medicines, produced and prescribed in large quantities. APAP is prescribed as a fever reducer, pain reliever, analgesic and antipyretic drug (Chandrasekharan et al. 2002). APAP is metabolized in the liver of the patient, but metabolism is incomplete and part of the dose is excreted. The excreted fraction varies from person to person, with an average between 60 and 70% (Khetan and Collins 2007).
APAP has chemical properties, such as high solubility and a hydrophilic nature, that enable its entry into the aquatic environment. It is also one of the most frequently identified PPCPs detected in water resources around the globe. Its entry into the environment leads to the production of intermediate metabolites, and these metabolites can be more toxic than the parent molecule and have more adverse effects in the environment (Balakrishna et al. 2017). APAP is frequently detected in waste water treatment plants (WTPs) commonly used for the treatment of domestic waste water. The concentration of APAP in these WTPs is 40 times higher in developing countries like India due to the high population (Balakrishna et al. 2017). The presence of APAP and its metabolites in drinking water resources has raised issues related to human health. Presently, there is no evidence available demonstrating adverse effects of dissolved APAP on human health (Minto et al. 1997). However, high therapeutic use of APAP can lead to nephrotoxicity and teratogenic effects, and the use of high concentrations of APAP is reported to cause adverse effects on the human liver (Khan et al. 2006). Chlorination is the most commonly used method for drinking water treatment in many developing countries, and a higher emission and mass load of PPCPs has been reported in India (Subedi et al. 2017). During chlorination, APAP is transformed into various intermediates, such as N-acetyl-p-benzoquinoneimine (NAPQI) and 1,4-benzoquinone, and these transformed compounds have higher toxicity than APAP (Bedner and MacCrehan 2006). Nowadays, WTPs in many countries use advanced methods to remove such compounds, including electrochemical treatment (Brillas et al. 2005, Waterston et al. 2006); ozonation; H2O2/UV and H2O2/Fe2+/UV oxidation of waste water (Skoumal et al. 2006; Vogna et al. 2002); and semiconductor photo-catalysis. These technologies have high operating costs, and in some methods APAP is transformed into more toxic intermediate compounds (Yang et al. 2008, 2009). Compared with the above-mentioned methods, biodegradation is considered an eco-friendly and cost-effective option. Through biodegradation, APAP is degraded and converted into low-molecular-weight dead-end products (Hasan et al. 2012; Chen et al. 2010). A Penicillium sp. transforms APAP into 4-aminophenol, one of the dead-end metabolites, and acetate (Hart and Orr 1975). A Rhodococcus strain degrades APAP into the intermediate compounds 4-aminophenol, hydroquinone and catechol (Ismail et al. 2017). Bart et al. (2011) proposed the metabolic pathway for the degradation of APAP by D. tsuruhatensis and P. aeruginosa, in which hydroquinone was observed as an intermediate produced during degradation. Strains like Burkholderia sp. AK-4 have the capability to degrade aminophenol into 1,2,4-trihydroxybenzene with 1,4-hydroxybenzene as an intermediate (Zhang et al. 2013). The biodegradation of APAP by soil microorganisms occurs by cleavage of the aromatic ring of APAP into 3-hydroxyacetaminophen and N-acetyl benzoquinoneimine, through mechanisms involving hydroxylation, methylation and oxygenation reactions (Li et al. 2014). Bacteria play a vital role in APAP degradation and contribute to an eco-friendly environment (Hu et al. 2013). In this study, the APAP-degrading bacterium Bacillus drentensis strain S1 was isolated by the enrichment culture method from a sample collected from highly polluted sewage.
The strain was identified by microscopy-, biochemistry- and molecular characterization-based methods. The biodegradation mediated by Bacillus drentensis strain S1 was performed in 250-mL flasks and also in a 20-L pilot-scale reactor. The metabolites produced during degradation were analyzed by FTIR- and GC–MS-based methods. Chemicals used and collection of samples Acetaminophen (99% purity) was purchased from Sigma Aldrich (USA). Other chemicals used in this study were purchased from HiMedia (Mumbai, India). The sewage samples used in this study were collected from a waste water drain in Sonipat, Haryana, India (28.98°N 77.02°E). This sample contains untreated waste water, mainly from household waste and small-scale industrial discharges. These samples contain a microbial consortium with the ability to degrade different compounds present in waste water. They were presumed to be a good source of adapted microorganisms, since the compounds in such waste water are transformed by microbial metabolic activity. The samples were transported to the laboratory in a refrigerated container and stored at 4 °C. The samples were allowed to settle and the sediment containing heavy particles was discarded before use. The samples were then filtered just before use through Whatman filter paper (90 mm pore size) to remove any suspended particles. Enrichment, isolation and screening of APAP-degrading bacteria APAP biodegradation tests were performed in 250-mL conical flasks (abiotic tests) supplemented with Bushnell Haas Medium (BHM, HiMedia, Mumbai, India), APAP and waste water sample. The flasks were incubated at 37 °C in a rotary shaker at 150 rpm according to general standard conditions (including a blank test without activated waste water sample and an adsorption test using an autoclaved activated sample, etc.). Degradation was checked at regular intervals of 8 h, and 1 mL of the mixture containing the APAP-degrading strain was analyzed for pH and APAP concentration. The mixture was transferred into fresh sterilized medium containing APAP (100–500 mg/L), with the concentration increased in a stepwise manner. After 5 days, disposable plastic plates containing MSM agar medium supplemented with 100 to 500 mg/L of APAP were inoculated (this also included a blank test and an adsorption test). Strain isolation was performed by the enrichment culture technique using liquid MSM (pH 7.2 ± 0.5) in 250-mL flasks as per the method prescribed by Jameson (1961). These plates were incubated aerobically at 37 °C in a BOD incubator. After incubation for 24 h, bacterial colonies with different morphological features were sub-cultured on nutrient agar plates. Experimental design, degradation optimization studies and statistical analysis The effect of various parameters, viz. temperature, pH, shaking speed and APAP concentration, on the degradation of APAP mediated by a culture of strain S1 was studied. These experiments were performed in 100-mL flasks containing varying concentrations of MSM, APAP and biomass. Design-Expert® software (version DX 6.0.1, Stat-Ease, Minneapolis, 2005) was used to design the experiments, to estimate best-fit values of the physical parameters and to perform the statistical analysis. A Box–Behnken design (BBD) (Mirizadeh et al. 2014) with a quadratic model was used to identify the combined effect of four independent variables over the following ranges: temperature (20–60 °C), pH (5–9), agitation speed (80–250 rpm) and concentration of APAP (100–500 mg/L); a sketch of how such a design matrix can be generated is shown below.
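The following is a minimal Python sketch of how a four-factor Box–Behnken design matrix of this kind can be generated (assuming NumPy is available; the factor names, the number of centre points and the coding-to-units mapping are illustrative choices, not the authors' Design-Expert output):

```python
# Generate a four-factor Box-Behnken design in coded units (-1, 0, +1) and map it to actual ranges.
from itertools import combinations
import numpy as np

factors = {"temperature_C": (20, 60), "pH": (5, 9), "agitation_rpm": (80, 250), "APAP_mg_L": (100, 500)}
k = len(factors)

runs = []
for i, j in combinations(range(k), 2):       # each pair of factors at +/-1, all others at 0
    for a in (-1, 1):
        for b in (-1, 1):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
runs += [[0] * k for _ in range(5)]          # centre points (5 assumed here)

coded = np.array(runs, dtype=float)
lows = np.array([lo for lo, hi in factors.values()])
highs = np.array([hi for lo, hi in factors.values()])
actual = lows + (coded + 1) / 2 * (highs - lows)   # -1 -> low, 0 -> midpoint, +1 -> high
print(coded.shape)                           # (29, 4)
```

With five centre points this gives 24 + 5 = 29 runs, matching the 29 experiments reported for the design, and the centre point corresponds to 40 °C, pH 7, 165 rpm and 300 mg/L, the midpoints of the stated ranges.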
These three-level BBD designs were used for fitting response surfaces to obtain the best values for degradation of APAP for the different variables based on a second-order polynomial model. Each factor was coded at three levels, and 29 experiments were performed in shake flasks set at the previously fixed conditions. Thereafter, statistically significant values were analyzed with Design-Expert® software. Response surface methodology (RSM) plots were used to identify the parameters for optimization of the specific growth conditions. Analysis of variance (ANOVA) and multiple regression analysis were used to determine the best-fit response using the quadratic model in the Box–Behnken design (BBD). The data were analyzed using various statistical parameters: F-value, degrees of freedom (DF), sum of squares (SS), coefficient of variation (CV) and regression coefficient (R2). These parameters provide the statistical basis for assessing the significance of the BBD/quadratic models for the optimization of biodegradation. The resultant data were used to plot the response surface curves (Wang et al. 2017). Identification and molecular characterization of isolate S1 The primary characterization of isolate S1 was done by Gram staining. The KB013 kit (HiMedia, Mumbai, India) was used for biochemical identification of the isolate. The genomic DNA of isolate S1 was isolated by the alkaline lysis method (Wilson 2001). Qualitative analysis of the DNA was done on a 1.0% agarose gel by electrophoresis (BioRad, USA). Amplification of the isolate's genomic DNA was done by the polymerase chain reaction (PCR) using universal 16S rRNA primers (27F and 1492R). The amplified PCR product was then sequenced (Eurofins Genomics India Pvt Ltd.). The 16S rRNA sequence obtained was analyzed for the presence of any chimeric sequence with DECIPHER version 1.12.2, an online bioinformatics tool. The Nucleotide Basic Local Alignment Search Tool (BLASTn) was further used to compare the isolate S1 sequence to the GenBank database of the National Center for Biotechnology Information (NCBI). Similar sequences were aligned with MUSCLE. The neighbor-joining (NJ) method was used to construct the phylogenetic tree using MEGA 7.0 software. The UPGMA clustering method was used for analysis of genetic variance between the sequences (Kumar et al. 2016b). The evolutionary history of isolate S1 was inferred using the neighbor-joining method (Saitou and Nei 1987). Biodegradation study in pilot reactor Batch experiment and kinetic study A colorimetric method was used to determine the degradation percentage of APAP by isolate S1. A 500-µL aliquot of the degraded sample was mixed with 1.0 mL of 15% trichloroacetic acid (TCA) and centrifuged until a clear supernatant was formed. The supernatant was then transferred to another tube containing 0.5 mL of 6 N HCl and sodium nitrite (0.4 mL). This reaction produced nitrous acid, which was neutralized by adding 15% sulfamic acid, and finally 15% sodium hydroxide (NaOH) was added. A UV spectrophotometer was used to measure the absorbance of the final mixture at 254 nm against water as blank (Shervington and Sakhnini 2000; Shihana et al. 2010). The degradation percentage (R) of APAP was calculated by Eq. 1: $$R = \frac{C_{0} - C_{t}}{C_{0}} \times 100.$$ Here \(C_{0}\) is the absorbance at the initial concentration of APAP and \(C_{t}\) is the absorbance after incubation time 't'.
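A minimal helper implementing Eq. 1 (illustrative Python only; the variable names are ours, not part of the original protocol):

```python
# Percentage removal of APAP from absorbance readings at 254 nm (Eq. 1).
def degradation_percentage(A0: float, At: float) -> float:
    """Return R = (A0 - At) / A0 * 100, where A0 is the initial absorbance
    and At is the absorbance after incubation time t."""
    if A0 <= 0:
        raise ValueError("Initial absorbance must be positive")
    return (A0 - At) / A0 * 100.0

print(degradation_percentage(0.82, 0.41))  # example reading pair giving ~50% removal
```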
Kinetic studies were performed using the same anaerobic reaction conditions mentioned above for isolate S1, with 100 to 2200 mg/L of APAP added in a stepwise manner. The mixture was sampled at regular intervals of 4 h, and cell growth in the mixture was measured with a UV spectrophotometer as optical density (OD600). The biomass dry weight was determined by filtering the cell suspension through a 0.2-μm-pore filter and then drying the filter to a constant weight for 24 h at 80 °C. A time course of APAP biodegradation (APAP concentration vs. reaction time) was plotted. The experimental results were fitted to various models, and Haldane's growth kinetics model gave the best fit. This growth model is a modified version of the Monod kinetics model. The kinetic analysis of the growth data was fitted to Eq. 2: $$\mu = \mu_{\text{max}} \frac{S}{K_{s} + S + S^{2}/K_{i}}.$$ Here, 'μ' is the specific growth rate of strain S1; μmax is the maximum specific growth rate of the strain; 'S' is the concentration of APAP (mg/L); 'Ks' is the half-saturation constant (mg/L) and 'Ki' is the inhibition constant (mg/L). To determine the yield coefficient of strain S1, linear regression was used according to Eq. 3: $$X \, - \, X_{0} = \, Y_{X/S} \left( S_{0} - \, S \right),$$ where 'X' and 'X0' are the biomass of strain S1 at time 't' and the initial biomass concentration of strain S1 (mg/L), respectively, and 'S' and 'S0' are the APAP concentration after time 't' and the initial concentration of APAP (mg/L), respectively. The data analysis software Origin 2017 was used to analyze the experimental data on a daily basis, based on the values of biomass concentration over time fitted to the above equation. To perform the batch experiment, waste water was collected from Sonipat, Haryana (India). The effluent had an unpleasant smell and was black in color. The batch experiment was performed with sterilized waste water supplemented with APAP (400 mg/L) in a 2-L flask inoculated with isolate S1. No other nutrients were added in the batch experiment. Degradation was observed at regular intervals of 4 h. The process of anaerobic biodegradation of APAP (400 mg/L) was also studied in a fabricated 20-L pilot lab-scale reactor (Fig. 1e). The pilot unit was set up with waste water collected from the sewage. The waste water was transferred to the laboratory and used immediately. The COD, odor, color and pH were observed before and during the reactor study. A UV-spectrophotometer-based method was used for COD detection. The reactor was fed with the sterilized waste water and distilled water (60:40); the absorbance was noted at UV 244, and the feed was supplemented with MSM (4 g/L) essential for microbial growth (Hesnawi et al. 2014; Joss et al. 2006). An overnight-grown culture of strain S1 was inoculated into the reactor. Samples were withdrawn at 5-day intervals and observations were recorded for 30 days. The samples were analyzed for physiological changes in the color of the medium during degradation, pH, COD, biomass and the extent of biodegradation. Biodegradation of APAP by bacterial strain S1 in the batch culture study and morphology of the colony observed after 48 h on a plate of mineral salt medium (MSM) containing APAP.
a The strain S1 inoculated in BHM supplemented with 5 mg/mL of APAP; b after 48 h of incubation the appearance of the medium changes from colorless to light brown; c similarly, after 5 days of incubation the color changed from light brown to black; d morphology of the colony observed after 48 h on a plate of mineral salt medium (MSM) containing APAP; e pilot-scale anaerobic reactor used for biodegradation of APAP Identification of metabolites The analysis of the degradation products in the shake-flask study was performed by Fourier transform-infrared spectroscopy (FTIR) and gas chromatography–mass spectrometry (GC–MS) based techniques. This led to the identification of metabolic intermediates produced during degradation. FTIR spectroscopy has also been a preferred choice for minimizing the environmental issues associated with industrial chemical waste, as it does not require much solvent. The structure of paracetamol contains different functional groups including –NH, O–H, C=O and an aromatic ring containing C=C. The band appearing for C=O was selected for quantification, as the interference arising from the excipients in pharmaceutical formulations is minimal in this region (Mallah et al. 2015). A 5-mL degraded sample was filtered with Whatman paper (90 mm pore size) and centrifuged at 4000 rpm for 20 min. The supernatant was then transferred to a fresh tube containing an equal volume of hexane and mixed by gentle shaking (Granberg and Rasmuson 1999). This tube was again centrifuged at 4000 rpm for 20 min. After the second centrifugation, two separate layers appeared. The upper layer was transferred to a fresh Eppendorf tube. This sample was stored at 4 °C and used for FTIR and GC–MS analysis. FTIR was performed on a Bruker instrument at 250–8000 cm−1. The hexane-extracted metabolites were analyzed by GC–MS (Shimadzu-QP-2010 plus thermal desorption system T-20) at the Advanced Instrumentation Research Facility (AIRF), JNU New Delhi, India. The GC–MS characterization of metabolites was performed at 70 eV (electron impact mode); a DDVP (Dichlorvos) column was used at single (1.7 mL/min) and double (3.4 mL/min) flow rates; helium (99.9%) was used as the carrier gas, with an injection volume of 0.5 µL, a 10:1 split ratio, an injector temperature of 250 °C and an ion-source temperature of 280 °C. The oven temperature was programmed at 110 °C for 2 min, increased at 10 °C/min to 200 °C, then at 5 °C/min to 280 °C, and held at 280 °C for 9 min. The 70 eV mass spectra were acquired with a scanning interval of 0.5 s over fragments of 40 to 550 Da. PathPred, an online web-based tool, was used to predict pathways from the query compounds detected by GC–MS analysis of the sample. The server presents all the possible reactions in the form of a tree of predicted reaction pathways. Isolation and screening of APAP-degrading isolates The conical flasks containing the control, test and abiotic samples were incubated at 37 °C in a BOD incubator with a shaking speed of 150 rpm. A color change from white to brown and then black was recorded visually after 2 and 5 days in the flask containing the active sewage sample (Fig. 1a–d). No physiological change was reported in the control and test flasks. The inoculum was grown on plates containing MSM supplemented with APAP; the plates were then incubated at 37 °C in a BOD incubator for 48 h.
The color of the media changed to black after 24 h and no individual colonies were seen on the plate. A loopful of bacteria growing on the plate was taken and cultured on a plate containing MSM and APAP. After 48 h, individual bacterial colonies were observed and the medium around the colonies turned brown/black (Fig. 1b, c). The physical appearance of all the colonies recovered was similar. An individual colony was re-cultured on nutrient agar plates (Fig. 1d). After incubation, a white, creamy colony was observed on the nutrient agar medium and this isolate was named isolate S1. Isolate S1 does not require any other growth factors or supplements to grow in nutrient broth. It grows as a monospecies on MSM supplemented with APAP as sole carbon, energy and nitrogen source. Many studies have reported microbial species that utilize xenobiotic compounds as a source of energy; for example, Kocuria sp. strain DAB-1W and Staphylococcus sp. strain DAB-1Y (Kumar et al. 2016a) and three Bacillus sp. strains (Pannu and Kumar 2017) use gamma-HCH as carbon and energy source. Adaptation of bacterial strain S1 to APAP as carbon and nitrogen source occurred in liquid media supplemented with APAP. Factors such as APAP concentration, temperature, pH and shaking speed (in rpm) played a role in degradation, as determined through screening in shake-flask studies (data unpublished). These factors were then optimized using the BBD and response surface methodology plots. Experimental design, optimization studies and degradation analysis Design-Expert® software was used with the Box–Behnken design (BBD) and a quadratic model to predict APAP degradation. The degradation efficiency for APAP was analyzed by entering the experimental values into the model, and the predicted conditions of temperature, pH, agitation speed and APAP concentration were determined. RSM was used to determine the optimization curves for the biodegradation of APAP. Twenty-nine runs were generated for the four variables, and these 29 predicted experiments were performed in triplicate in conical flasks. The observed results for the predicted runs of APAP degradation were analyzed by the software and their responses were predicted (Table 1). Further, these experimental and predicted results were entered into the model, and RSM 3-D plots and contour curves were prepared. These 3-D plots show the effect of the physical factors on the degradation of APAP. The response surface values were fitted with the following second-order polynomial equation: $$Y = 89.20 - 2.83A + 1.58B - 5.23C + 3.67D - 33.22A^{2} - 26.10B^{2} - 14.10C^{2} - 15.23D^{2} + 2.25{\text{AB}} + 8.00{\text{AC}} + 1.25{\text{AD}} - 2.50{\text{BC}} + 2.50{\text{BD}} - 1.25{\text{CD}}$$ Table 1 Box–Behnken design matrix for Bacillus drentensis strain S1 degrading APAP Here, Y is the predicted response (APAP degradation), and A (pH), B (temperature), C (APAP concentration) and D (agitation speed) are the independent variables that ultimately decide the fate of degradation. The predicted and experimental values of APAP degradation were analyzed with the help of Design-Expert® (Table 1). The R2 describes how much of the response is explained by the experimental factors and their interactions; as shown in Fig. 2a, there was a satisfactory correlation between the experimental and predicted values. The R2 for APAP degradation was 0.9510, which means the model explains up to 95.10% of the variability in the response. The adjusted R2 of 0.9019 was high, supporting the significance of the model.
The predicted R2 of 0.7308 was in reasonable agreement with the adjusted R2. The signal-to-noise ratio is measured by the adequate precision statistic; a ratio greater than 4 is generally considered desirable, and the adequate precision ratio obtained was 14.355. The model can therefore be used to navigate the design space, and the P and F values were used to compare the significance of each coefficient (Table 2). The ANOVA for the quadratic model demonstrated that the model was significant. The interaction coefficients of the variables were not, on their own, decisive in determining the response. The P value was less than 0.0001 and the F-test significance level was less than 0.05, indicating that the model was significant; model terms with P values below 0.05 were considered highly significant. The important model terms were B, D, AB, AC, AD and BD; among the linear effects, D (agitation speed) was one of the most important factors in the optimization study, while C (APAP concentration) did not have much effect on the identified responses, and the full quadratic equation was reduced to: $$Y = 89.20 + 1.58B + 3.67D + 2.25{\text{AB}} + 8.00{\text{AC}} + 1.25{\text{AD}} + 2.50{\text{BD}}$$ Parity plot and 3D plots of RSM showing the optimization study of degradation of APAP by strain S1 under different physiological conditions. a Parity plot showing the distribution of experimental and predicted values of APAP degradation; b 3D RSM plot showing the effect of APAP concentration vs. shaking speed; c 3D RSM plot showing the effect of temperature vs. pH; d 3D RSM plot showing the effect of shaking speed vs. pH Table 2 Analysis of variance (ANOVA) for Bacillus drentensis strain S1 degrading APAP Each RSM plot shows the response as a function of two factors, with all other factors held at fixed levels, to help understand the interactions among them. The experimental versus predicted values are shown in Fig. 2a. Response surface curves for APAP degradation are shown for APAP concentration vs. agitation speed (Fig. 2b), temperature vs. pH (Fig. 2c), agitation speed vs. pH (Fig. 2d), concentration vs. temperature, and agitation speed vs. temperature. These results indicate the effect of the four independent variables: temperature, pH, APAP concentration and shaking speed. The APAP degradation rate was affected by temperature (50–60 °C) and pH (8–9). Further, the effects of APAP concentration and agitation speed on APAP degradation were also determined; lower APAP concentrations and low agitation speeds had no significant effect on the degradation predicted by the model. A temperature of 40 °C, pH 7.0, an APAP concentration of 300 mg/L and an agitation speed of 165 rpm gave the maximum degradation of APAP by strain S1 in the shake-flask study. A PPCP mixture containing APAP, salicylic acid, carbamazepine, diethyltoluamide and crotamiton has been studied in batch experiments with soil; the experimental data were described by pseudo-second-order kinetics with R2 > 0.98, and the results indicated an effect of pH on adsorption of the PPCP mixture (Foolad et al. 2015). Isolation, identification and molecular characterization of S1 strain Isolate S1 was a facultative anaerobic, Gram-positive bacterium occurring as single and paired narrow tapered rods. The colonies were cream colored at the center and produced a brownish pigment around the colony; the cells were oxidase and catalase positive, non-spore-forming and coccobacillus-shaped. Positive results were obtained for malonate, citrate, catalase and arginine utilization.
The strain showed variable biochemical results for the Voges–Proskauer reaction and nitrate reduction, while the ONPG, sucrose, mannitol, glucose, arabinose and trehalose tests were negative. Genomic DNA was extracted from a 24-h grown culture of isolate S1, and a single high-molecular-weight DNA band was observed. The 16S rDNA gene was amplified by PCR using universal primers (27F and 1492R), and a 1500-bp band was observed when the PCR product was evaluated on an agarose gel. The band was excised and purified from the gel to remove contaminants. A consensus 16S rDNA sequence was generated from forward and reverse sequencing of the PCR product. The 16S rRNA sequence was used for molecular identification of the genomic DNA extracted from isolate S1 and was checked for chimeric sequences. The sequence data were used for BLAST analysis, which placed the isolate in the genus Bacillus as Bacillus drentensis. The strain was named Bacillus drentensis strain S1 based on the partial 16S rRNA sequence, and the 1478-bp nucleotide sequence was submitted to NCBI GenBank under accession number KY623719. A phylogenetic tree of Bacillus drentensis strain S1 was constructed in MEGA 7.0; BLAST hits with the maximum identity scores, based on sequence similarity, were selected and aligned with Clustal-W. The hit table shows the similarity of our strain to other strains (Additional file 1: S1). MEGA 7.0 was used to construct the distance matrix and the phylogenetic tree of the neighboring sequences, which led to strain identification (Fig. 3). The evolutionary history of strain S1 was inferred using the Kimura 2-parameter-based maximum likelihood method. The bootstrap consensus tree was inferred from 1000 replicates, and branches corresponding to partitions reproduced in less than 50% of bootstrap replicates were collapsed. Initial trees for the heuristic search were obtained by applying the neighbor-joining (NJ) method. The NJ/BioNJ algorithms were applied to a matrix of pairwise distances estimated using the maximum composite likelihood (MCL) approach, and the topology with the superior log likelihood value was selected. This analysis was performed with 11 nucleotide sequences and 1463 positions in the final dataset using MEGA 7.0 software. The tree was constructed with the sum of branch lengths shown. The tree is drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the maximum composite likelihood method (Tamura et al. 2004) and are in units of the number of base substitutions per site. This analysis involved 16 nucleotide sequences. All positions containing gaps and missing data were eliminated. Evolutionary analyses were conducted in MEGA7 (Kumar et al. 2016b). The maximum likelihood phylogenetic tree showing the position of APAP-degrading Bacillus drentensis strain S1 (KY623719) relative to other type strains of the genus Bacillus
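For readers without access to MEGA, the distance-based part of this workflow can be approximated with Biopython; the sketch below assumes Biopython is installed and that a pre-aligned 16S rRNA FASTA file is available (the file name is a placeholder), and it uses a simple identity distance rather than the Kimura 2-parameter/MCL distances used in the study:

```python
# Build a neighbor-joining tree from an existing 16S rRNA alignment (illustrative sketch).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("16s_aligned.fasta", "fasta")   # pre-aligned sequences (placeholder file)
calculator = DistanceCalculator("identity")              # pairwise distance matrix (identity model)
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)                # neighbor-joining topology
Phylo.draw_ascii(nj_tree)
```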
strain from heavy metal-contaminated soils from a mine in Korea. The study on utilization of APAP as carbon and nitrogen sources was reported in Penicillium sp. by Hart and Orr (1975). Bart et al. (2011) reported Delftia tsuruhatensis and Pseudomonas aeruginosa that utilized APAP in membrane bioreactor (MBR), while Stenotrophomonas and Pseudomonas utilized paracetamol as energy sources (Zhang et al. 2013). The degradation of PPCPs and electricity generation were achieved by solid plain-graphite plate microbial fuel cell. The bacterial species Hydrogenophaga sp., Rubrivivax sp. and Leptothrix sp. were involved in PPCPs biodegradation (Bart et al. 2011). Biodegradation study in batch and pilot reactor During the start of degradation, no lag phase was observed up to 458.4 ± 17.8 mg/L of APAP concentration. After 48 h, decrease was observed up to 50% concentration of APAP and the color change to brown was observed after 5 days and no APAP was detected (< 100 ng/L). No extra additives were added in effluent. Kinetic analysis was done in batch experiment using strain S1 and the biomass growth data required through variation in APAP concentration and semi-logarithmic graph was plotted using these degradation data (Fig. 4a). The graph was plotted between the specific growth rate (μ) and APAP concentration. The specific growth rate for APAP-degrading isolate S1 increases with increase in concentration of APAP from 100 to 400 mg/L and further increase in concentration the specific growth rate was decreased up to 2200 mg/L and the highest growth was observed at 400 mg/L. The μmax, Ks and Ki were 0.21/h, 155 with standard error of 5% mg/L, and 315 with standard error (SE) of 5% mg/L, respectively, while R2 of 0.9827 (Fig. 4a). a Specific growth rate curve of Bacillus drentensis strain S1; b utilization of APAP as a growth substrate by Bacillus drentensis strain S1 indicated degradation of APAP, COD and pH with respect to time Based on initial concentrations, a linear plot was observed after short lag phase, this indicated that APAP acts as a substrate for the exponential growth of culture. The maximum growth rate of strain HJ1012 was observed at 315 mg/L with R2 of 0.946, respectively (Hu et al. 2013). Pseudomonas sp. strain ST-4 isolated from activated sludge samples has adapted concentrations 50 to 500 ppm (Khan et al. 2006); whereas, the Stenotrophomonas sp. f1, Pseudomonas sp. f2 and fg-2 were isolated from paracetamol-degrading aerobic aggregate with APAP concentration of 400, 2500 and 2000 mg/L, respectively (Zhang et al. 2013). The strains of P. putida, P. cepacia, and P. acidovorans were grown in MSM supplemented with acetamide or phenylacetamide (Betz and Clarke 1973). The kinetics experiments follow first-order kinetics. The half-life decreased from 1.6- to 11.7-fold on comparisons to initial values, and it was noted that APAP degraded to a greater extent (Martínez-Hernández et al. 2016). Other studies have indicated the degradation of APAP from the wide range and our strain has similar degradation behavior and degraded 300 g/L of acetaminophen (Table 3). Table 3 The reported bacterial strains with potential acetaminophen degradation The bioremediation-based processes increases the degradation by APAP by microbial strains, and this has been an alternative for chemical techniques used in conventional treatment systems (Zur et al. 2018a, b). The six microbial strains were identified as Acinetobacter, Pseudomonas, Sphingomonas and Bacillus genus, which have the potential to degrade APAP. 
But the Pseudomonas moorei KB4 was able to use 50 mg/L of APAP as carbon source (Zur et al. 2018a, b). Pharmaceutical compounds (acetaminophen, caffeine, sulfamethoxazole, naproxen and carbamazepine) are also present in natural soil (loamy sand). The soil microbes have the potential to transform such compounds. The sorption and biodegradation kinetics models describe the degradation behavior. The serial batch-type reactor having soil and water ratio of 1:4 was supplemented with pharmaceutical with 100 μg/L concentration (Martínez-Hernández et al. 2016). An anaerobic reactor was constructed on the basis of information available on several factors such as concentration of waste water, volume of inoculum, agitation speed, physical conditions, etc. (Zwain et al. 2017). The main parameter was the load rate of APAP for degradation. The anaerobic reactor reaction rate was increased in initial stage due to presence of volatile fatty acids in waste water. Higher loading rate caused many problems, like acidification and lower the efficiency of degradation (COD removal) (Liang et al. 2016; Wang et al. 2017). The anaerobic reactor degradation occurs in three stages: (1) adjustment phase, (2) initiation of degradation, and (3) steady-state degradation. The adjustment of system takes almost 20 h during which the COD (2700 mg/L) reduced slowly. The COD decreasing rate depends on the rate of degradation. The color of load and smell intensity gradually increased with rate of reaction. This shows that the S1 strain gradually degrades the APAP in the new environment. The increased pH was reported with respect to increase in degradation rate. It has been observed that after 3 days approximately 20% COD was decreased (Fig. 4b). This indicated the start of degradation, and the biomass of S1 was increased with reference to increase in time. The COD was decreased manyfold as the degradation proceeds for more time. The rate of decrease of COD was approximate 80% within 20 days. The pilot reactor was stable and self-buffered during APAP degradation. Further, the pH of the reactor changes and the nutrients are consumed in the reactor due to microbial growth and development; this anaerobic reactor degradation is slow as compared to batch reactor. The rate of degradation was increased gradually with respect to adjustment stage. The color of load turned slight blackish and the odor of the load was becoming unpleasant. These conditions assured that the microbes were slowly adapting during the adjustment stage and now the degradation of APAP was started by the S1 strain. During steady stage there was slight decrease in COD reduction wrt time. This was due to presence of low APAP concentration in the system. Almost 94.5% of APAP was degraded by the strain S1 in anaerobic conditions. The COD decreasing to manyfolds demonstrated that the S1 strain was working effectively under stress conditions. The color turned black and odor of the degraded products was unbearable. Such physical changes reflect that there was some kind of chemical reactions occurring in the strain S1 in the presence of APAP. These chemical reactions make a path to the degradation of APAP. During the whole process going within the anaerobic reactor, there was a slight fluctuation reported in pH (6.4 to 7.2) (Fig. 4b). The consortium of microalgae–bacteria has the potential to degrade ketoprofen, APAP and aspirin mixture in photobioreactors. These microbes degrade approximately 95% analgesics mixture and reduce COD (Ivshina et al. 2006). 
The microbial fuel cell having solid plain-graphite plates was used for biodegradation of APAP, ibuprofen, and sulfamethoxazole. The PPCP-containing sewage was used for microbial degradation during which electricity generated. The COD and nitrogen removal efficiencies were achieved almost 97.20% and 83.75%, during degradation of mixture. The removal efficiency was almost 98.21% to 99.89%. The microbial groups involved in degradation were Dechloromonas sp., Sphingomonas sp., and Pseudomonas aeruginosa (Chang et al. 2014). Degradation mediated by the Bacillus drentensis strain S1 was analyzed by FTIR. The alkane section was evidenced by sp2 C–H stretch band at 3098.98 cm−1. The presence of aromatic rings was evidenced by several bands in the spectrum. The alkane-like carbon monoxides have a band at 2070.97 cm−1. The characteristic C=C vibration stretch were observed at 1621.56 cm−1. The overtone combination bands appear between 3681 and 4000 cm−1. The vibrations were detected between 4000 and 2500 cm−1 region due to hydroxide (O–H), carbon–hydrogen (C–H) and nitrogen–hydrogen (N–H) stretching. The stretching produced by hydroxide has a broad range of 3700–3600 cm−1, while NH stretching is between 3400 and 3300 cm−1. R–OH stretch between 2500 and 3000 cm−1. An out-of-plane C–H bending at 736.23 cm−1 was characteristic of para-substituted aromatic compounds and at 3550–3500 cm−1 for phenol O–H stretching (Fig. 5a, b). Studies indicated that FTIR was used to determine the bonding pattern in organic solvents and biological macromolecules. Mallah et al. (2015) analyzed that acetaminophen contains functional groups like N–H, C=O, O–H and aromatic ring. Burgina et al. (2004) has calculated the potential and kinetic energies of APAP molecules (–CH3, –C=O, –NH, –C6H4, –C–O, and –O–H) and hydrogen bonds (–O–H…O and –N–H…O) in the experimental spectra. They indicated OH, NH and CH bonds between the spectra of 2800–3600 cm−1. The variation in bond suggested that the Bacillus drentensis strain S1 degraded acetaminophen. Analysis of chemical bonds produced during APAP degradation by Bacillus drentensis strain S1 by FTIR: a control sample FTIR of APAP; b FTIR of sample after degradation of APAP Biodegradation of APAP gives many intermediate metabolites which were analyzed in a hexane extract by GC–MS. The gas chromatogram showed 66 peaks and the each peak indicates a single metabolite eluting close together (retention times of 17.709 to 51.510). Mass spectroscopy generates a library in the form of gas chromatogram peaks (Fig. 6a). The results of GC–MS indicated that, there was no residue of APAP existed in the batch culture. This indicated that Bacillus drentensis strain S1 utilizes APAP as energy source and has ability to remove APAP from the waste water. The metabolites of the APAP catabolic pathway were conclusively identified as oxalic acid, 2-isopropyl-5-methylcyclohexanone, and phenothiazine based on peak and retention time (RT) of sample (Fig. 6b, c). a Chromatogram analysis of metabolites produced during degradation of APAP by GC–MS; b mass peak of 2-isopropyl-5-methylcyclohexanone.; c mass peak of phenothiazine After batch incubation study, the intermediates were obtained during degradation of APAP by the strain S1 analyzed by GC–MS. The derivatives obtained by mass spectra library were confirmed by published reports on APAP. The degradation of APAP analyzed by GC–MS led to the identification of compounds as oxalic acid and 2-isopropyl-5-methylcyclohexanone. 
These were identified on the basis of query run on PathPred server (Additional file 2: S2). This server predicts pathways from a query compounds (Additional file 3: S3). Oxalic acid and 2-isopropyl-5-methylcyclohexanone were detected as intermediates and GC–MS chromatogram in degrading sample mediated by S1 strain (Fig. 7). The retention time for oxalic acid and 2-isopropyl-5-methylcyclohexanone was 45.372 and 45.003, respectively (Table 4). Moreover, the Bacillus drentensis strain S1 shows mineralization of nitrogen of APAP to nitrites and nitrates. These compounds may also be indicators for APAP degradation and mineralization. Predicted metabolic pathway for degradation of APAP by Bacillus drentensis strain S1 strain Table 4 Details of peaks identified during GC–MS analysis of acetaminophen degradation by Bacillus drentensis strain S1 The metabolic pathway for APAP degradation by microbes was proposed via formation of 4-aminophenol to hydroquinone, which was considered as the major degradation route for APAP biodegradation. In hydroquinone pathway APAP was converted into 1,4-benzenediol or hydroquinone. The amino group of APAP was replaced by hydroxyl group during catalysis by Amidohydrolase which formulates hydroquinone. The hydroquinone subsequently transformed into fission rings while hydrolytic enzymes potentially catalyzed the hydroxylation of APAP during degradation, and potentially released acetamide to form hydroquinone which was further converted into 2-isopropyl-5-methylcyclohexanone and oxalic acid. The hydroquinone pathway and hydrolytic enzyme pathway for APAP degradation was purposed by Zhang et al. (2013). Other possible pathway for APAP degradation the hydroxylase oxidized APAP to form N-(4-hydroxyphenyl)-acetamide by the replacement of hydrogen to produce p-aminophenol. The Burkholderia sp. strain AK-5 follows the hydroquinone and pyrocatechol degradation pathway (Takenaka et al. 2003). APAP form dimer and trimer during elimination pathway in dark incubation (Liang et al. 2016) intermediate products p-aminophenol and hydroquinone were recognized by LC/MS. Physical factors like pH, temperature do not have a prominent effect on the degradation mediated by KB4 strain (Zur et al. 2018a, b). Acetaminophen is a common drug used extensively as a pain reliever and a fever reducer. Due to its regular use its concentration gets increased in our environment gradually. Therefore, there is an urgent need to eliminate such contaminants from the environment. Biodegradation is one of the best approaches to remediate such contaminants. Bacillus drentensis strain S1 is an isolate identified from sewage sample having the potential to remove acetaminophen from waste water. This strain also degrades and tolerates the wide range of APAP concentration. The degrading potential of strain in anaerobic pilot-scale reactor was slow compared to the batch reactor. It is due to the fluctuation in pH and exhaustion of nutrients. We cannot underestimate the potential of strain, because it is an eco-friendly and cost-effective approach for degradation of APAP. It also has ability to degrade the APAP as sole carbon and energy source. Acetaminophen degradation pathways gave us indication of kind of intermediates produced during biodegradation, but sometimes biotransformation/biodegradation leads to production of dead-end metabolites. Therefore, the long-term prospective for the acetaminophen removal from the environment is needed and Bacillus drentensis strain S1 can play the major role. 
The Bacillus drentensis strain S1 strain has the potential to lead in acetaminophen removal from various water sources. With the aid of modern technology using nanotechnology, biosensors, and deploying better degrading strains, etc., can increase the degradation potential of harmful compounds in the environment. The datasets supporting this article are included in the manuscript. PPCPs: Pharmaceuticals and personal care products GC–MS: Gas chromatography–mass spectroscopy RSM: FT-IR: Fourier transform-infrared spectroscopy BBD: Box–Behnken design NSAIDs: EOCs: Emerging organic contaminants WTPs: Waste water treatment plants Balakrishna K, Rath A, Praveenkumarreddy Y, Guruge KS, Subedi B (2017) A review of the occurrence of pharmaceuticals and personal care products in Indian water bodies. Ecotoxicol Environ Saf 137:113–120. https://doi.org/10.1016/J.ECOENV.2016.11.014 Bart DG, Vanhaecke L, Verstraete W, Boon N (2011) Degradation of acetaminophen by Delftia tsuruhatensis and Pseudomonas aeruginosa in a membrane bioreactor. Water Res 45(4):1829–1837. https://doi.org/10.1016/J.WATRES.2010.11.040 Bedner M, MacCrehan WA (2006) Transformation of acetaminophen by chlorination produces the toxicants 1,4-benzoquinone and N-acetyl-p-benzoquinone imine. Environ Sci Technol 40(2):516–522. https://doi.org/10.1021/es0509073 Betz JL, Clarke PH (1973) Growth of pseudomonas species on phenylacetamide. J Gen Microbiol 75:167–177. https://doi.org/10.1099/00221287-75-1-167 Brillas E, Sirés I, Arias C, Cabot PL, Centellas F, Rodríguez RM, Garrido JA (2005) Mineralization of paracetamol in aqueous medium by anodic oxidation with a boron-doped diamond electrode. Chemosphere 58(4):399–406. https://doi.org/10.1016/J.CHEMOSPHERE.2004.09.028 Burgina EB, Baltakhinov VP, Boldyreva EV, Shakhtschneider TP (2004) IR spectra of paracetamol and phenacetin. 1. Theoretical and experimental studies. J Struct Chem 45:64–73. https://doi.org/10.1023/B:JORY.0000041502.85584.d5 Chandrasekharan NV, Dai H, Roos KLT, Evanson NK, Tomsik J, Elton TS, Simmons DL (2002) COX-3, a cyclooxygenase-1 variant inhibited by acetaminophen and other analgesic/antipyretic drugs: cloning, structure, and expression. Proc Natl Acad Sci 99(21):13926–13931. https://doi.org/10.1073/pnas.162468699 Chang YT, Yang CW, Chang YJ, Chang TC, Wei DJ (2014) The treatment of PPCP-containing sewage in an anoxic/aerobic reactor coupled with a novel design of solid plain graphite-plates microbial fuel cell. Biomed Res Int 2014:765652. https://doi.org/10.1155/2014/765652 Chen CY, Chen SC, Fingas M, Kao CM (2010) Biodegradation of propionitrile by Klebsiella oxytoca immobilized in alginate and cellulose triacetate gel. J Hazard Mater 177:856—863. https://doi.org/10.1016/j.jhazmat.2009.12.112 Edrees WH, AL-Kaf AG, Abdullah QY, Naji KM (2018) Isolation and identification of a new bacterial strains degrading Paracetamol isolated from Yemeni Environment. Clin Biotechnol Microbiol 1(6):257–270. https://www.scientiaricerca.com/srcbmi/SRCBMI-01-00039.php%0A https://www.scientiaricerca.com/srcbmi/pdf/SRCBMI-01-00039.pdf Foolad M, Hu J, Tran NH, Ong SL (2015) Sorption and biodegradation characteristics of the selected pharmaceuticals and personal care products onto tropical soil. Water Sci Technol 73(1):51–59. https://doi.org/10.2166/wst.2015.461 Granberg RA, Rasmuson ÅC (1999) Solubility of paracetamol in pure solvents. J Chem Eng Data 44(6):1391–1395. 
https://doi.org/10.1021/je990124v Hart A, Orr DLJ (1975) The degradation of paracetamol (4-hydroxyacetanilide) and other substituted acetanilides by a Penicillium species. Antonie Van Leeuwenhoek 41(1):239–247. https://doi.org/10.1007/BF02565059 Hasan Z, Jeon J, Jhung SH (2012) Adsorptive removal of naproxen and clofibric acid from water using metal-organic frameworks. J Hazard Mater 209–210:151–157. https://doi.org/10.1016/j.jhazmat.2012.01.005 Hesnawi R, Dahmani K, Al-Swayah A, Mohamed S, Mohammed SA (2014) Biodegradation of municipal waste water with local and commercial bacteria. Procedia Eng 70:810–814. https://doi.org/10.1016/J.PROENG.2014.02.088 Heyrman J, Vanparys B, Logan NA, Balcaen A, Rodríguez-Díaz M, Felske A, De Vos P (2004) Bacillus novalis sp. nov., Bacillus vireti sp. nov., Bacillus soli sp. nov., Bacillus bataviensis sp. nov. and Bacillus drentensis sp. nov., from the Drentse A grasslands. Int J Syst Evol Microbiol 54(1):47–57. https://doi.org/10.1099/ijs.0.02723-0 Hu J, Zhang LL, Chen JM, Liu Y (2013) Degradation of paracetamol by Pseudomonas aeruginosa strain HJ1012. J Environ Sci Health Part A 48(7):791–799. https://doi.org/10.1080/10934529.2013.744650 Ismail MM, Essam TM, Ragab YM, El-Sayed AE-KB, Mourad FE (2017) Remediation of a mixture of analgesics in a stirred-tank photobioreactor using microalgal-bacterial consortium coupled with attempt to valorise the harvested biomass. Bioresour tTechnol 232:364–371. https://doi.org/10.1016/j.biortech.2017.02.062 Ivshina IB, Rychkova MI, Vikhareva EV, Chekryshkina LA, Mishenina II (2006) Catalysis of the biodegradation of unusable medicines by Alkanotrophic rhodococci. Appl Biochem Microbiol 42(4):392–395. https://doi.org/10.1134/S0003683806040090 Jameson JE (1961) A study of tetrathionate enrichment techniques, with particular reference to two new tetrathionate modifications used in isolating salmonellae from sewer swabs. J Hyg 59(1):1–13. https://doi.org/10.1017/S0022172400038663 Joss A, Zabczynski S, Göbel A, Hoffmann B, Löffler D, McArdell CS et al (2006) Biological degradation of pharmaceuticals in municipal waste water treatment: proposing a classification scheme. Water Res 40(8):1686–1696. https://doi.org/10.1016/J.WATRES.2006.02.014 Khan AS, Hamayun M, Ahmed S (2006) Degradation of 4-aminophenol by newly isolated Pseudomonas sp. strain ST-4. Enzyme Microb Technol 38(1–2):10–13. https://doi.org/10.1016/j.enzmictec.2004.08.045 Khetan SK, Collins TJ (2007) Human pharmaceuticals in the aquatic environment: a challenge to green chemisty. Chem Rev 107(6):2319–2364. https://doi.org/10.1021/cr020441w Kim I, Lee M, Wang S (2014) Heavy metal removal in groundwater originating from acid mine drainage using dead Bacillus drentensis sp. immobilized in polysulfone polymer. J Environ Manage 146:568–574. https://doi.org/10.1016/J.JENVMAN.2014.05.042 Kumar D, Kumar A, Sharma J (2016a) Degradation study of lindane by novel strains Kocuria sp. DAB-1Y and Staphylococcus sp. DAB-1W. Bioresour Bioprocess. https://doi.org/10.1186/s40643-016-0130-8 Kumar S, Stecher G, Tamura K (2016b) MEGA7: molecular evolutionary genetics analysis version 7.0 for bigger datasets. Mol Biol Evol 33(7):1870–1874. https://doi.org/10.1093/molbev/msw054 Li J, Ye Q, Gan J (2014) Degradation and transformation products of acetaminophen in soil. Water Res 49:44–52. https://doi.org/10.1016/J.WATRES.2013.11.008 Liang C, Lan Z, Zhang X, Liu Y (2016) Mechanism for the primary transformation of acetaminophen in a soil/water system. Water Res 98:215–224. 
https://doi.org/10.1016/J.WATRES.2016.04.027 Mallah MA, Sherazi STH, Bhanger MI, Mahesar SA, Bajeer MA (2015) A rapid Fourier-transform infrared (FTIR) spectroscopic method for direct quantification of paracetamol content in solid pharmaceutical formulations. Spectrochim Acta Part A Mol Biomol Spectrosc 141:64–70. https://doi.org/10.1016/j.saa.2015.01.036 Martínez-Hernández V, Meffe R, Herrera López S, de Bustamante I (2016) The role of sorption and biodegradation in the removal of acetaminophen, carbamazepine, caffeine, naproxen and sulfamethoxazole during soil contact: a kinetics study. Sci Total Environ 559:232–241. https://doi.org/10.1016/J.SCITOTENV.2016.03.131 Minto CF, Schnider TW, Egan TD, Youngs E, Lemmens HJ, Gambus PL et al (1997) Influence of age and gender on the pharmacokinetics and pharmacodynamics of remifentanil: I. Model development. Anesthesiology 86(1):10–23 Mirizadeh S, Yaghmaei S, Ghobadi Nejad Z (2014) Biodegradation of cyanide by a new isolated strain under alkaline conditions and optimization by response surface methodology (RSM). J Environ Health Sci Eng 12(1):85. https://doi.org/10.1186/2052-336X-12-85 Mutnur S (2014) Bioremediation of paracetamol from industrial waste water by Pseudomonas mendocina. In: 12th Specialized conference on small water and waste water systems and 4th specialized conference on resources oriented sanitation, November 2–4, 2014 Muscat, Sultanate of Oman Onesios KM, Yu JT, Bouwer EJ (2009) Biodegradation and removal of pharmaceuticals and personal care products in treatment systems: a review. Biodegradation 20(4):441–466. https://doi.org/10.1007/s10532-008-9237-8 Pannu R, Kumar D (2017) Process optimization of γ-hexachlorocyclohexane degradation using three novel Bacillus sp. strains. Biocatal Agric Biotechnol 11:97–107. https://doi.org/10.1016/J.BCAB.2017.06.009 Saitou N, Nei M (1987) The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol 4:406–425 Shervington LA, Sakhnini N (2000) A quantitative and qualitative high performance liquid chromatographic determination of acetaminophen and five of its para-substituted derivatives. J Pharm Biomed Anal 24(1):43–49. https://doi.org/10.1016/S0731-7085(00)00396-4 Shihana F, Dissanayake D, Dargan P, Dawson A (2010) A modified low-cost colorimetric method for paracetamol (acetaminophen) measurement in plasma. Clin Toxicol 48(1):42–46. https://doi.org/10.3109/15563650903443137 Skoumal M, Cabot P-L, Centellas F, Arias C, Rodríguez RM, Garrido JA, Brillas E (2006) Mineralization of paracetamol by ozonation catalyzed with Fe2+ , Cu2+ and UVA light. Appl Catal B 66(3–4):228–240. https://doi.org/10.1016/J.APCATB.2006.03.016 Subedi B, Balakrishna K, Joshua DI, Kannan K (2017) Mass loading and removal of pharmaceuticals and personal care products including psychoactives, antihypertensives, and antibiotics in two sewage treatment plants in southern India. Chemosphere 167:429–437. https://doi.org/10.1016/J.CHEMOSPHERE.2016.10.026 Takenaka S, Murakami S, Shinke R, Aoki K (1998) Metabolism of 2-aminophenol by Pseudomonas sp. AP-3: modified meta-cleavage pathway. Arch Microbiol 170(2):132–137. https://doi.org/10.1007/s002030050624 Takenaka S, Okugawa S, Kadowaki M, Murakami S, Aoki K (2003) The metabolic pathway of 4-aminophenol in Burkholderia sp strain AK-5 differs from that of aniline and aniline with C-4 substituents. Appl Environ Microbiol 69(9):5410–5413. 
https://doi.org/10.1128/aem.69.9.5410-5413.2003 Tamura K, Nei M, Kumar S (2004) Prospects for inferring very large phylogenies by using the neighbor-joining method. Proc Natl Acad Sci USA 101:11030–11035 Vogna D, Marotta R, Napolitano A, Ischia M (2002) Advanced oxidation chemistry of paracetamol. UV/H2O2-induced hydroxylation/degradation pathways and 15N-aided inventory of nitrogenous breakdown products. J Org Chem 67(17):6143–6151. https://doi.org/10.1021/jo025604v Wang Y, Wang Q, Wang Y, Han H, Hou Y, Shi Y (2017) Statistical optimization for the production of recombinant cold-adapted superoxide dismutase in E. coli using response surface methodology. Bioengineered 8(6):693–699. https://doi.org/10.1080/21655979.2017.1303589 Waterston K, Wang JW, Bejan D, Bunce NJ (2006) Electrochemical waste water treatment: electrooxidation of acetaminophen. J Appl Electrochem 36(2):227–232. https://doi.org/10.1007/s10800-005-9049-z Wei F, Zhou Q-W, Leng SQ (2011) Isolation, identification and biodegradation characteristics of a new bacterial strain degrading paracetamol. Chin J Environ Sci 32(6):1812–1819 Wilson K (2001) Preparation of genomic DNA from bacteria. Curr Protoc Mol Biol 56(1):2.4.1–2.4.5. https://doi.org/10.1002/0471142727.mb0204s56 Wu S, Zhang L, Chen J (2012) Paracetamol in the environment and its degradation by microorganisms. Appl Microbiol Biotechnol 96(4):875–884. https://doi.org/10.1007/s00253-012-4414-4 Yang L, Yu LE, Ray MB (2008) Degradation of paracetamol in aqueous solutions by TiO2 photocatalysis. Water Res 42(13):3480–3488. https://doi.org/10.1016/J.WATRES.2008.04.023 Yang D, Liu H, Zheng Z, Yuan Y, Zhao JC, Waclawik ER et al (2009) An efficient photocatalyst structure: TiO2(B) nanofibers with a shell of anatase nanocrystals. J Am Chem Soc 131(49):17885–17893. https://doi.org/10.1021/ja906774k Zhang L, Hu J, Zhu R, Zhou Q, Chen J (2013) Degradation of paracetamol by pure bacterial cultures and their microbial consortium. Appl Microbiol Biotechnol 97(8):3687–3698. https://doi.org/10.1007/s00253-012-4170-5 Żur J, Piński A, Marchlewicz A, Hupert-Kocurek K, Wojcieszyńska D, Guzik U (2018a) Organic micropollutants paracetamol and ibuprofen—toxicity, biodegradation, and genetic background of their utilization by bacteria. Environ Sci Pollut Res 25(22):21498–21524. https://doi.org/10.1007/s11356-018-2517-x Żur J, Wojcieszyńska D, Hupert-Kocurek K, Marchlewicz A, Guzik U (2018b) Paracetamol toxicity and microbial utilization. Pseudomonas moorei KB4 as a case study for exploring degradation pathway. Chemosphere 206:192–202. https://doi.org/10.1016/J.CHEMOSPHERE.2018.04.179 Zwain HM, Aziz HA, Ng WJ, Dahlan I (2017) Performance and microbial community analysis in a modified anaerobic inclining-baffled reactor treating recycled paper mill effluent. Environ Sci Pollut Res 24(14):13012–13024. https://doi.org/10.1007/s11356-017-8804-0 The authors acknowledge the sample analysis for FTIR at Central Instrumentation Laboratory (CIL), DCRUST Murthal Sonepat India, DNA sequencing at Eurofins Genomics India Pvt Ltd, Advanced Instrumentation Research Facility (AIRF), JNU New Delhi, India, for GC–MS analysis. The author S. Chopra wish to thank UGC New Delhi, India, for providing research assistantship in the form of RGNF fellowship. There is no external funding received to carry out this research. Authors wish to thank the Department of Biotechnology, DCRUST Murthal Sonipat India, for providing the necessary facilities to carry out this research. 
Department of Biotechnology, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Sonepat, 131039, Haryana, India Sunil Chopra & Dharmender Kumar SC conducted the experiments, and DK helped with the experiments and supervised the research. Both authors prepared the manuscript. Both authors read and approved the final manuscript. Correspondence to Dharmender Kumar. Ethical approval and consent to participate Both authors approved this manuscript. Additional file 1: S1. Hit table of the phylogenetic analysis of Bacillus drentensis strain S1. Additional file 2: S2. Predicted pathway of degradation of APAP by Bacillus drentensis strain S1, predicted by the PathPred online tool with KEGG compound numbers. Additional file 3: S3. Different intermediates predicted through the PathPred online tool, with structures of the predicted compounds. Chopra, S., Kumar, D. Characterization, optimization and kinetics study of acetaminophen degradation by Bacillus drentensis strain S1 and waste water degradation analysis. Bioresour. Bioprocess. 7, 9 (2020). https://doi.org/10.1186/s40643-020-0297-x Bacillus drentensis strain S1 Molecular characterization Kinetic study
An improved understanding of how vortices develop and propagate under pulsatile flow can shed important light on the mixing and transport processes, including the transition to the turbulent regime, occurring in such systems. For example, the characterization of pulsatile flows in obstructed artery models serves to encourage research into flow-induced phenomena associated with changes in morphology, blood viscosity, wall elasticity and flow rate. In this work, an axisymmetric rigid model was used to study the behaviour of the flow pattern with varying constriction degree ($d_0$), Reynolds number ($Re$) and Womersley number ($\alpha$). Velocity fields were acquired experimentally using Digital Particle Image Velocimetry and generated numerically. For the acquisition of data, $Re$ was varied from 953 to 2500, $d_0$ was 1.0 cm and 1.6 cm, and $\alpha$ was fixed at 33.26 in the experiments and varied from 15 to 50 in the numerical simulations. Results for the Reynolds numbers considered showed that the flow pattern consists of two main structures: a central jet around the tube axis and a recirculation zone adjacent to the inner wall of the tube, where vortices are shed. Using the vorticity fields, the trajectories of the vortices were tracked and their displacements over their lifetimes calculated. The analysis led to a scaling law equation for the maximum vortex displacement as a function of a dimensionless variable dependent on the system parameters $Re$ and $\alpha$.

The linear amplification mechanisms leading to streamwise-constant large-scale structures in laminar and turbulent channel flows are considered. A key feature of the analysis is that the Orr--Sommerfeld and Squire operators are each considered separately. Physically this corresponds to considering two separate processes: (i) the response of wall-normal velocity fluctuations to external forcing; and (ii) the response of streamwise velocity fluctuations to wall-normal velocity fluctuations. In this way we exploit the fact that, for streamwise-constant fluctuations, the dynamics governing the wall-normal velocity are independent of the mean velocity profile (and therefore the mean shear). The analysis is performed for both plane Couette flow and plane Poiseuille flow, and for each we consider linear amplification mechanisms about both the laminar and turbulent mean velocity profiles. The analysis reveals two things. First, that the most amplified structures (with a spanwise spacing of approximately $4h$, where $h$ is the channel half height) are to a great extent encoded in the Orr--Sommerfeld operator alone, thus helping to explain their prevalence. Second, and consistent with numerical and experimental observations, that Couette flow is significantly more efficient than Poiseuille flow in leveraging wall-normal velocity fluctuations to produce large-scale streamwise streaks.

Parameter extension simulation (PES) is proposed in this study as a mathematical method for simulating turbulent flows. It is defined as a calculation of the turbulent flow for the desired parameter values with the help of a reference solution. A typical PES calculation is composed of three consecutive steps: (i) set up the asymptotic relationship between the desired solution and the reference solution; (ii) calculate the reference solution and the necessary asymptotic coefficients; and (iii) extend the reference solution to the desired parameter values. A controlled eddy simulation (CES) method has been developed to calculate the reference solution and the asymptotic coefficients.
The CES method is a special type of large eddy simulation (LES) method in which a weight coefficient and an artificial force distribution are used to model part of the turbulent motions. The artificial force distribution is modeled based on the eddy viscosity assumption. The reference weight coefficient and the asymptotic coefficients can be determined through a weight coefficient convergence study. The proposed PES/CES method has been used to simulate four types of turbulent flows. They are decaying homogeneous and isotropic turbulence, smooth wall channel flows, rough wall channel flows, and compressor blade cascade flows. The numerical results show that the 0-order PES solution (or the reference CES solution) has a similar accuracy as a traditional LES solution, while its computational cost is much lower. A higher order PES method has an even higher model accuracy. Dielectric particles suspended in a weakly conducting fluid are known to spontaneously start rotating under the action of a sufficiently strong uniform DC electric field due to the Quincke rotation instability. This rotation can be converted into translation when the particles are placed near a surface providing useful model systems for active matter. Using a combination of numerical simulations and theoretical models, we demonstrate that it is possible to convert this spontaneous Quincke rotation into spontaneous translation in a plane perpendicular to the electric field in the absence of surfaces by relying on geometrical asymmetry instead. The paper presents a hybrid bubble hologram processing approach for measuring the size and 3D distribution of bubbles over a wide range of size and shape. The proposed method consists of five major steps, including image enhancement, digital reconstruction, small bubble segmentation, large bubble/cluster segmentation, and post-processing. Two different segmentation approaches are proposed to extract the size and the location of bubbles in different size ranges from the 3D reconstructed optical field. Specifically, a small bubble is segmented based on the presence of the prominent intensity minimum in its longitudinal intensity profile, and its depth is determined by the location of the minimum. In contrast, a large bubble/cluster is segmented using a modified watershed segmentation algorithm and its depth is measured through a wavelet-based focus metric. Our processing approach also determines the inclination angle of a large bubble with respect to the hologram recording plane based on the depth variation along its edge on the plane. The accuracy of our processing approach on the measurements of bubble size, location and inclination is assessed using the synthetic bubble holograms and a 3D printed physical target. The holographic measurement technique is further implemented to capture the fluctuation of instantaneous gas leakage rate from a ventilated supercavity generated in a water tunnel experiment. Overall, our paper introduces a low cost, compact and high-resolution bubble measurement technique that can be used for characterizing low void fraction bubbly flow in a broad range of applications.
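A minimal sketch of the small-bubble depth criterion described above (the depth is taken at the prominent minimum of the longitudinal intensity profile) is given below. This is not the authors' code: the reconstruction stack, plane spacing and centroid location are invented for illustration, and a real pipeline would also denoise the profile before taking the minimum.

```python
import numpy as np

def small_bubble_depth(stack, row, col, z_planes):
    """Depth of a segmented small bubble from a reconstructed hologram stack
    of shape (n_z, n_y, n_x): the depth of the plane where the longitudinal
    intensity profile through the bubble centroid reaches its minimum."""
    profile = stack[:, row, col]
    k = int(np.argmin(profile))            # prominent intensity minimum along z
    return z_planes[k]

# Illustrative use with a synthetic stack: a single dark plane at z = 12 mm.
z_planes = np.linspace(0.0, 40.0, 81)      # assumed reconstruction depths (mm)
stack = np.ones((81, 64, 64))
stack[24, 32, 32] = 0.1                    # intensity dip at z_planes[24] = 12.0 mm
print(small_bubble_depth(stack, 32, 32, z_planes))   # 12.0
```

Large bubbles or clusters, as described in the abstract, would instead be segmented in the image plane and assigned a depth via a focus metric, which is not sketched here.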
Habitat suitability analysis for hippopotamus (H. amphibious) using GIS and remote sensing in Lake Tana and its environs, Ethiopia Fentanesh Haile Buruso1

This research was carried out from October 2013 to May 2014. Hippopotamus amphibius is a mammalian species distributed in lakes and rivers where the ecological requirements for its survival are fulfilled, and Lake Tana and its environs are home to the species. The species is classified as vulnerable worldwide due to habitat loss and poaching. Despite this vulnerability, no research had been conducted on the species and its environmental requirements in Ethiopia. The main objective of this study was therefore to carry out a habitat suitability analysis and identify suitable habitat sites for hippopotamus within Lake Tana and its environs using integrated GIS and remote sensing techniques. The software used in this research included ArcGIS 10.2, ERDAS IMAGINE 2010 and a virtual satellite image downloader. The data used were a 2012 SPOT image of the study area, bathymetric data of Lake Tana, a DEM, Google Earth data and GCPs. Running a suitability model requires expert estimation of weights for each individual criterion in the GIS software. On this basis, habitats in Lake Tana and its environs ranging from most suitable to not suitable for hippopotamus were identified. It was shown that 50.88% of the area under study was highly disturbed and had become unsuitable for hippopotamus, 47.29% was moderately disturbed, and only 1.81% was undisturbed. The results showed that, in and around Lake Tana, human factors outweigh the physical factors in reducing the habitat available to the animal. Only 22.54% of the study area was identified as most suitable for the species, and large portions of these areas lie behind settlements and are not easily accessible to the species, while 40.5% of the area was moderately suitable and 36.96% was unsuitable habitat. Based on these findings it was concluded that there is high human interference in hippopotamus habitats, especially at the shores of the lake, where land is sought for agricultural activities. Human activities too close to the identified hippopotamus habitats therefore need to be controlled, and a conservation buffer surrounding the lake has to be developed.

Hippopotamus (H. amphibius) is a mammalian species distributed in lakes and rivers where the ecological requirements for its survival are fulfilled. According to Eltringham (1993) and Lewison and Carter (2004), the species was once widely distributed in sub-Saharan Africa. However, studies by the International Union for Conservation of Nature and Natural Resources (IUCN 2005a, b) showed that the population has been declining over time as a result of exploitation and habitat loss, and the hippopotamus specialist group therefore re-evaluated its status to the vulnerable category on the international Red List of threatened species in 2006 (Lewison 2007; Lewison and Oliver 2008). A study by G/kidan and Teka (2006), cited in Funny (2012), points out that hippopotamus are now mainly restricted to pocket habitats of Lake Tana, in spite of their former widespread distribution. UNEP-WCMC (2010) indicated that there is no adequate countrywide information on the population size of hippopotamus in Ethiopia.
However, the same report confirms the presence of some populations of the species in Lake Tana as well as in other rivers and lakes of the country. Moreover, the strategic environmental assessment report for Lake Tana and its environs (2012) indicates that the hippopotamus falls under critical conservation concern owing to threats from habitat fragmentation, overgrazing, farmland, settlement, hunting and deforestation. These problems, coupled with the ecological and economic importance of the species, call for mapping of suitable habitat sites in Lake Tana and its environs using GIS and remote sensing techniques. The objective of this study was to carry out a habitat suitability analysis and identify suitable sites for hippopotamus in Lake Tana and its environs by integrating GIS and remote sensing techniques with MCDM.

Description of the study area

The study area is located in the north-western part of Ethiopia between latitudes 11.506° and 12.394° and longitudes 36.903° and 37.717° (Amhara Design and Supervision Works Enterprise (ADSWE) 2011). The lake is a natural lake covering an area of 309,132.12 ha at an average elevation of 1800 m asl, with a maximum depth of 15 m (Matthew et al. 2010; Amhara Design and Supervision Works Enterprise (ADSWE) 2011). It is the largest lake in Ethiopia and the third largest in the Nile Basin, and it is the main source of the Blue Nile River, which is the only surface outflow from it. The mean maximum and minimum temperatures at Lake Tana are 29.20 and 10.90 °C, respectively (Amare and Rao 2011) (Fig. 1).

Location map of the study area

To determine the data types, sample size, collection tools and analysis methods, the first step was to identify the factors that can affect the habitats of the species under study. Based on the literature, the variables identified were slope (Holmes (1996), cited in Dietz et al. 2000), elevation (Eltringham (2003), cited in UNEP-WCMC 2010), land use and land cover (Mackie 1976; Lock 1972 cited in Kanga et al. 2011; Pienaar et al. 1966; IUCN 1993; Eltringham 1999), forage proximity to the lake (Tracy 1996), distance from settlement (Wengström 2009) and water depth (Tracy 1996). A mixed approach was employed for this study. The data used were GPS readings, a SPOT image from 2012, a DEM (digital elevation model), Google Earth images and photographs. In addition, bathymetric data of the lake and thorough field observation were very helpful for the completion of the work (Table 1).

Table 1 Major data types and their sources

Since Lake Tana is very large, it was difficult to cover the whole area, so five sample areas/sites were selected on the basis of accessibility by road transport. The sample sites for the field survey were the outlet of the Nile River (at Debere Mariam), Gelda, Tana Chirkos (at the inflow of the Gumara River into Lake Tana), Korata and Robit. To classify the three-band SPOT image into 6 land use/land cover classes, 180 GCPs were collected following the rule of thumb that if each measurement vector has N features, then at least N + 1 points should be selected per class, with a practical minimum of 10*N points per class (Anji Reddy 2008). In addition, 544 settlement polygons were generated using Microsoft Virtual Earth Satellite Downloader. The major software used was ERDAS IMAGINE 2010 to classify land use/land cover and ArcGIS 10.2 to produce thematic maps based on the particular criteria. All the GIS and RS processes performed in this study are summarized diagrammatically in Fig. 2.
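As a quick check of the training-sample arithmetic implied by the rule of thumb just quoted (nothing here comes from the paper beyond the numbers of bands, classes and GCPs):

```python
# Rule of thumb from Anji Reddy (2008): with N features per measurement vector,
# use at least N + 1 training points per class, practical minimum 10 * N per class.
n_features = 3          # SPOT bands: green, red, near infrared
n_classes = 6           # land use/land cover classes

absolute_minimum_per_class = n_features + 1        # 4
practical_minimum_per_class = 10 * n_features      # 30

total_practical_minimum = n_classes * practical_minimum_per_class
print(total_practical_minimum)   # 180, matching the 180 GCPs collected
```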
Schematic framework for the habitat suitability analysis model

Data analysis methods

In order to map suitable habitat sites for hippopotamus in Lake Tana and its environs, thematic maps were produced. The thematic layers have varied (qualitative and quantitative) values, so the data classes need to be converted to a uniform suitability measure to make their combination compatible. Each layer was reclassified into three suitability classes: highly suitable (3), moderately suitable (2) and not suitable (1), based on the literature evidence and field observation (Table 2).

Table 2 Suitability classes for each criterion/factor

Multi criteria decision making

The model factors used were slope, grazing proximity to resting water, proximity to settlement/human disturbance, elevation, and land use/land cover. Before the layers were merged in the weighted overlay analysis, the inputs were first converted into a raster data model and rescaled to a common scale to make combination possible. Multi criteria decision making (MCDM) problems typically involve criteria of varying importance to decision makers. According to Eastman et al. (1995), a criterion is some basis for a decision that can be measured and evaluated. Accordingly, a 1–3 class scale (most suitable 3, moderately suitable 2 and not suitable 1) was assigned for each criterion/factor. Each criterion's relative influence on habitat suitability for the selected animal was ranked by expert decision, and the ranks were then converted to percentages in the GIS software to integrate each value with its respective raster layer. The criteria were ranked on the basis of their influence, from most influential to least influential, using the following formula:

$$w_j = \frac{n - r_j + 1}{\sum_k (n - r_k + 1)}$$

where $w_j$ is the normalized weight for the jth criterion, n is the number of criteria under consideration (k = 1, 2, 3, ..., n), and $r_j$ is the rank position of the criterion. Each criterion is weighted by $(n - r_j + 1)$ and then normalized by the sum of all weights, $\sum_k (n - r_k + 1)$ (Malczewski 1999; Drobne and Lisec 2009). Using this straight ranking method, each rank was converted to a weight; the higher the weight, the more important the criterion. The weights sum to 1.

Weighted linear combination

Weighted linear combination is based on the concept of a weighted average, in which continuous criteria are standardized to a common numeric range and then combined by means of a weighted average. The total score for each location is obtained by multiplying the weight assigned to each attribute by the scaled value given for that attribute and then summing the products over all attributes (Drobne and Lisec 2009). According to Drobne and Lisec (2009), with the weighted linear combination, factors are combined by applying a weight to each, followed by a summation of the results, to yield a suitability map:

$$S = \sum_i w_i x_i$$

where S is suitability, $w_i$ is the weight of factor i, and $x_i$ is the criterion score of factor i (Drobne and Lisec 2009). A small numerical sketch of this ranking and combination procedure is given below.

Water depth suitability

Using the GIS software, the bathymetric data were reclassified into three suitability classes based on the literature evidence. As Fig. 3 shows, much of the lake is not suitable for hippopotamus because of its depth; the animal prefers gently sloping shallow water with grazing grass at the shore.
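As flagged above, the straight rank-sum weighting and the weighted linear combination can be illustrated with a small numerical sketch. The factor ranking used here is purely illustrative and is not the ranking reported in Table 3; the toy 4 x 4 rasters stand in for the reclassified layers.

```python
import numpy as np

# Example ranking of the five factors (1 = most influential); illustrative only.
ranks = {"settlement_proximity": 1, "land_use_land_cover": 2,
         "grazing_proximity": 3, "slope": 4, "elevation": 5}

n = len(ranks)
weights = {f: (n - r + 1) / sum(n - rk + 1 for rk in ranks.values())
           for f, r in ranks.items()}          # w_j = (n - r_j + 1) / sum_k (n - r_k + 1)
print(weights, sum(weights.values()))          # the weights sum to 1

# Weighted linear combination over reclassified rasters (scores 1-3 per cell).
rng = np.random.default_rng(1)
layers = {f: rng.integers(1, 4, size=(4, 4)) for f in ranks}   # toy suitability rasters
S = sum(weights[f] * layers[f] for f in ranks)                 # S = sum_i w_i * x_i
print(np.round(S, 2))   # composite suitability surface, higher = more suitable
```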
The animal shows this preference because hippopotamus require aquatic ecosystems as their "daily living space", where they spend most of their time, and forage on pasture ashore (Eltringham 1999). Most of the time they occupy the periphery of the lake. As a result, 71.3% of the lake area is not suitable, while only 22.34 and 6.36% of the lake area are highly suitable and moderately suitable, respectively.

Lake depth suitability classes of Lake Tana

Elevation suitability

The digital elevation model shows elevations ranging from a low of 1646 m to 2394 m above sea level. Eltringham (2003), cited in UNEP-WCMC (2010), showed that the species is abundant between altitudes of 200 and 2000 m in Ethiopia. Based on other literature evidence and the researcher's estimation from field observation, the upper altitude limit was taken to be approximately 2000 m (Rebecca 2008). On this basis, the elevation classes 1646–1900 m, 1900.1–2000 m and more than 2000 m above mean sea level were identified as most suitable, moderately suitable and not suitable, respectively. The reclassification analysis for the elevation suitability classes was then computed using the raster reclass tool. The GIS spatial analysis shows that the elevation of most of the study area falls in the suitable class for the species under study. As can be seen from Fig. 4, 81.85% of the lakeshore falls in the suitable elevation class for this mammal, 11.9% falls in the moderately suitable class, and the unsuitable elevation class covers the least area (6.2%).

Elevation suitability classes of the lakeshore

Lake shore slope suitability

The slope classes that allow hippopotamus movement onto the land were identified based on the literature and field observation. A lakeshore gradient of less than 7° was identified as suitable grazing ground for this animal. According to Holmes (1996), cited in Dietz et al. (2000), sites with a high slope can cause accessibility problems because of the morphology and body size of the species under study. However, very shallow and gentle lakeshore is frequently disturbed by domestic animals and by human activities such as irrigation and rice cultivation in the study area. The suitable slope values were those that represent conducive travel for the species, since it cannot climb steep gradients because of its body size and structure. Figure 5 depicts the slope suitability classes of the lakeshore. By the slope criterion alone, the whole lake area is suitable; however, when accessibility of the terrestrial environs within the estimated buffer zone is considered, the suitable area is very low. The figure shows that only 6.97% of the lakeshore is highly suitable and 47.2% moderately suitable, while the remaining 45.83% is not suitable (slope classes that could not be climbed by the hippopotamus).

Slope suitability classes

Land use/land cover suitability

To generate the present land use/land cover status, a SPOT image with a spatial resolution of 5 m was processed using ERDAS Imagine version 2010. Using ground control points (GCPs) collected by GPS in the field, land use/land cover classification was performed. Supervised classification was carried out using the maximum likelihood algorithm on the 3 spectral bands corresponding to green, red and near infrared. Because settlements and cultivation were found to be highly mixed during the field visits, they were merged into a single class during image classification.
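The threshold-based reclassification applied to the elevation and slope layers can be sketched as follows. The 1900 m and 2000 m elevation breaks and the 7° slope break come from the text above; the second slope break (12°) and the toy arrays are invented for illustration.

```python
import numpy as np

def reclassify(raster, bounds, classes):
    """Map continuous raster values to ordinal suitability classes.

    bounds  -- upper edges of all but the last class interval
    classes -- class codes, ordered to match the intervals
    """
    return np.asarray(classes)[np.digitize(raster, bounds)]

# Elevation (m a.s.l.): <=1900 most suitable (3), 1900-2000 moderate (2), >2000 not suitable (1)
elevation = np.array([[1720.0, 1950.0], [2100.0, 1880.0]])
elev_class = reclassify(elevation, bounds=[1900.0, 2000.0], classes=[3, 2, 1])

# Slope (degrees): <7 suitable for movement onto land; steeper classes progressively unsuitable
slope = np.array([[3.5, 9.0], [15.0, 6.0]])
slope_class = reclassify(slope, bounds=[7.0, 12.0], classes=[3, 2, 1])

print(elev_class)   # [[3 2] [1 3]]
print(slope_class)  # [[3 2] [1 3]]
```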
The land use/land cover map of the lakeshore was reclassified on the basis of its suitability/compatibility as hippopotamus living and feeding ground. Human development (cultivation and settlement) and forested banks were identified as unsuitable (IUCN 1993), while wetlands and grasslands were classed as suitable. The land use/land cover classification indicated that the area occupied by human activity accounts for about 45.26% of the total study area, followed by bushland (27.42%), which is unsuitable for hippopotamus grazing and has most probably been created by human over-utilization of the land over many years. The remaining land use/land cover types, grassland, wetland and forest, account for about 20.77, 3.73 and 2.84%, respectively. Most grasslands near the lake were predominantly communal (domestic animal) grazing grounds that were frequently grazed and highly disturbed because of competition over limited resources. To evaluate habitat suitability based on the land use/land cover criterion, the lake itself was treated as a restriction, because not all parts of the lake were suitable and it was difficult to apply common criteria to the aquatic and terrestrial environments. Of the total estimated hippopotamus habitat area, including Lake Tana, only 21.23% was identified as highly suitable. Moreover, the areas depicted as suitable are not easily accessible because of their location: most of the suitable areas in Fig. 6 lie behind settlements. The remaining 78.77% of the lakeshore was not suitable for hippopotamus grazing because of either human interference or natural barriers.

Land use/land cover suitability classes

Settlement proximity to hippopotamus resting and grazing areas

In addition to the cultivation of hippopotamus grazing land and competition with domestic animals for the same areas, permanent settlements cause great disturbance to the habitat. The field visits confirmed that people living very close to the lake construct various types of barriers to prevent the animals from entering their gardens. Man-made obstacles that threaten the animal's life include holes dug for this purpose and stone hedges; people know that, because of its body size and short legs, it cannot cross such barriers. Multiple ring buffers at specified distances around the settlement features were therefore computed. Figure 7 depicts the extent of human and livestock disturbance of hippopotamus habitat in the Lake Tana environs: of the estimated terrestrial habitat, only 1.81% of the land area was free from human interference, whereas 50.88% was highly disturbed with respect to hippopotamus survival and 47.29% was moderately disturbed.

Settlement and livestock disturbance on hippopotamus habitat

Grazing ground proximity to the lake

In search of food, an individual hippopotamus is estimated to commute every night 2 to 7 km from the river or lake in which it spends the day; when food is not easily obtained, the distance moved may increase up to 10 km (Eltringham 1999; Muller and Erasmus 1992; Tracy 1996). Grazing ground suitability on the basis of proximity to resting water was therefore classified and reclassified using multiple ring buffer analysis in ArcGIS 10.2. The multiple ring buffer analysis (Fig. 8) shows that the areas most suitable for nocturnal grazing by hippopotamus are those close to resting water.
Therefore, keeping other factors constant and taking into account the distance a hippopotamus can move and forage, only 22% of the estimated area was classed as highly suitable and 78% as moderately suitable; areas beyond the maximum distance a hippopotamus can move were not included in the suitability classification.

Hippopotamus grazing ground proximity classes to the lake

Weight assignment for thematic maps

In the multi criteria evaluation of the various factors for habitat suitability, weights estimated by the ranking method were used to find optimal locations for hippopotamus in Lake Tana and its environs. The criteria were first ranked according to the influence of each factor relative to the other factors (Table 3).

Table 3 Weight assignments for thematic maps

In the table, the weighting indicates the most influential factors threatening the existence of the species under study by reducing its habitat; the higher the weight, the higher the relative influence of the factor. Among the overlay methods, the Weighted Sum tool was chosen for suitability modelling because it provides the ability to weight and combine multiple inputs in a single integrated analysis in the ArcGIS 10 software. The resulting cell values are added to produce the final raster model output; higher values indicate that a location is more suitable, whereas lower values imply a less suitable location.

Weighted overlay analysis

Reclassification and weighting of the factors were followed by running the overlay analysis/weighted sum for the terrestrial factors. Through this spatial analysis, suitable sites for hippopotamus were identified in the Lake Tana environs. The aquatic habitat was analysed separately, because the terrestrial and aquatic environments are adjacent to each other and the animal prefers aquatic environments with grazing ground adjacent to resting water, which reduces long-distance travel. As can be seen from Fig. 9, there appears to be a large area suitable for hippopotamus habitat: 22.54% of the study area was suitable based on the variables analysed. However, the animals cannot cross settlement areas, so much of this area is inaccessible. A further 40.5% of the study area was moderately suitable and could be made more suitable if a conservation strategy were designed by the government and the community, while the analysis showed that 36.96% of the Lake Tana environs is not suitable for hippopotamus habitat.

Model result for hippopotamus habitat around and in Lake Tana

This study attempted to find suitable sites for hippopotamus in Lake Tana and its environs by integrating MCDM with GIS and RS techniques. The analysis of each factor by suitability class shows varied outputs and varied land areas. In the study area, intense human activity in hippopotamus habitats takes the lion's share in making areas unsuitable. The land use/land cover classification indicated that the area occupied by human activity (settlement, cultivation) accounts for about 45.26% of the total study area; moreover, as the analysis output illustrated, the sphere of influence of settlement extends beyond the area it occupies, and it is followed by bushland (27.42%), which is unsuitable for hippopotamus grazing. Because of this close proximity, the animals may begin to pose threats to human life and crops as their territories are continually disturbed.
Encroachment of human settlements too close to the habitat of this mammal should therefore be prevented through local planning, and rational management of the population is desirable. The remaining area of the lakeshore (20.77, 3.73 and 2.84%) is covered by grassland, wetland and forest, respectively. Like bushland, forest is not part of the hippopotamus diet. In general, settlement together with livestock disturbance alone has made 50.88% of the study area highly unsuitable and 47.29% only fairly/moderately suitable, leaving only 1.81% of the area without disturbance and fully suitable. In the habitat suitability modelling, physical and human factors were considered on the basis of literature evidence and field observation. The overlay analysis/model results reveal that only 22.54% of the land area under study was identified as most suitable for the species; of the suitable areas in the model output, a large portion is located behind settlements and is not easily accessible to the species, while 40.5% of the area was found to be moderately suitable for hippopotamus habitat and 36.96% was unsuitable. As the study shows, the habitat of hippopotamus in Lake Tana and its environs has been greatly reduced, mainly by human factors, so much more emphasis should be placed on preserving this vulnerable species in the study area. Moreover, legal enforcement is needed to protect hippopotamus habitats in the study area and ensure their sustainable conservation. Amare S, Rao KK (2011) Hydrological dynamics and human impact on ecosystems of lake Tana, Northwestern Ethiopia. Ethiop J Environ Stud Manag 4:2011 Amhara Design and Supervision Works Enterprise (ADSWE) (2011) Lake Tana bathymetry survey project. Final study report Anji Reddy M (2008) Textbook of remote sensing and geographical information systems. BS Publication, Hyderabad Dietz AJ, Mohamed MA, Okeyo-Owuor JB (2000) The hippopotamus: nothing but a nuisance? Hippo-human conflicts in Lake Victoria area, Kenya. Thesis Environmental Geography Aenne W.C.H.M. Post, University of Amsterdam Drobne S, Lisec A (2009) Multi-attribute decision analysis in GIS: weighted linear combination and ordered weighted averaging. Informatica 33:459–474 Eastman JR, Jin W, Kyem W, Toledano P (1995) Raster procedures for multi-criteria/multi-objective decisions. Photogramm Eng Remote Sens 61(5):539–547 Eltringham SK (1993) 'The common hippopotamus, Hippopotamus amphibius'. http://www.iucn.org/themes/ssc/sgs/pphsg/Apchap3-2.htm. Accessed 20 Dec 2005 Eltringham SK (1999) The Hippos: Natural History and Conservation. Cambridge University Press, Cambridge Funny M (2012) Wetlands around Lake Tana: a landscape and avifaunistic study. Diploma thesis within the study programme Landscape Ecology and Nature Conservation, Institute of Botany and Landscape Ecology IUCN (1993) Pigs, peccaries, and hippos: status survey and conservation action plan. In: Oliver WLR (ed) IUCN (2005a) Hippo facts. What is a hippo? IUCN (2005b) Hippo conservation and the World Conservation Union Kanga EM, Ogutu JO, Olff H, Santema P (2011) Population trend and distribution of the vulnerable common hippopotamus Hippopotamus amphibius in the Mara region of Kenya. Fauna & Flora International, Oryx Lewison R (2007) Population responses to natural and human-mediated disturbances: assessing the vulnerability of the common hippopotamus (Hippopotamus amphibius).
Afr J Ecol 45:407–415 Lewison RL, Carter J (2004) Exploring behavior of an unusual megaherbivore: a spatially explicit foraging model of the hippopotamus. Ecol Model 171:127–138 Lewison R, Oliver W (2008). Hippopotamus amphibius Lock JM (1972) The effects of hippopotamus grazing on grasslands. J Ecol 60:445–467 Mackie CS (1976) Interactions between the hippopotamus (Hippopotamus amphibius) and its environment on the Lundi River. Certificate in field ecology, University of Rhodesia, Salisbury Malczewski J (1999) GIS and multicriteria decision analysis. Wiley, New York Pienaar UDV, van Wyk P, Fairall N (1966) An experimental cropping scheme of hippopotami in the Letaba river of the Kruger National Park. Koedoe 9:1–33 Rebecca J (2008) Husbandry guidelines for the common hippopotamus. Hippopotamus amphibius Mammalia: Hippopotamidae, Western Sydney Institute of TAFE Tracy RE (1996) Social grouping behaviors of captive female Hippopotamus amphibious. Northwest Missouri State University, B.S UNEP-WCMC (2010) Review of significant trade: species selected by the CITES animals Committee following CoP14 CITES Project No. S-346 AC25 Doc. 9.4 Annex. http://www.uva.nl/binaries/content/documents/personalpages/d/i/a…/asset. Accessed 15 Nov 2013 Wengström A (2009) How Maasai settlements affect the grazing habits of the Common Hippopotamus (Hippopotamus amphibius) in the Maasai Mara National Reserve, Kenya http://www.wpazambia.com/Download/WildlifeHandbook.pdf http://www.csee.wvu.edu/.../Reading%20Assignment%204%20-%20Hagan%2. Accessed 10 Nov 2013 http://www.cites.org/eng/cop/09/prop/E09-Prop-18_Hippopotamus.PDF. Accessed 15 Oct 2013 http://www.geo.arizona.edu/geo5xx/geos544/pdfs/fluvial/makaske.PD. Accessed 10 Oct 2013 http://www.cites.org/eng/cop/09/prop/E09-Prop-18_Hippopotamus.PDF. Accessed 9 Nov 2013 Foremost, I would like to express my sincere gratitude to my advisors Dr.Abate Shiferaw from Geography and Environmental Studies and Dr. Dessalegn Ejigu from Biology Department for their continuous support of my MSc research, for their patience, motivation, enthusiasm, and immense knowledge. Their guidance helped me in all the time of research and writing of this thesis. I would like to thank my collogues in Department of Geography and Environmental Studies of Bahir Dar University for their encouragement, insightful comments, and cooperation while I conduct this research. My special appreciation goes to Dawit Tekabe who provided me support and encouragement during field work. He was with me in all field visits by walking more than 5 and 6 h per day on foot. I am also thankful to Yirga Kebede one of the environmentalist in Amhara National Regional State Department of Tana Sub Basin and W/Gebreal from Environmental Protection Bureau for their invaluable expertise comment and support throughout the paper work. I am grateful to all organizations and all individuals that contributed in my study. To begin with, my thanks go to Tana Sub Basin department in Amahara National Regional State, Tana Water Transport Enterprise, Amhara Design Supervision and Works Enterprise, Amahara National Regional State Land Administration and Environmental Protection, for genuinely offering their noble service in providing me data for this research. Last but not the least; I would like to thank my family, Metimiku Yohannes, Selamawit Yohannes and Yeshitu Asfaw for their support and taking the household responsibility while I left home for field work. 
The author declares that this thesis is the author's original work and that there is no competing interest. In addition, all sources of material used for the thesis have been duly acknowledged. The author received no financial support from any funding source to conduct this research. Department of Geography and Environmental Studies, Bahir Dar University, Main Campus, P.O. Box 79, Bahir Dar, Ethiopia Fentanesh Haile Buruso Correspondence to Fentanesh Haile Buruso. Buruso, F.H. Habitat suitability analysis for hippopotamus (H. amphibious) using GIS and remote sensing in Lake Tana and its environs, Ethiopia. Environ Syst Res 6, 6 (2018). https://doi.org/10.1186/s40068-017-0083-8 Habitat suitability MCDM
Show that $\det(\mathrm{adj}(A)) = \det(A)^3$ for a $(4 \times 4)$-matrix $A$ Let $A_{ik}$ be the cofactor of $a_{ik}$ in the determinant $$d = \left| \begin{matrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \\ \end{matrix} \right|.$$ Let $D$ be the corresponding determinant with $a_{ik}$ replaced by $A_{ik}.$ Prove $D = d^3.$ Book solution Let $\alpha$ be the matrix of the given determinant with elements $a_{ik}$ and let $\beta$ be the matrix of the cofactors $A_{ik},$ and let $\gamma$ be the transpose of $\beta.$ Then the product matrix $\alpha \gamma$ is a diagonal matrix with all entries on the main diagonal equal to $d$. Thus, $\det{\alpha \gamma} = d^4 = (\det{\alpha}) (\det{\gamma}) = (\det{\alpha})(\det{\beta}) = dD.$ The equation $$dD = d^4 \tag{1}$$ is an identity between polynomials in the $16$ matrix entries regarded as independent indeterminates. Since there certainly exists a $4 \times 4$ matrix whose determinant is not zero, $d$ is non-zero in the polynomial ring. Since the polynomial ring is an integral domain, the result $$D = d^3$$ follows from $(1)$. How is $\alpha \gamma$ a diagonal matrix with all entries on the main diagonal equal to $d$? Does this come from some linear algebra rule? Then how did they get $(\det{\alpha}) (\det{\gamma}) = (\det{\alpha})(\det{\beta}) = dD$? Does $AA^{T} = AA$? Finally, I am not really understanding the last paragraph, so if someone could explain that, that would help. linear-algebra proof-explanation Jendrik Stelzner user19405892 General Solution Let $K$ be our ground field and $L = K(x_{11}, \dotsc, x_{44})$ the field of rational functions in the 16 variables $x_{ij}$ with $1 \leq i,j \leq 4$. We will work over $L$ instead of $K$ from the very start, to avoid confusion and the taste of handwaving at later points. We slightly change the definitions, to make everything work, but still see how this applies to the exercise: Let $\alpha \in \mathrm{M}(4 \times 4, L)$ be defined as $$ \alpha = \begin{pmatrix} x_{11} & x_{12} & x_{13} & x_{14} \\ x_{21} & x_{22} & x_{23} & x_{24} \\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & x_{42} & x_{43} & x_{44} \\ \end{pmatrix}. $$ Let $\beta$ be the adjugate matrix of $\alpha$, i.e. the $(i,j)$-th entry of $\beta$ is the $(i,j)$-th cofactor of $\alpha$, and let $\gamma = \beta^T$. Let $d = \det \alpha$ and $D = \det \beta$. Notice that by Leibniz's formula $d$ and $D$ are polynomials in the variables $x_{11}, \dotsc, x_{44}$. For example \begin{align*} d &= \left| \begin{matrix} x_{11} & x_{12} & x_{13} & x_{14} \\ x_{21} & x_{22} & x_{23} & x_{24} \\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & x_{42} & x_{43} & x_{44} \\ \end{matrix} \right| \\ &= \sum_{\sigma \in S_4} \mathrm{sgn}(\sigma) x_{1\sigma(1)} x_{2\sigma(2)} x_{3\sigma(3)} x_{4\sigma(4)}. \end{align*} is a polynomial of degree $4$ (notice that the monomials $x_{1\sigma(1)} x_{2\sigma(2)} x_{3\sigma(3)} x_{4\sigma(4)}$ with $\sigma \in S_4$ are pairwise different and thus cannot cancel out). In particular $d \neq 0$. First notice that $$ (\alpha \gamma)_{ik} = \left( \alpha \beta^T \right)_{ik} = \sum_{j=1}^4 \alpha_{ij} \beta^T_{jk} = \sum_{j=1}^4 \alpha_{ij} \beta_{kj}. $$ For $i = k$ this is just the cofactor expansion of $\det \alpha$ along its $k$-th row. Thus the diagonal entries of $\alpha \gamma$ are $\det \alpha = d$.
For $i \neq k$ consider the matrix $\alpha^{(k)}$, where we replace the $k$-th row of $\alpha$ by its $i$-th row. Then the $i$-th and $k$-th rows of $\alpha^{(k)}$ coincide, so $\det \alpha^{(k)} = 0$. Expanding $\det \alpha^{(k)}$ along its $k$-th row yields the right hand side of the above formula. So the non-diagonal entries of $\alpha \gamma$ are zero. Now $\alpha \gamma$ is a diagonal matrix, so $\det(\alpha \gamma)$ is just the product of the diagonal entries, all four of which are $d$. Thus $\det(\alpha \gamma) = d^4$. That $(\det \alpha)(\det \gamma) = (\det \alpha)(\det \beta)$ follows from $\det \gamma = \det \beta^T = \det \beta$, which holds because the determinant is invariant under transposition. That $(\det \alpha)(\det \beta) = dD$ follows from the definitions of $d$ and $D$. For the last paragraph: We have now shown that $d^4 = dD$. Because $d \neq 0$ we can divide by $d$ (which is a unit in the field $L$ we are working over) to get $d^3 = D$. Evaluating this at $a_{11}, \dotsc, a_{44} \in K$ gives $$ d^3(a_{11}, \dotsc, a_{44}) = D(a_{11}, \dotsc, a_{44}), $$ which is what we wanted to show in the first place. Over infinite fields If our ground field $K$ is infinite, we can also argue without using the field of rational functions. Let $\alpha$, $\beta$, $\gamma$, $d$ and $D$ be defined as originally, i.e. as in the book's solution. We still get in the same way that $\alpha \gamma$ is diagonal with all diagonal entries $d$, and thus $\det(\alpha \gamma) = d^4$, and that $(\det \alpha) (\det \gamma) = (\det \alpha)(\det \beta) = dD$. So $d^4 = dD$. For the last paragraph we argue similarly to the general case: Consider the following polynomials in the 16 variables $x_{11}, x_{12}, x_{13}, x_{14}, x_{21}, \dotsc, x_{44}$, i.e. elements of the polynomial ring $K[x_{11}, \dotsc, x_{44}]$: \begin{align*} p(x_{11}, \dotsc, x_{44}) &= \left| \begin{matrix} x_{11} & x_{12} & x_{13} & x_{14} \\ x_{21} & x_{22} & x_{23} & x_{24} \\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & x_{42} & x_{43} & x_{44} \\ \end{matrix} \right| \\ &= \sum_{\sigma \in S_4} \mathrm{sgn}(\sigma) x_{1\sigma(1)} x_{2\sigma(2)} x_{3\sigma(3)} x_{4\sigma(4)} \end{align*} and \begin{align*} &\, q(x_{11}, \dotsc, x_{44}) \\ =&\, \left| \begin{matrix} y_{11}(x_{11}, \dotsc, x_{44}) & \cdots & y_{14}(x_{11}, \dotsc, x_{44}) \\ \vdots & \ddots & \vdots \\ y_{41}(x_{11}, \dotsc, x_{44}) & \cdots & y_{44}(x_{11}, \dotsc, x_{44}) \end{matrix} \right| \\ =&\, \sum_{\sigma \in S_4} \mathrm{sgn}(\sigma) y_{1\sigma(1)}(x_{11}, \dotsc, x_{44}) \dotsm y_{4\sigma(4)}(x_{11}, \dotsc, x_{44}), \end{align*} where for all $1 \leq i,j \leq 4$ we define $y_{ij}(x_{11}, \dotsc, x_{44})$ to be the $(i,j)$-th cofactor of the matrix $$ \begin{pmatrix} x_{11} & x_{12} & x_{13} & x_{14} \\ x_{21} & x_{22} & x_{23} & x_{24} \\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & x_{42} & x_{43} & x_{44} \\ \end{pmatrix}. $$ By definition of $p$ and $q$ we have $$ d = p(a_{11}, \dotsc, a_{44}) \quad\text{and}\quad D = q(a_{11}, \dotsc, a_{44}). $$ Therefore we have shown so far that $$ p(a_{11}, \dotsc, a_{44})\,q(a_{11}, \dotsc, a_{44}) = p^4(a_{11}, \dotsc, a_{44}) $$ for all $a_{11}, \dotsc, a_{44} \in K$. So the polynomial $p^4-pq = p(p^3-q)$ evaluates to zero everywhere. If our field is infinite it now follows that $p(p^3-q) = 0$, i.e. $p(p^3-q)$ does not merely evaluate to zero everywhere, but is already the zero polynomial. But we can find $b_{11}, \dotsc, b_{44}$ such that $p(b_{11}, \dotsc, b_{44}) \neq 0$ (i.e.
the ($4 \times 4$)-matrix with entries $b_{ij}$ is invertible). So $p \neq 0$. Because the polynomial ring $K[x_{11}, \dotsc, x_{44}]$ is an integral domain, it follows from $p \neq 0$ and $p(p^3-q) = 0$ that $p^3 - q = 0$, i.e. $p^3 = q$. Evaluating these polynomials, we get that $$ d^3 = p^3(a_{11}, \dotsc, a_{44}) = q(a_{11}, \dotsc, a_{44}) = D $$ for all $a_{11}, \dotsc, a_{44} \in K$, which is what we wanted to prove in the first place. PS: Also notice that in the same way(s) we can show that for every $n \times n$ square matrix $A$ we have $$ \det(\mathrm{adj}(A)) = \det(A)^{n-1}, $$ where $\mathrm{adj}(A)$ denotes the adjugate of $A$. Jendrik Stelzner
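As a quick numerical sanity check of the identity (not a substitute for the proof above), one can build the cofactor matrix of a random $4 \times 4$ matrix directly and compare the two determinants; the random matrix below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)   # a random 4x4 "alpha"

# beta: matrix of cofactors, beta[i, j] = (-1)^(i+j) * det(A with row i, column j removed)
n = A.shape[0]
beta = np.empty_like(A)
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        beta[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

d = np.linalg.det(A)       # d = det(alpha)
D = np.linalg.det(beta)    # D = det(beta)

print(np.isclose(D, d ** 3))   # True: det(adj(A)) = det(A)^(n-1) with n = 4
```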
Quantile regression for overdispersed count data: a hierarchical method Peter Congdon ORCID: orcid.org/0000-0003-1934-92051 Journal of Statistical Distributions and Applications volume 4, Article number: 18 (2017)

Generalized Poisson regression is commonly applied to overdispersed count data, and is focused on modelling the conditional mean of the response. However, conditional mean regression models may be sensitive to response outliers and provide no information on other conditional distribution features of the response. We consider instead a hierarchical approach to quantile regression of overdispersed count data. This approach has the benefits of effective outlier detection and robust estimation in the presence of outliers, and, in health applications, that quantile estimates can reflect risk factors. The technique is first illustrated with simulated overdispersed counts subject to contamination, such that estimates from conditional mean regression are adversely affected. A real application involves ambulatory care sensitive emergency admissions across 7518 English patient general practitioner (GP) practices. Predictors are GP practice deprivation, patient satisfaction with care and opening hours, and region. Impacts of deprivation are particularly important in policy terms as indicating the effectiveness of efforts to reduce inequalities in care sensitive admissions. Hierarchical quantile count regression is used to develop profiles of central and extreme quantiles according to specified predictor combinations.

Extensions of Poisson regression are commonly applied to overdispersed count data, focused on modelling the conditional mean of the response. However, conditional mean regression models may be sensitive to response outliers. We consider instead a Bayesian hierarchical approach to quantile regression of overdispersed count data, based on a Poisson log-normal (PLN) approach to overdispersion. The method set out here is for quantile regression for latent outcomes at the second stage of a hierarchical model. Focussing on median regression in particular, this method provides an approach to Bayesian robust regression for overdispersed count data. The technique is first illustrated with simulated overdispersed counts subject to contamination, such that conditional mean regression is adversely affected. It is shown that the hierarchical median regression via a Poisson log-normal representation (HQRPLN) more accurately reproduces the regression parameters assumed in the simulation than negative binomial or standard PLN regression. The HQRPLN estimates for contaminated data are competitive with those of classical methods for robust regression using a negative binomial density and M-estimation (Aeberhard et al. 2014; Chambers et al. 2014), and also with classical methods for median regression for count data (Machado and Santos Silva, 2005). It is also shown that HQRPLN accurately identifies the contaminated observations. A real application involves counts of ambulatory care sensitive (ACS) emergency admissions in 2014–15 for 7518 English patient general practitioner (GP) practices. Such admissions are potentially avoidable given effective care and are often used as an index of health performance (Caminal et al. 2004). Predictors are practice deprivation, patient satisfaction with care (general satisfaction and satisfaction with opening hours), and the practice region of location. Hierarchical quantile Poisson log-normal regression is used to assess the most important predictors, variation in predictor effects by quantile, and varying impacts of predictors by region. The applied focus of the paper adopts a Bayesian strategy and uses a quantile regression approach that has, as one aspect, the benefit of robustness compared to conditional mean regression, which is demonstrated using simulated data. However, we also aim to demonstrate the utility of quantile regression in an analysis of a health performance index. To set the broader context, we consider classical methods for robust regression of overdispersed count data in section 2, before considering quantile regression, using classical methods and in terms of Bayesian implementation (section 3). Section 4 considers the Poisson log-normal representation for quantile count regression. The remaining sections involve data analysis: a simulation analysis involving contaminated count data (section 5), and finally, the ACS admissions analysis and results applying the HQRPLN method (sections 6 and 7).
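To fix ideas before the formal development, the following is a minimal sketch of how contaminated overdispersed counts of the kind used in the simulation study can be generated under a Poisson log-normal scheme. The coefficients, overdispersion level, contamination fraction and outlier shift are invented for illustration and are not the paper's actual simulation settings.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 1000, np.array([1.0, 0.5, -0.3])          # intercept and two slopes (assumed)

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
eps = rng.normal(0.0, 0.5, size=n)                   # log-normal overdispersion term
mu = np.exp(X @ beta + eps)                          # Poisson log-normal means

# Contaminate a small subset of observations with an upward shift in the mean.
contaminated = rng.random(n) < 0.05
mu[contaminated] *= np.exp(2.0)

y = rng.poisson(mu)
print(y[:10], contaminated.sum())
```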
Hierarchical quantile Poisson log-normal regression is used to assess the most important predictors, variation in predictor effects by quantile, and varying impacts of predictors by region. The applied focus of the paper adopts a Bayesian strategy and uses a quantile regression approach that has, as one aspect, the benefit of robustness compared to conditional mean regression, which is demonstrated using simulated data. However, we also aim to demonstrate the utility of quantile regression in an analysis of a health performance index. To set the broader context, we consider classical methods for robust regression of overdispersed count data in section 2, before considering quantile regression, using classical methods and in terms of Bayesian implementation (section 3). Section 4 considers the Poisson log-normal representation for quantile count regression. The remaining sections involve data analysis: a simulation analysis involving contaminated count data (section 5), and finally, the ACS admissions analysis and results applying the HQRPLN method (sections 6 and 7). 2. Robust count regression via M-estimation and Bayesian strategies Classical approaches to robust regression for data {yi, i = 1,.., n} on covariates Xi of dimension p focus either on M-estimation, or median quantile regression (see next section). For linear regression under M-estimation, robustness may be achieved by incorporation in the estimation of objective functions Q(r) (Andersen, 2008) that downweight large positive and large negative standardized residuals ri = (yi − Xiβ)/s, where β is a regression parameter, and s is a scale estimate. For linear regression, estimation involves minimisation of \( \sum \limits_{\mathrm{i}=1}^{\mathrm{n}}\mathrm{Q}\left({\mathrm{r}}_{\mathrm{i}}\right) \), with corresponding estimation equations \( \frac{1}{\mathrm{n}}\sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{X}}_{\mathrm{i}\mathrm{j}}\uppsi \left({\mathrm{r}}_{\mathrm{i}}\right)=0 \), where ψ(r) = ∂Q(r)/∂r is the score or influence function. Regarding M-estimation for overdispersed counts, consider in particular, negative binomial NB(μi, σ) regression with offsets Oi, means μi = Oi exp(Xiβ), overdispersion parameter σ, and the NB2 parameterisation (Aeberhard et al., 2014; Hilbe, 2011). Then robustness may be achieved by objective functions that downweight large positive and large negative residuals ri = (yi − μi)/V0.5(μi). Thus Chambers et al. (2014) consider M-estimation for overdispersed counts using a negative binomial model. They estimate β using the Huber score function (Huber, 1973) and estimating equations $$ \frac{1}{\mathrm{n}}\sum \limits_{\mathrm{i}}\Delta \left({\mathrm{y}}_{\mathrm{i}},{\upmu}_{\mathrm{i}}\right)=0, $$ $$ \Delta \left({\mathrm{y}}_{\mathrm{i}},{\upmu}_{\mathrm{i}}\right)=\uppsi \left({\mathrm{r}}_{\mathrm{i}}\right)\mathrm{w}\left({\mathrm{X}}_{\mathrm{i}}\right)\frac{\upmu_{\mathrm{i}}{\mathrm{X}}_{\mathrm{i}}}{{\mathrm{V}}^{0.5}\left({\upmu}_{\mathrm{i}}\right)}-\mathrm{a}\left(\upbeta \right), $$ where the weights w(Xi) may be used to downweight leverage points (covariate outliers), and a(β) is a correction factor ensuring Fisher consistency. The Huber score function uses a cutpoint k to define (absolute) extreme residuals, such as k = 2, with ψ(r) = max(−k, min(k, r)). Chambers et al. (2014) use a robust moment estimator for θ = 1/σ, whereas Aeberhard et al. (2014) use M-estimation in a form of weighted maximum likelihood, preferring this on efficiency grounds. 
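To make the role of the Huber score concrete, the following small R sketch (not the authors' code; the cutpoint k = 2 and the NB2 variance function are as described above) shows how an extreme Pearson-type residual is capped, and therefore has bounded influence in the estimating equations.

```r
# Huber score function with cutpoint k: psi(r) = max(-k, min(k, r)).
psi_huber <- function(r, k = 2) pmax(-k, pmin(k, r))

# NB2 Pearson-type residuals, with variance V(mu) = mu + sigma * mu^2.
nb2_resid <- function(y, mu, sigma) (y - mu) / sqrt(mu + sigma * mu^2)

r <- nb2_resid(y = c(2, 3, 40), mu = 2.5, sigma = 0.7)
cbind(raw = r, huberised = psi_huber(r, k = 2))  # the outlying count y = 40 is capped at k
```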
Bayesian regression methods intended as robust to outliers include ε−contamination priors (Moreno and Pericchi, 1993), modified likelihoods such as weighted likelihoods (Greco et al. 2008; Agostinelli and Greco, 2013), and localized regression (Wang and Blei, 2017). For overdispersed count regression, in particular, an ε−contamination approach might involve negative binomial or Poisson log-normal representations, and specify a main model and contamination model. The contamination model would be assumed to apply for a small subpopulation, with small prior probability ε (e.g. ε = 0.1 or ε = 0.05), and might involve an intercept or variance shift as compared to the main model. 3. Quantile regression: classical and Bayesian approaches An alternative approach to robustness, and the focus of this paper, is provided by quantile regression. Thus generalized linear models for discrete responses typically involve conditional mean estimation using both known predictors, and random effects to represent unknown covariates or overdispersion. However, mean regression models may be sensitive to response outliers and provide no information on factors affecting other distributional points (e.g. upper and lower 5% quantiles) of the response. By contrast, quantile regression estimates the relationship between the qth quantile Qy(q|X) of the response y and covariates X (Koenker and Hallock, 2001). Quantile regression was originally developed for continuous responses as count responses do not have continuous quantiles. For q ∈ (0, 1) and continuous y, classical quantile regression involves minimizing \( \sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{\uprho}_{\mathrm{q}}\left({\mathrm{y}}_{\mathrm{i}}-{\mathrm{X}}_{\mathrm{i}}{\upbeta}_{\mathrm{q}}\right) \), where ρq(u) = u(q − I(u ≤ 0)). A special case is provided by median regression, involving minimization of the absolute deviations, \( \sum \limits_{\mathrm{i}=1}^{\mathrm{n}}\left|{\mathrm{y}}_{\mathrm{i}}-{\mathrm{X}}_{\mathrm{i}}\upbeta \right| \). This reduces the impact of outliers (influential observations) in the response space, providing a better fit for the majority of observations. Chambers et al. (2014) extend M-estimation to quantile regression, including count regression. For linear regression the estimating equations become $$ \frac{1}{\mathrm{n}}\sum \limits_{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{X}}_{\mathrm{i}\mathrm{j}}{\Delta}_{\mathrm{q}}\left(\frac{{\mathrm{y}}_{\mathrm{i}}-{\mathrm{X}}_{\mathrm{i}}{\upbeta}_{\mathrm{q}}}{\mathrm{s}}\right)=0, $$ where \( {\Delta}_{\mathrm{q}}\left(\frac{\mathrm{e}}{\mathrm{s}}\right)=2\uppsi \left(\frac{\mathrm{e}}{\mathrm{s}}\right)\left[\mathrm{qI}\left(\mathrm{e}>0\right)+\left(1-\mathrm{q}\right)\mathrm{I}\left(\mathrm{e}\le 0\right)\right] \), and s is a scale estimator. For overdispersed count data, similarly define scaled residuals $$ {\mathrm{r}}_{\mathrm{i}\mathrm{q}}=\left({\mathrm{y}}_{\mathrm{i}}-{\mathrm{Q}}_{\mathrm{q}}\left({\mathrm{X}}_{\mathrm{i}}\right)\right)/{\mathrm{V}}^{0.5}\left[{\mathrm{Q}}_{\mathrm{q}}\left({\upmu}_{\mathrm{i}}\right)\right], $$ where Qq(Xi) = Oi exp(Xiβq). 
Then the estimating equations for β are \( \frac{1}{n}\sum_{i}\Delta_{q}\left(y_{i},Q_{q}(X_{i})\right)=0, \) where $$ \Delta_{q}\left(y_{i},Q_{q}(X_{i})\right)=\psi_{q}(r_{iq})\,w(X_{i})\,\frac{Q_{q}(X_{i})\,X_{i}}{V^{0.5}\left[Q_{q}(X_{i})\right]}-a(\beta_{q}). $$ By contrast, Machado and Santos Silva (2005) propose quantile regression for count data based on adding uniform noise u to count responses y (i.e. jittering count responses), giving z = y + u, and apply quantile regression of the form $$ Q_{z_{i}}\left(q|X_{i}\right)=\eta_{qi}=\exp(X_{i}\beta_{q}). $$ As discussed by Yu and Moyeed (2001), a Bayesian approach to quantile regression for continuous y is obtained using the Asymmetric Laplace distribution (ALD), with density function $$ \mathrm{ALD}\left(y|\eta_{q},\delta_{q},q\right)=\frac{q(1-q)}{\delta_{q}}\exp\left[-\frac{\rho_{q}(y-\eta_{q})}{\delta_{q}}\right]. $$ This distribution can in turn be represented as a scale mixture of normals (Tsionas, 2003). For observations i = 1,.., n, and assuming yi ~ ALD(ηqi, δq, q), one has $$ y_{i}=\eta_{qi}+\xi_{q}W_{qi}+\left[\frac{2W_{qi}\delta_{q}}{q(1-q)}\right]^{0.5}u_{qi}, $$ where \( \xi_{q}=\frac{1-2q}{q(1-q)} \), δq > 0, Wqi ~ Exp(δq), and uqi ∼ N(0, 1), and the regression term ηqi = β0q + Xiβq may be expanded to include random effects. One potential issue with quantile regression, whether under classical or Bayesian estimation, is quantile crossing. Estimated conditional quantile functions may violate the monotonicity principle, with \( \eta_{q_{1}i}>\eta_{q_{2}i} \) when q1 < q2 for some covariate combinations, or for some random effect values if the regression terms ηqi include random effects. One can explicitly impose the constraints \( \eta_{q_{j}i}>\eta_{q_{j-1}i} \) (Bondell et al. 2010) in simultaneous estimation involving multiple quantile points, while Wu and Liu (2009) propose a sequential procedure ensuring that a regression at an additional quantile does not cross with previous ones. Assuming Bayesian inference, one possible criterion for assessing quantile crossing is whether the posterior means ηqi follow the monotonicity constraint. A more exacting criterion considers all MCMC samples. In MCMC sampling (under simultaneous estimation), a full exploration of the parameter space may generate occasional quantile crossing, which can be monitored via monotonicity indicators mit = 1 if monotonicity is maintained for observation i at iteration t.
The relevant criterion for monotonicity would then require that \( \sum \limits_{\mathrm{i}}{\mathrm{m}}_{\mathrm{i}\mathrm{t}}=\mathrm{n} \) for all iterations. Where departures from monotonicity are not pronounced, one can impose monotonicity constrained sampling by rejecting any iterations t where \( \sum \limits_{\mathrm{i}}{\mathrm{m}}_{\mathrm{i}\mathrm{t}}<\mathrm{n} \), and basing inferences only on retained samples where \( \sum \limits_{\mathrm{i}}{\mathrm{m}}_{\mathrm{i}\mathrm{t}}=\mathrm{n} \). 4. Methods: hierarchical poisson log-normal Quantile regression was developed for normal linear regression with observed continuous responses. However, Bayesian quantile regression has been applied to latent continuous outcomes in the case of binary regression (Benoit and Van den Poel, 2012). In this paper, we follow a similar principle in an approach to quantile regression for overdispersed count data, avoiding the need for jittering. This approach involves a scale mixture version of the ALD (Yu and Moyeed, 2001) within a hierarchical Poisson-lognormal representation to account for overdispersion (e.g. Connolly & Thibaut, 2012). The quantile regression is for latent outcomes at the second stage of the hierarchical model, focussed on estimating latent incidence rates or relative risks. The Poisson log-normal representation is per se advantageous in that the tails of the log-normal are heavier than for the gamma distribution, and for data with outliers, the Poisson log-normal model may give a better fit than the negative-binomial model (Connolly et al. 2009; Sohn 1994; Miranda-Moreno et al. 2005; Wang and Blei, 2017). Thus for observed counts yi, one specifies for quantiles q = 1,.., Q, $$ {\mathrm{y}}_{\mathrm{i}}\sim \mathrm{Poi}\left({\upmu}_{\mathrm{qi}}\right), $$ $$ {\upmu}_{\mathrm{qi}}=\exp \left({\upnu}_{\mathrm{qi}}\right), $$ $$ {\upnu}_{\mathrm{q}\mathrm{i}}\sim \mathrm{N}\left({\upbeta}_{0\mathrm{q}}+{\mathrm{X}}_{\mathrm{i}}{\upbeta}_{\mathrm{q}}+{\upxi}_{\mathrm{q}}{\mathrm{W}}_{\mathrm{q}\mathrm{i}},\frac{2{\mathrm{W}}_{\mathrm{q}\mathrm{i}}{\updelta}_{\mathrm{q}}}{\mathrm{q}\left(1-\mathrm{q}\right)}\right), $$ $$ {\mathrm{W}}_{\mathrm{q}\mathrm{i}}\sim \mathrm{Exp}\left({\updelta}_{\mathrm{q}}\right). $$ The Wqi in (1) are measures of outlier status. Observations with higher Wqi have higher variances (lower precisions) and hence diminished influence on the likelihood. Predictions for cases with high Wqi are likely to have a wide uncertainty interval. For assessing which observations are response outliers in practice, the Wqi themselves may be highly skewed, so measuring scale is problematic even using robust scale measures. However, outlier detection rules can be used, based on adjusted boxplot rules, which include the interquartile range as an implicit scale measure (Hubert and Vandervieren, 2008; Carling, 2000; Verardi and Vermandele, 2016). One may also monitor transformed Wqi (e.g. log or square root), namely Uqi = log(Wqi), or transformed ratios of Wqi to the exponential mean 1/δq, Uqi = log(Wqiδq), and consider thresholds in standardised Uqi for detecting outliers. 
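In implementation terms, the specification in (1) maps directly onto BUGS/JAGS syntax. The sketch below is illustrative only (not the author's code): it is written for a single quantile q, passed as data together with y, X, n and p, and the W[i] can be monitored as the outlier indicators just described.

```r
# Minimal JAGS sketch of the hierarchical quantile PLN model (1) for one quantile q.
# dnorm() in JAGS is parameterised by precision, hence q(1-q)/(2*W*delta).
hqrpln_model <- "
model {
  for (i in 1:n) {
    y[i]  ~ dpois(mu[i])
    mu[i] <- exp(nu[i])
    nu[i] ~ dnorm(beta0 + inprod(X[i, ], beta) + xi * W[i],
                  q * (1 - q) / (2 * W[i] * delta))
    W[i]  ~ dexp(delta)              # outlier indicator, monitored per observation
  }
  xi    <- (1 - 2 * q) / (q * (1 - q))
  beta0 ~ dnorm(0, 0.001)            # diffuse normal prior (precision 0.001)
  for (j in 1:p) { beta[j] ~ dnorm(0, 0.001) }
  delta ~ dgamma(1, 0.001)           # gamma prior with shape 1 and rate 0.001
}"
```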
Another option is to derive exceedance probabilities against the exponential mean, Pr(Wqi > 1/δq|Y), or based on pairwise comparison against other Wqj (j ≠ i), namely \( \frac{1}{n-1}\sum_{j\ne i}\Pr\left(W_{qi}>W_{qj}|Y\right) \) (Santos and Bolfarine, 2016), with higher exceedance probabilities characterising observations disparate from the majority of observations. The pairwise comparison measure can be obtained by monitoring ranks of sampled Wqi (e.g. using the rank command in rjags). Such exceedance probabilities are analogous to those used in disease mapping applications to detect high relative disease risk (Richardson et al. 2004). Santos and Bolfarine (2016) also mention outlier detection based on a Kullback-Leibler distance measure between estimated densities of each Wqi, though this would be computationally intensive for large samples. A residual distance measure to detect outliers is mentioned by Benites et al. (2015), which for linear quantile regression has the form \( d_{qi}=\frac{\left|y_{i}-\beta_{0q}-X_{i}\beta_{q}\right|}{\delta_{q}} \). For the application here the equivalent measure is \( d_{qi}=\frac{\left|\nu_{qi}-\beta_{0q}-X_{i}\beta_{q}\right|}{\delta_{q}} \). Benites et al. (2015) detect outliers by this measure using graphical methods, but these become infeasible for large samples, and instead one may consider standardised dqi to detect outliers. If there are offsets Oi (expected counts, times or populations exposed, etc.), then these can be included as $$ \mu_{qi}=O_{i}\exp(\nu_{qi}), $$ $$ \nu_{qi}\sim N\left(\beta_{0q}+X_{i}\beta_{q}+\xi_{q}W_{qi},\frac{2W_{qi}\delta_{q}}{q(1-q)}\right). $$ In health applications, offsets are typically expected health events. In this case, ρqi = exp(β0q + Xiβq) can be obtained as predicted relative risks specific to quantile. If ∑yi = ∑Oi then predicted relative risks will be centred around 1, and elevated relative risks will be associated with a high probability that relative risks exceed 1, even at low quantiles. Bayesian count regression often focuses on assessing cases with elevated mean incidence or mean relative risk. Under the quantile regression (1), extreme conditional quantiles of incidence (e.g. 5 and 95%) may be estimated from quantile specific regression, which allows covariate impacts to vary by quantile. The ability to examine central and extreme quantiles in relation to particular covariate combinations may be important for policy formulation or assessment (Reich et al. 2011). One may also focus on a lower quantile (such as 2.5 or 5%), and identify probabilities of excess incidence or relative risk at this quantile. This type of issue may occur in other applications (e.g. financial); for example, Takeuchi et al. (2006) mention that "For risk management and regulatory reporting purposes, a bank may need to estimate a lower bound on the changes in the value of its portfolio which will hold with high probability".
5. Simulated data example This analysis demonstrates that the HQRPLN method reproduces the underlying regression parameters for overdispersed count data subject to contamination, and also accurately identifies response outliers. Data generation follows the approach set out by Aeberhard et al. (2014), which is concerned with robust estimation for negative binomial regression, except that a larger sample size of n = 10,000 is taken. Two predictors are assumed, X1, with values generated as standard normal, the other X2 as binary (=1 for half the sample, = 0 for other cases). Then with generating ("true") regression parameters β = (0.5, 0.8, −0.4), negative binomial means μ = exp(β0 + β1X1 + β2X2), and overdispersion parameter σ = 0.7, the counts are generated in R as y <- rnbinom(n = n, mu = mu, size = 1/sigma). The mean count so generated is 1.9. The large sample size ensures that the regression parameters for the actual sample data are close to those used to generate the data, whereas for a smaller sample size (n = 200) the parameters for the sampled datasets fluctuate much more widely around the true values; this may be verified graphically using the R code (and negative binomial regression) in Additional file 1. Contamination is achieved by adding a constant C to the uncontaminated observations for a 5% random sub-sample taken without replacement (i.e. 500 observations out of the total sample). We consider six contamination settings, namely C = 5, C = 10, C = 15, C = 20, C = 25 and C = 30. The R code is provided in Additional file 1. We compare Bayesian regression estimates for the contaminated samples according to (a) negative binomial regression; (b) a standard Poisson log-normal (i.e. conditional mean estimation); and (c) a median regression under the HQRPLN representation. Comparisons are included with the classical methods: the Aeberhard et al. (2014) method, using the glmrob.nb.r code from https://github.com/williamaeberhard/glmrob.nb; the Chambers et al. (2014) method, using the glm.mq.nb option in the R package CountMQ (note that this does not provide confidence intervals); and the Machado and Santos Silva (2005) method, using lqm.counts (https://www.rdocumentation.org/packages/lqmm/versions/1.5.3/topics/lqm.counts). Bayesian models are estimated using jagsUI in R. Normal priors with mean 0 and variance 1000 are assumed on regression parameters, and gamma priors, with shape 1 and inverse scale (rate) 0.001, are assumed on precision parameters and the HQRPLN parameter δ. Two chains are used, with convergence assessed using Brooks-Gelman-Rubin scale reduction factors (Brooks and Gelman, 1998). Sensitivity to the prior on δ may be an issue. We consider, in addition to the gamma prior, a uniform prior on δ, δ ~ U(0, 10000), and a parameterisation in terms of the exponential mean ϕ = 1/δ. Thus Wi ~ Exp(1/ϕ), log(ϕ) = ω0, with ω0 assigned a diffuse normal N(0,1000) prior. It may be noted, in more general application terms, that the exponential rate prior potentially extends to a regression approach to explaining variation in Wi, with observation-specific ϕi. Table 1 contains the resulting regression parameter estimates. It can be seen that negative binomial regression is most vitiated by response outliers, this distortion increasing with C. The Chambers et al. (2014) estimates are more robust than negative binomial regression, but not as robust as the Aeberhard et al. (2014) estimates or the HQRPLN estimates.
Standard Poisson log-normal regression is more robust than the negative binomial regression and the Chambers et al. (2014) estimates, but outperformed by the HQRPLN method. The HQRPLN method provides estimates closer to the true values as compared to the Chambers et al. (2014) method, and the Machados and Santos Silva (2005) method, which tends to overestimate β1. All credible intervals from hierarchical median PLN regression include the true regression parameter values except for β1 under C = 5, and this method is otherwise comparable to Aeberhard et al. (2014). Table 2 shows that posterior estimates of δ are very similar for different priors; there is no appreciable sensitivity. Table 1 Regression parameter estimates (Means and 95% CrI or 95% CI) by estimation method, contaminated and uncontaminated dataa Table 2 Estimated δ under different priors and contamination levels One advantage of the hierarchical median PLN regression is in outlier detection. This can be assessed in terms of the concordance between the contamination sample and observations identified as having elevated Wi. In a real application, of course, the outliers would not be known in advance. However, in the case of simulation we can evaluate classification accuracy using established indices (Ruopp et al. 2008). We use five different outlier detection methods, reporting their sensitivity, specificity and the corresponding Youden index (Ruopp et al. 2008): pairwise comparison exceedance rates (Santos and Bolfarine, 2016); exceedance rates against the exponential mean; a standardised version of the residual distance measure (Benites et al. 2015); standardised Ui = log(Wiδ); and the modified boxplot rule (Hubert and Vandervieren, 2008) applied to posterior mean Wi. We consider the contamination level C = 20 in particular. The upper part of Table 3 sets out results if the threshold for exceedance rates is set at 0.7, and that for standardised measures set at 2. The latter threshold is a lower outlier threshold for standardised measures mentioned by Wilcox (2016). Outlier detection performance at these thresholds is slightly higher for the standardised Ui, with a 98.6% sensitivity and 97% specificity. In general, setting outlier thresholds higher reduces sensitivity while raising specificity. Setting the exceedance probability threshold at 0.8, and the standardised measures threshold at 3 (Wilcox, 2016, page 45), reduces performance for the first four measures (Table 3, lower panel). The settings for the boxplot rule are unchanged, following Hubert and Vandervieren (2008) guidelines, and it now has better performance. Table 3 Outlier detection by different methods (C = 20) and different cutpoints 6. Case study application The case study dataset consists of counts yi of ambulatory care sensitive (ACS) emergency admissions for n = 7518 GP practices in England's National Health Service (NHS) during 2014/15. The data are from the Care Quality Commission (source: https://www.cqc.org.uk/content/monitoring-gp-practices). The GP practices are arranged into four regions, responsible for planning and commissioning health care. Unplanned emergency admissions, including those for care sensitive conditions rated as potentially avoidable or preventable (Tian et al. 2012), show wide socioeconomic inequalities, being higher from more deprived areas (Sheringham et al. 2016). However, effectiveness of NHS agencies in tackling these inequalities varies considerably. 
One way proposed to measure inequality is the slope index of the outcome on a measure of social deprivation (Regidor, 2004). The analysis here uses GP practices as the observational unit and considers impacts of deprivation on care sensitive emergency admissions, and regional differences in that impact. Predictors are a GP practice deprivation score, the Index of Multiple Deprivation or IMD (DCLG, 2015), and two measures of perceived access to care for each GP practice. Access to primary care has been shown to reduce emergency hospital attendances and admissions (Dolton and Pathania, 2016). The access indicators are from an annual survey of patent views regarding their primary care (the GP Patient Survey) and are proportions of patients 'very satisfied' or 'fairly satisfied' with their GP practice opening hours, and proportions of patients describing the overall experience of their GP surgery as fairly good or very good. These predictors are denoted IMD, SatHrs and OvExp for short. Two of these predictors are already on a [0,1] scale. In order that variations in the strength of impacts of predictors can be straightforwardly compared, the GP practice deprivation score (with a range from 3.2 to 66.5) is transformed to a [0,1] scale using a linear transformation, \( \frac{\mathrm{IMD}-\min \left(\mathrm{IMD}\right)}{\max \left(\mathrm{IMD}\right)-\min \left(\mathrm{IMD}\right)} \). A region indicator (regi) of the GP practice location and affiliation is also included: 1 = London (reference), 2 = Midlands and East of England, 3 = North Of England, 4 = South of England (outside London). The analysis involves Poisson lognormal regression, including the scale mixture ALD. Two models are compared. One assumes a common effect of deprivation on care sensitive emergencies. Thus let X1 = IMD, X2 = SatHrs, X3 = OvExp, and Oi denote expected admissions, based on England wide ACS rates by age. The first model assumes for quantiles q = 1,.., Q $$ {\upnu}_{\mathrm{q}\mathrm{i}}\sim \mathrm{N}\left({\upeta}_{\mathrm{q}\mathrm{i}}+{\upxi}_{\mathrm{q}}{\mathrm{W}}_{\mathrm{q}\mathrm{i}},\frac{2{\mathrm{W}}_{\mathrm{q}\mathrm{i}}{\updelta}_{\mathrm{q}}}{\mathrm{q}\left(1-\mathrm{q}\right)}\right), $$ $$ {\upeta}_{\mathrm{qi}}={\upbeta}_{0\mathrm{q}}+{\mathrm{X}}_{\mathrm{i}1}{\upbeta}_{1\mathrm{q}}+{\mathrm{X}}_{\mathrm{i}2}{\upbeta}_{2\mathrm{q}}+{\mathrm{X}}_{\mathrm{i}3}{\upbeta}_{3\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=2\right){\upbeta}_{4\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=3\right){\upbeta}_{5\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=4\right){\upbeta}_{6\mathrm{q}}. $$ The second model allows the impact of deprivation to vary by region. Thus $$ {\upeta}_{\mathrm{qi}}={\upbeta}_{0\mathrm{q}}+{\mathrm{X}}_{\mathrm{i}1}{\upgamma}_{\mathrm{i}}+{\mathrm{X}}_{\mathrm{i}2}{\upbeta}_{2\mathrm{q}}+{\mathrm{X}}_{\mathrm{i}3}{\upbeta}_{3\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=2\right){\upbeta}_{4\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=3\right){\upbeta}_{5\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=4\right){\upbeta}_{6\mathrm{q}}, $$ $$ {\upgamma}_{\mathrm{i}}={\upbeta}_{1\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=2\right){\upbeta}_{7\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=3\right){\upbeta}_{8\mathrm{q}}+\mathrm{I}\left({\mathrm{reg}}_{\mathrm{i}}=4\right){\upbeta}_{9\mathrm{q}}. 
$$ This model is relevant to assessing whether regions vary in their effectiveness in tackling inequality in ambulatory sensitive admission rates: higher γi values indicate higher socio-economically based inequalities in such admissions. These models are compared using quantile regression over the 0.05, 0.50, and 0.95 (i.e. Q = 3) quantiles, with estimation simultaneous across the three quantiles. Regression analysis is carried out in WINBUGS14 (Lunn et al. 2000). Inferences are based on the second halves of 20,000 two chain runs with convergence assessed using Brooks-Gelman-Rubin diagnostics (Brooks and Gelman, 1998). Normal N (0,100) priors are adopted on β parameters, and gamma priors Ga (1,0.001) with shape 1 and rate 0.001 on scale parameters δq. Model fit is assessed using the widely applicable information criterion (WAIC) (Watanabe, 2010). The WAIC involves two elements: a log pointwise predictive density (lpd) and a complexity estimate (pwaic), with the WAIC obtained as −2(lpd-pwaic). Posterior predictive model checks (Berkhof et al., 2000) are based on sampling replicate data yrep , q. First, predictive coverage is assessed by the proportion of observations contained within the 95% credible intervals of yrep , qi (Gelfand, 1996). Second, denoting θ as model parameters, posterior predictive p-tests are obtained by evaluating specified test statistics, T(yrep , q|θ) and T(y|θ), and obtaining probabilities Pr[T(yrep , q|θ) > T(y|θ)]. The test statistics are the likelihood ratio statistic \( \sum \limits_{\mathrm{i}}{\mathrm{y}}_{\mathrm{i}}\log \left({\mathrm{y}}_{\mathrm{i}}/{\upmu}_{\mathrm{qi}}\right) \); the maximum yi; and the total of y, \( \sum \limits_{\mathrm{i}}{\mathrm{y}}_{\mathrm{i}} \). 7. Case study results Table 4 shows an advantage in fit for the second model for two of the three quantiles, although posterior predictive checks for both are satisfactory. Table 5 shows the regression parameters under the two models. Monotonicity is preserved using the more exacting criterion mentioned in section 3, namely that \( \sum \limits_{\mathrm{i}}{\mathrm{m}}_{\mathrm{i}\mathrm{t}}=\mathrm{n} \) for all iterations. Table 4 Fit and model checks Table 5 Estimated regression coefficients Both models show that the deprivation of the GP practice population is the strongest predictor of ambulatory sensitive admission levels. Higher levels of positive experience with the primary care provider (OvExp) have significant negative effects, but smaller impacts than those of deprivation. The effects of satisfaction with opening hours are comparatively small. The differential intercepts show that under both models, and for comparable predictor levels, London and the South have lower levels of ambulatory sensitive admissions than the other two regions, with the North having the highest differential against London and the South. Results for model 2 show show significantly lower differential slope effects at q = 0.05 for the Midlands, North and South regions (represented in parameters β7q , β8q and β9q). Table 6 shows the resulting overall deprivation slopes by region under model 2, with steepest slopes in London at q = 0.05, and in the South and Midlands at q = 0.95, and generally shallower slopes in the North. Table 6 Estimated deprivation slopes by region, model 2 There are different ways to represent the impacts of parameter estimates on estimated risks of ambulatory sensitive emergencies, and identifying priorities for intervention. 
One can demonstrate how relative risks vary by deprivation category and region, since differences in the slope index by region (as identified in model 2) imply varying gradients in relative risk over deprivation categories. Such varying gradients can be interpreted as variations in socioeconomic inequality (Regidor, 2004; Sheringham et al. 2016). We accordingly disaggregate the 7518 GP practices by their region of location, and according to the England-wide deprivation decile of the practice (that is into 40 subcells). Even in the more affluent South, there are some practices in the highest decile (most deprived practices). Table 7 shows posterior summaries, from the median regression (q = 0.5), regarding average predicted relative risks (RR) for GP practices by subcell. These average RR take into account practice level covariate profiles, and may be affected by satisfaction rates as well as by practice deprivation. It can be seen that, for comparable deprivation levels, GP practice relative risks of ambulatory sensitive emergencies are highest in the North. A risk gradient across ascending deprivation applies across all regions, but steepens in the South at the highest deprivation levels. Figure 1 represents these trends graphically. Table 7 Estimated relative risks (median quantile regression) for ambulatory sensitive emergency admission by region and deprivation decile of gp practice Hierarchical median regression. Predicted ACS relative risks by region and deprivation decile A second approach to assessing health care implications is to stipulate particular covariate combinations, and ascertain how these translate into varying relative risks according to region and quantile. To illustrate this, we set transformed practice deprivation to 0.5815 (corresponding to a high IMD score of 40 in the original scale), and the general satisfaction and opening hours satisfaction indicators at their mean values across England. In particular, we focus on the probability, by region, that the lower 0.05 quantile for relative risk exceeds 1. Table 8 shows that relative risks for all three quantiles significantly exceed 1 for the North and Midlands regions, but for the 0.05 quantile, the probability that relative risk exceeds 1 in London is inconclusive (at 0.66), and in the South is close to zero. Table 8 Predicted relative risks (RR) by region, specified values predictor combination Slope indices of inequality are measures of health care effectiveness at aggregate level across a set of GP practices. One may also be interested in identifying individual GP practices with elevated ambulatory sensitive admission levels, even after taking account of deprivation and other influences. Thus we can identify those practices with high outlier indicators, Wqi, and in particular those with high ACS admission totals yi, after taking account of the covariates and expected events. Table 9 shows response and covariate details for the GP practices with the 20 highest posterior mean W0.5 , i from the model 2 median regression. Also shown are values for the outlier indicators included in the simulation analysis (section 5). Thus practices 1, 2 and 19 in Table 9 have high yi (and high maximum likelihood relative risks ratios yi/Oi) even after taking account of covariate values, including high deprivation. Practices 6 and 7 have high yi despite average or below average deprivation. Practices 8, 9, 13, 17 and 19 have low yi despite high deprivation. Most other extreme outliers in Table 9 have unduly low yi. 
Table 9 Leading GP practice outliers, median regression, ambulatory sensitive emergency admissionsa There are 13 outlier practices (the first 13 practices in Table 9) according to the adjusted boxplot method of Hubert and Vandervieren (2008), which is applied to posterior mean W0.5 , i. Other methods provide less restrictive definitions. A cut off of 3 for standardised Uqi = log(Wqiδq) (cf. Table 3) leads to 42 observations being classed as outliers, and a cut off of 3 for standardised residual distance measures leads to 114 outliers. As a sensitivity analysis, alternative priors were assumed on the exponential scale parameters δq. Instead of the gamma Ga (1,0.001) prior, a uniform prior on δq is considered, δq ~ U(0, 10000), and also parameterisation in terms of exponential means ϕq = 1/δq. Thus Wqi ∼ exp(1/ϕq), log(ϕq) = ω0q, with ω0q assigned a diffuse normal N (0,1000) prior. Table 10 shows that the posterior δq are very similar under the different priors, and other inferences are not affected. Table 10 Estimated δq under different priors (Model 2) In this paper, a model for quantile regression within a hierarchical Poisson lognormal framework is proposed for overdispersed count responses. This technique has the advantage that a profile of incidence rates or relative risks across quantiles can be obtained, taking account of quantile specific covariate effects, and including estimates of uncertainty (e.g. the uncertainty attaching to lower and upper relative risk quantiles). Among methodological extensions that may be included are varying Wqi according to case specific covariates, and covariate selection. A simulation in R using known regression coefficients shows the technique accurately estimates the true regression coefficients when the data are contaminated by outliers, with performance comparable to that of Aeberhard et al. (2014). The technique also accurately identifies the sample observations subject to contamination. A real application focuses on estimating central, low and high quantile regressions for levels of ambulatory sensitive emergency admissions across English GP practices. Practice deprivation is the strongest predictor of such emergency admissions, and the deprivation effect varies by quantile under the second model considered for these data. In particular, using stipulated values for covariates, it was shown that relative risks for all quantiles significantly exceed 1 for the Midlands-East and North, but for the 0.05 quantile, the probabilities that relative risk exceeds 1 in London and the South are zero or inconclusive. Outlier GP practices (in the response space) were also identified. The methodology used in the paper may have utility in other health applications where institutional or regional variations in health outcomes are of policy concern. Aeberhard, W, Cantoni, E, Heritier, S: Robust inference in the negative binomial regression model with an application to falls data. Biometrics. 70(4), 920–931 (2014) Agostinelli, C, Greco L: A weighted strategy to handle likelihood uncertainty in Bayesian inference. Comput. Stat. 28(1), 319-339 (2013) Andersen, R: Modern methods for robust regression. Sage Publishing (2008) Benites, L, Lachos, V, Vilca, F: Case-deletion diagnostics for Quantile regression using the asymmetric Laplace distribution. arXiv preprint arXiv. 1509, 05099 (2015) Benoit, D, Van den Poel, D: Binary quantile regression: a Bayesian approach based on the asymmetric Laplace distribution. J. Appl. Econometrics. 
27(7), 1174–1188 (2012) Berkhof, J, Van Mechelen, I, Hoijtink, H: Posterior predictive checks: principles and discussion. Comput. Stat. 15(3), 337–354 (2000) Bondell, H, Reich, B, Wang, H: Noncrossing quantile regression curve estimation. Biometrika. 97(4), 825–838 (2010) Brooks, S, Gelman, A: General methods for monitoring convergence of iterative simulations. J. Comput. Graphical Stat. 7(4), 434–455 (1998) Caminal, J, Starfield, B, Sánchez, E, Casanova, C, Morales, M: The role of primary care in preventing ambulatory care sensitive conditions. Eur. J. Public Health. 14(3), 246–251 (2004) Carling, K: Resistant outlier rules and the non-Gaussian case. Comput. Stat. Data Anal. 33(3), 249–258 (2000) Chambers, R, Dreassi, E, Salvati, N: Disease mapping via negative binomial regression M-quantiles. Stat. Med. 33(27), 4805–4824 (2014) Connolly, S, Dornelas, M, Bellwood, D, Hughes, T: Testing species abundance models: a new bootstrap approach applied to indo-Pacific coral reefs. Ecology. 90(11), 3138–3149 (2009) Connolly, S, Thibaut, L: A comparative analysis of alternative approaches to fitting species abundance models. J. Plant Ecol. 5, 32–45 (2012) Department of Communities and Local Government (DCLG): The English indices of deprivation 2015. Office of National Statistics and DCLG, London (2015) Dolton, P, Pathania, V: Can increased primary care access reduce demand for emergency care? Evidence from England's 7-day GP opening. J. Health Econ. 49, 193–208 (2016) Gelfand, A: Model determination using sampling-based methods, In: Gilks, PW, Richardson, S, Spiegelhalter, D (eds.) Markov Chain Monte Carlo. Chapman & Hall/CRC, Boca Raton (1996) Greco, L, Racugno, W, Ventura, L: Robust likelihood functions in Bayesian analysis. J. Stat. Plan. Inf. 138, 1258–1270 (2008) Hilbe, J: Negative Binomial Regression, 2nd edition. Cambridge University Press, Cambridge (2011) Huber, P: Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1(5), 799–821 (1973) Hubert, M, Vandervieren, E: An adjusted boxplot for skewed distributions. Comput. Stat. Data Anal. 52(12), 5186–5201 (2008) Koenker, R, Hallock, K: Quantile regression. J. Econ. Perspect. 15, 143–156 (2001) Lunn, D, Thomas, A, Best, N, Spiegelhalter, D: WinBUGS -- a Bayesian modelling framework: concepts, structure, and extensibility. Stat. Comput. 10, 325–337 (2000) Machado, J, Santos Silva, J: Quantiles for counts. J. Am. Stat. Assoc. 100(472), 1226–1237 (2005) Miranda-Moreno, L, Fu, L, Saccomano, F, Labbe, A: Alternative risk model for ranking locations for safety improvement. Transportation Res. Record. 1908, 1–8 (2005) Moreno, E, Pericchi, L: Bayesian robustness for hierarchical ɛ-contamination models. J. Stat. Plan. Inf. 37, 159–168 (1993) Regidor, E: Measures of health inequalities: part 2. J. Epidemiol. Community Health. 58(11), 900–903 (2004) Reich, B, Fuentes, M, Dunson, D: Bayesian spatial quantile regression. J. Am. Stat. Assoc. 106(493), 6–20 (2011) Richardson, S, Thomson, A, Best, N, Elliott, P: Interpreting posterior relative risk estimates in disease-mapping studies. Environ. Health Perspect. 112, 1016–1025 (2004) Ruopp, M, Perkins, N, Whitcomb, B, Schisterman, E: Youden index and optimal cut-point estimated from observations affected by a lower limit of detection. Biom. J. 50(3), 419–430 (2008) Santos, B, Bolfarine, H: On Bayesian quantile regression and outliers. arXiv preprint arXiv. 1601, 07344 (2016) Sheringham, J, Asaria, M, Barratt, H, Raine, R, Cookson, R: Are some areas more equal than others? 
Socioeconomic inequality in potentially avoidable emergency hospital admissions within English local authority areas. J. Health Serv. Res. Policy. 22(2), 83–90 (2016) Sohn, S: A comparative study of four estimators for analyzing the random event rate of the Poisson process. J. Stat. Comput. Simul. 49(1–2), 1–10 (1994) Takeuchi, I, Le, Q, Sears, T, Smola, A: Nonparametric quantile estimation. J. Mach. Learn. Res. 7, 1231–1264 (2006) Tian, Y, Dixon, A, Gao, H: Emergency Hospital Admissions for Ambulatory Care-Sensitive Conditions: Identifying the Potential for Reductions. King's Fund, London (2012). https://www.kingsfund.org.uk/ Tsionas, E: Bayesian quantile inference. J. Stat. Comput. Simul. 73, 659–674 (2003) Verardi, V, Vermandele, C: Outlier identification for skewed and/or heavy-tailed unimodal multivariate distributions. J. de la Société Française de Statistique. 157(2), 90–114 (2016) Wang, C, Blei, D: A general method for robust Bayesian modeling. Bayesian Anal (forthcoming). (2017) Watanabe, S: Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. J. Mach. Learn. Res. 11, 3571–3594 (2010) Wilcox, R: Understanding and Applying Basic Statistical Methods Using R. John Wiley, Hoboken (2016) Wu, Y, Liu, Y: Stepwise multiple quantile regression estimation using non-crossing constraints. Stat. Interface. 2, 299–310 (2009) Yu, K, Moyeed, R: Bayesian quantile regression. Stat. Prob. Lett. 54(4), 437–447 (2001) We appreciate for the reviewers' insightful comments, which helped to improve the paper. There are no funding sources. Queen Mary University of London, London, UK Peter Congdon Search for Peter Congdon in: PC conceived of the method, performed the statistical analysis, and drafted the manuscript. Correspondence to Peter Congdon. The author declares that he has no competing interests. Additional file Additional file 1: Appendix 1. Sample Size for Simulations. Appendix 2. R Code for Simulations. (DOCX 23 kb) Congdon, P. Quantile regression for overdispersed count data: a hierarchical method. J Stat Distrib App 4, 18 (2017) doi:10.1186/s40488-017-0073-4 Quantile regression Overdispersion Count data Ambulatory sensitive Median regression
CommonCrawl
\begin{document} \title{Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning} \begin{abstract} Provably efficient Model-Based Reinforcement Learning (MBRL) based on optimism or posterior sampling (PSRL) is ensured to attain the global optimality asymptotically by introducing the complexity measure of the model. However, the complexity might grow exponentially for the simplest nonlinear models, where global convergence is impossible within finite iterations. When the model suffers a large generalization error, which is quantitatively measured by the model complexity, the uncertainty can be large. The sampled model that current policy is greedily optimized upon will thus be unsettled, resulting in aggressive policy updates and over-exploration. In this work, we propose \textit{Conservative Dual Policy Optimization} (CDPO) that involves a \textit{Referential Update} and a \textit{Conservative Update}. The policy is first optimized under a reference model, which imitates the mechanism of PSRL while offering more stability. A conservative range of randomness is guaranteed by maximizing the expectation of model value. Without harmful sampling procedures, CDPO can still achieve the same regret as PSRL. More importantly, CDPO enjoys monotonic policy improvement and global optimality simultaneously. Empirical results also validate the exploration efficiency of CDPO. \end{abstract} \section{Introduction} Model-Based Reinforcement Learning (MBRL) involves acquiring a model by interacting with the environment and learning to make the optimal decision using the model \citep{silver2017mastering,levine2016end}. MBRL is appealing due to its significantly reduced sample complexity compared to its model-free counterparts. However, greedy model exploitation that assumes the model is sufficiently accurate lacks guarantees for global optimality. The policies can be suboptimal and get stuck at local maxima even in simple tasks \citep{curi2020efficient}. As such, several provably-efficient MBRL algorithms have been proposed. Based on the principle of \textit{optimism in the face of uncertainty} (OFU) \citep{strehl2005theoretical,russo2013eluder,curi2020efficient}, OFU-RL achieves the global optimality by ensuring that the optimistically biased value is close to the real value in the long run. Based on Thompson Sampling \citep{thompson1933likelihood}, Posterior Sampling RL (PSRL) \citep{strens2000bayesian,osband2013more,osband2014model} explores by greedily optimizing the policy in an MDP which is sampled from the posterior distribution over MDPs. Beyond finite MDPs, to obtain a general bound that permits sample efficiency in various cases, we need to introduce additional complexity measure. For example, \citep{russo2013eluder,osband2014model} provide an $\Tilde{O}(\sqrt{d_E T})$ regret for both OFU and PSRL with eluder dimension $d_E$ capturing how effectively the model generalizes. However, it is recently shown \citep{dong2021provable,li2021eluder} that the eluder dimension for even the simplest nonlinear models cannot be polynomially bounded. The effectiveness of the algorithms will thus be crippled. The underlying reasons for such ineffectiveness are the aggressive policy updates and the over-exploration issue. Specifically, when a nonlinear model is used to fit complex transition functions, its generalizability will be poor compared to simple linear problems. 
If a random model is selected from the large hypothesis, e.g., optimistically chosen or sampled from the posterior, it is ``unsettled". In other words, the selected model can change dramatically between successive iterations. Policy updates under this model will also be aggressive and thus cause value degradation. What's worse, large epistemic uncertainty results in an unrealistic model, which drives agents for uninformative exploration. An exploration step can only eliminate an exponentially small portion of the hypothesis. In this work, we present \textit{Conservative Dual Policy Optimization} (CDPO), a simple yet provable MBRL algorithm. As the sampling process in PSRL harms policy updates due to the unsettled model during training, we propose the \textit{Referential Update} that greedily optimizes an intermediate policy under a \textit{reference model}. It mimics the sampling-then-optimization procedure in PSRL but offers more stability since we are free to set a steady reference model. We show that even without a sampling procedure, CDPO can match the expected regret of PSRL up to constant factors for any proper reference model, e.g., the least squares estimate where the confidence set is centered at. The \textit{Conservative Update} step then follows to encourage exploration within a reasonable range. Specifically, the objective of a reactive policy is to maximize the \textit{expectation} of model value, instead of a single model’s value. These two steps are performed in an iterative manner in CDPO. Theoretically, we show the statistical equivalence between CDPO and PSRL with the same order of expected regret. Additionally, we give the iterative policy improvement bound of CDPO, which guarantees monotonic improvement under mild conditions. We also establish the sublinear regret of CDPO, which permits its global optimality equipped with any model function class that has a bounded complexity measure. To our knowledge, the proposed framework is the first that \textit{simultaneously} enjoys global optimality and iterative policy improvement. Experimental results verify the existence of the over-exploration issue and demonstrate the practical benefit of CDPO. \section{Background} \subsection{Model-Based Reinforcement Learning} \label{bg_ml} We consider the problem of learning to optimize an infinite-horizon $\gamma$-discounted Markov Decision Process (MDP) over repeated episodes of interaction. Denote the state space and action space as $\mathcal{S}$ and $\mathcal{A}$, respectively. When taking action $a\in\mathcal{A}$ at state $s\in\mathcal{S}$, the agent receives reward $r(s,a)$ and the environment transits into a new state according to probability $s'\sim f^*(\cdot| s,a)$. Here, $f^*$ is a dirac measure for deterministic dynamics and is a probability distribution for probabilistic dynamics. In model-based RL, the true dynamical model $f^*$ is unknown and needs to be learned using the collected data through episodic (or iterative) interaction. The history data up to iteration $t$ then forms $\mathcal{H}_t=\{\left\{s_{h,i},a_{h,i},s_{h+1,i}\right\}_{h=0}^{H-1}\}_{i=1}^{t-1}$, where $H$ is the actual timesteps agents run in an episode. The posterior distribution of the dynamics model is estimated as $\phi(\cdot|\mathcal{H}_t)$. Alternatively, the frequentist model of the mean and uncertainty can also be estimated. 
Specifically, consider the model function class $\mathcal{F}=\{f: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}\}$ with size $|\mathcal{F}|$, which contains the real model $f^*\in\mathcal{F}$. The confidence set (or model hypothesis set) $\mathcal{F}_t \subset \mathcal{F}$ is introduced to represent the range of dynamics that is statistically plausible \citep{russo2013eluder,osband2014model,curi2020efficient}. To ensure that $f^*\in\mathcal{F}_t$ with high probability, one way is to construct the confidence set as $\mathcal{F}_t := \{f\in\mathcal{F}\,|\, \lVert f - \hat{f}_t^{LS} \rVert_{2,E_t} \leq \sqrt{\beta_t}\}$. Here, $\beta_t$ is an appropriately chosen confidence parameter (via concentration inequality), the cumulative empirical 2-norm is defined by $\lVert g \rVert_{2,E_t}^2 := \sum_{i=1}^{t-1} \lVert g(x_i) \rVert_2^2$. The least squares estimate is \#\label{ls_def} \hat{f}_t^{LS} := \argmin_{f\in\mathcal{F}}\sum_{(s, a, s')\in\mathcal{H}_t}\lVert f(s, a) - s' \rVert_2^2. \# Denote the state and state-action value function associated with $\pi$ on model $f$ by $V^{f}_{\pi}: \mathcal{S}\rightarrow \mathbb{R}$ and $Q^{f}_{\pi}: \mathcal{S}\times\mathcal{A}\rightarrow \mathbb{R}$, respectively, which are defined as \$ V^{f}_{\pi}(s) = \EE\biggl[\sum_{h=0}^{\infty} \gamma^h r(s_h, a_h) \,\bigg|\, s_0=s, \pi, f\biggr], \,Q^{f}_{\pi}(s, a) = \EE\biggl[\sum_{h=0}^{\infty} \gamma^h r(s_h, a_h) \,\bigg|\, s_0=s, a_0=a, \pi, f\biggr]. \$ The objective of RL is to learn a policy $\pi^*=\argmax_{\pi}J(\pi)$ that maximizes the expected return $J(\pi)$. Denote the initial state distribution as $\zeta$. Under policy $\pi$, the state visitation measure $\nu_\pi(s)$ over $\mathcal{S}$ and the state-action visitation measure $\rho_\pi(s, a)$ over $\mathcal{S}\times\mathcal{A}$ in the true MDP are defined as \#\label{visitation} \nu_\pi(s) = (1 - \gamma) \cdot\sum_{h=0}^\infty\gamma^h \cdot\mathbb{P}(s_h = s), \quad \rho_\pi(s, a) = (1 - \gamma) \cdot\sum_{h=0}^\infty\gamma^h \cdot\mathbb{P}(s_h = s, a_h = a), \# where $s_0\sim\zeta$, $a_h\sim\pi(\cdot|s_h)$ and $s_{h+1}\sim f^*(\cdot|s_h, a_h)$. The objective $J(\pi)$ is then \#\label{obj} J(\pi) = \EE_{s_0\sim\zeta}[V^{f^*}_{\pi}(s_0)] = \EE_{(s, a)\sim\rho_\pi}[r(s, a)] \# \subsection{Cumulative Regret and Asymptotic Optimality} A common criterion to evaluate RL algorithms is the cumulative regret, defined as the cumulative performance discrepancy between policy $\pi_t$ at each iteration $t$ and the optimal policy $\pi^*$ over the run of the algorithm. The (cumulative) regret up to iteration $T$ is defined as: \#\label{regret_def} {\rm Regret}(T, \pi, f^*) := \sum_{t=1}^T \int_{s\in\mathcal{S}} \zeta(s) (V^{f^*}_{\pi^*}(s) - V^{f^{*}}_{\pi_t}(s)), \# In the Bayesian view, the model $f^*$, the learning policy $\pi$, and the regret are random variables that must be learned from the gathered data. The Bayesian expected regret is defined as: \#\label{bayesregret_defn} {\rm BayesRegret}(T, \pi, \phi) := \mathbb{E}\left[{\rm Regret}(T, \pi, f^*) \mid f^*\sim\phi \right]. \# One way to prove the asymptotic optimality is to show that the (expected) regret is sublinear in $T$, so that $\pi_t$ converges to $\pi^*$ within sufficient iterations. 
To obtain the regret bound, the \textit{width of confidence set} $\omega_t(s,a)$ is introduced to represent the maximum deviation between any two members in $\mathcal{F}_t$: \#\label{def_omega} \omega_t(s,a) = \sup_{\underline{f},\overline{f}\sim\mathcal{F}_t}\lVert \overline{f}(s,a) - \underline{f}(s,a)\rVert_2. \# \section{Provable Model-Based Reinforcement Learning} \label{analysis} In this section, we analyze the central ideas and limitations of greedy algorithms as well as two popular theoretically justified frameworks: optimistic algorithms and posterior sampling algorithms. \vskip4pt \noindent{\bf Greedy Model Exploitation.} Before introducing provable algorithms, we first analyze greedy model-based algorithms. In this framework, the agent takes actions assuming that the fitted model sufficiently accurately resembles the real MDP. Algorithms that lie in this category can be roughly divided into two groups: model-based planning and model-augmented policy optimization. For instance, Dyna agents \citep{sutton1990integrated,janner2019trust,hafner2019dream} optimize policies using model-free learners with model-generated data. The model can also be exploited in first-order gradient estimators \citep{heess2015learning,deisenroth2011pilco,clavera2020model} or value expansion \citep{feinberg2018model,buckman2018sample}. On the other hand, model-based planning, or model-predictive control (MPC) \citep{morari1999model,nagabandi2018neural}, directly generates optimal action sequences under the model in a receding horizon fashion. However, greedily exploiting the model without \textit{deep exploration} \citep{osband2019deep} will lead to suboptimal performance. The resulting policy can suffer from premature convergence, leaving the potentially high-reward region unexplored. Since the transition data is generated by the agent taking actions in the real MDP, the dual effect \citep{bar1974dual,klenske2016dual} that current action influences \textit{both} the next state and the model uncertainty is not considered by greedy model-based algorithms. \vskip4pt \noindent{\bf Optimism in the Face of Uncertainty.} A common provable exploration mechanism is to adopt the principle of \textit{optimism in the face of uncertainty} (OFU) \citep{strehl2005theoretical,russo2013eluder,curi2020efficient}. With OFU, the agent assigns to its policy an optimistically biased estimate of virtual value by \textit{jointly} optimizing over the policies and models inside the confidence set $\mathcal{F}_t$. At iteration $t$, the OFU-RL policy $\pi_t$ is defined as: \#\label{ofu} \pi_t = \argmax_\pi \max_{f_t\in\mathcal{F}_t} V_\pi^{f_t}. \# Most asymptotic analyses of optimistic RL algorithms can be abstracted as showing two properties: the virtual value $V^f_{\pi}$ is sufficiently high, and it is close to the real value $V^{f^*}_{\pi}$ in the long run. However, in complex environments where the generalizability of nonlinear models is limited, large epistemic uncertainty will result in an unrealistically large optimistic return that drives agents for uninformative exploration. What's worse, such suboptimal exploration steps eliminate only a small portion of the model hypothesis \citep{dong2021provable}, leading to a slow converging process and suboptimal practical performance. 
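To make these two properties concrete, note the standard suboptimality decomposition (included here purely for illustration): whenever $f^*\in\mathcal{F}_t$, the optimistic choice in \eqref{ofu} satisfies $V^{f_t}_{\pi_t} \geq V^{f^*}_{\pi^*}$, and hence \$ V^{f^*}_{\pi^*} - V^{f^*}_{\pi_t} = \big(V^{f^*}_{\pi^*} - V^{f_t}_{\pi_t}\big) + \big(V^{f_t}_{\pi_t} - V^{f^*}_{\pi_t}\big) \leq V^{f_t}_{\pi_t} - V^{f^*}_{\pi_t}, \$ so the regret is controlled by the gap between the optimistic virtual value and the real value of the same policy, a gap that shrinks only as fast as the confidence set $\mathcal{F}_t$ concentrates.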
\vskip4pt \noindent{\bf Posterior Sampling Reinforcement Learning.} An alternative exploration mechanism is based on \textit{Thompson Sampling} (TS) \citep{thompson1933likelihood,russo2017tutorial}, which involves selecting the maximizing action from a statistically plausible set of action values. These values can be associated with an MDP sampled from its posterior distribution, hence the name \textit{posterior sampling for reinforcement learning} (PSRL) \citep{strens2000bayesian,osband2013more,osband2014model}. The algorithm begins with a prior distribution over $f^*$. At each iteration $t$, a model $f_t$ is sampled from the posterior $\phi(\cdot|\mathcal{H}_t)$, and $\pi_t$ is updated to be optimal under $f_t$: \#\label{psrl} f_t \sim \phi(\cdot|\mathcal{H}_t), \ \pi_t = \argmax_\pi V_\pi^{f_t}. \# The insight is to keep away from actions that are unlikely to be optimal in the real MDP. Exploration is guaranteed by the randomness in the sampling procedure. Unfortunately, executing actions that are optimally associated with a single sampled model can cause similar over-exploration issues \citep{russo2017tutorial,russo2018satisficing}. Specifically, an imperfect model sampled from a large hypothesis set can cause aggressive policy updates and value degradation between successive iterations. The degree of suboptimality of the resulting policies depends on the epistemic model uncertainty. Besides, executing $\pi_t$ is not intended to offer performance improvement for follow-up policy learning, but only to narrow down the model uncertainty. However, this elimination procedure will be slow when the model suffers a large generalization error, which is quantified by the model complexity measure below. \vskip4pt \noindent{\bf Complexity Measure and Generalization Bounds.} In RL, we seek to bound the sample complexity of finding a near-optimal policy or estimating an accurate value function. When given access to a generative model (i.e., an abstract sampling model) in finite MDPs, it is known that the (minimax) number of transitions the agent needs to observe can be sublinear in the model size, i.e. smaller than $O(|\mathcal{S}|^2|\mathcal{A}|)$. Beyond finite MDPs, where the number of states is large (or countably or uncountably infinite), we are interested in the learnability or generalization of RL. Unfortunately, agnostic reinforcement learning, which finds the best hypothesis in some given policy, value, or model hypothesis class, is impossible in general: the number of samples needed depends exponentially on the problem horizon \citep{kearns1999approximate}. Rather than relying on structural assumptions, e.g. linear MDPs \citep{yang2020reinforcement,jin2020provably,yang2019sample} or low-rank MDPs \citep{jiang2017contextual,modi2021model}, we focus on generalization bounds that can cover various cases. This can be done with an additional complexity measure, e.g. the eluder dimension \citep{russo2013eluder}, witness rank \citep{sun2019model}, or bilinear rank \citep{du2021bilinear}. By introducing the eluder dimension $d_E$ \citep{russo2013eluder}, previous work \citep{osband2014model,osband2017posterior} established regret $\Tilde{O}(\sqrt{d_E T})$ for both OFU-RL and PSRL. Intuitively, the eluder dimension captures how effectively the model learned from observed data can extrapolate to future data, and permits sample efficiency in various (linear) cases.
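Before examining when this complexity measure breaks down, we make the sampling-then-optimization loop of PSRL in \eqref{psrl} concrete with a minimal Python schematic. The Gaussian likelihood, the finite model class, and the trivial planner standing in for $\argmax_\pi V_\pi^{f_t}$ are all illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Finite model class f_theta(s, a) = theta * s + a with Gaussian noise (toy).
thetas = np.linspace(0.0, 1.5, 31)
true_theta, sigma = 0.9, 0.05
log_post = np.zeros_like(thetas)          # log-posterior, flat prior

def step(theta, s, a):
    return theta * s + a

def plan(theta):
    # Stand-in for "argmax_pi V_pi^{f_t}": steer the state toward s = 1,
    # where the (assumed) reward peaks, under the sampled model.
    return lambda s: np.clip(1.0 - theta * s, -1.0, 1.0)

s = 0.0
for t in range(30):
    # PSRL: sample a model from the posterior, act optimally under it.
    probs = np.exp(log_post - log_post.max()); probs /= probs.sum()
    theta_t = rng.choice(thetas, p=probs)
    policy = plan(theta_t)

    a = policy(s)
    s_next = step(true_theta, s, a) + sigma * rng.normal()

    # Posterior update from the observed transition (Gaussian likelihood).
    log_post += -0.5 * ((step(thetas, s, a) - s_next) / sigma) ** 2
    s = s_next

probs = np.exp(log_post - log_post.max()); probs /= probs.sum()
print("posterior mean of theta:", float(np.sum(probs * thetas)))
\end{verbatim}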
Nevertheless, it is shown in \citep{dong2021provable,li2021eluder} that even the simplest nonlinear models do not have a polynomially-bounded eluder dimension. The following result is from Thm. 5.2 in Dong et al. \citep{dong2021provable} and similar results are also established in \citep{li2021eluder}. \begin{theorem}[Eluder Dimension of Nonlinear Models \citep{dong2021provable}] \label{dong_theorem} The eluder dimension $dim_E(\mathcal{F},\varepsilon)$ (c.f. Definition \ref{elu_def}) of one-layer ReLU neural networks is at least $\Omega(\varepsilon^{-(d-1)})$, where $d$ is the state-action dimension, i.e. $(s, a)\in \mathbb{R}^d$. With more layers, the requirement of ReLU activation can be relaxed. \end{theorem} As a result, additional complexity is hidden in the eluder dimension, e.g. when we choose $\varepsilon = T^{-1}$, regret $\Tilde{O}(\sqrt{d_E T})$ contains $d_E = \Omega(T^{d-1})$ and is no longer sublinear in $T$. In this case, previous provable exploration mechanisms will lose the desired property of global optimality and sample efficiency, which is the underlying reason for the over-exploration issue. \section{Conservative Dual Policy Optimization} When using nonlinear models, e.g. neural networks, the over-exploration issue causes unfavorable performance in practice, in terms of slow convergence and suboptimal asymptotic values. To tackle this challenge, the key is to abandon the sampling process and have guarantees \textit{during} training. In this regard, we propose \textit{Conservative Dual Policy Optimization} (CDPO) that is simple yet provably efficient. By optimizing the policy following two successive update procedures iteratively, CDPO simultaneously enjoys monotonic policy value improvement and global optimality properties. \subsection{CDPO Framework} To begin with, consider the problem of maximizing the expected value, $\pi_{t} = \argmax_{\pi}\EE[V_\pi^{f^*} \,|\, \mathcal{H}_t]$, where $\EE[V^{f^*} \,|\, \mathcal{H}_t]$ denotes the expected values over the posterior. Obviously, we have the expected value improvement guarantee $\EE[V_{\pi_t}^{f^*} \,|\, \mathcal{H}_t] \geq \EE[V_{\pi_{t-1}}^{f^*} \,|\, \mathcal{H}_t]$. We can also perform expected value maximization in a trust-region to guarantee iterative improvement under any $f^*$. However, such updates will lose the desired global convergence guarantee and may get stuck at local maxima even with linear models. For this reason, we propose a dual procedure of policy optimization. \vskip4pt \noindent{\bf Referential Update.} The first update step returns an intermediate policy, denoted as $q_t$. This step is a greedy one in the sense that $q_t$ is optimal with respect to the value of a \textit{single} model $\Tilde{f}_t$, which we call a \textit{reference model}. Selecting a reference model and optimizing a policy w.r.t. it imitates the sampling-optimization procedure of PSRL. We will show in Section \ref{sec::equ} that if we pose the constraint $\Tilde{f}_t\in\mathcal{F}_t$, then CDPO achieves the same expected regret as PSRL, which implies global optimality. More importantly, policy optimization under $\Tilde{f}_t$ is more stable and can avoid the over-exploration issue in PSRL since we are free to set it as a steady reference between successive iterations. For example, we fix the reference model $\Tilde{f}_t$ as the least squares estimate $\hat{f}_t^{LS}$ defined in \eqref{ls_def}, instead of a random model \textit{sampled} from the large hypothesis that causes aggressive policy update. 
This gives us: \#\label{greedy_update} \hspace{-4.0cm}\text{Referential Update (with LS Reference):}\qquad q_t = \argmax_q V_q^{\hat{f}_t^{LS}}. \# \vskip4pt \noindent{\bf Constrained Conservative Update.} The conservative update then follows as the second stage of CDPO, which takes $q_t$ as input and returns the reactive policy $\pi_{t}$: \#\label{conserv_update} \text{Conservative Update:}\,\,\pi_{t} = \argmax_{\pi} \EE\bigl[V_\pi^{f_t} \,\big|\, \mathcal{H}_t\bigr], \,\text{s.t.} \, \EE_{s\sim\nu_{q_t}}\Bigl[D_{\text{TV}}\bigl(\pi_{t}(\cdot|s), q_t(\cdot|s)\bigr)\Bigr] \leq \eta, \# where $D_{\text{TV}}(\cdot, \cdot)$ stands for the total variation distance and $\eta$ is the hyperparameter that characterizes the trust-region constraint and controls the degree of exploration. Compared with OFU-RL and PSRL, the above exploration and policy updates are conservative since the policy maximizes the \textit{expectation} of the model value, instead of a \textit{single} model's value (i.e. the optimistic model in OFU-RL and the sampled model in PSRL). The conservative update \eqref{conserv_update} avoids the pitfalls that arise when the optimistic model or the posterior sampled model suffers from large bias, which leads to aggressive policy updates and over-exploration during training. Notably, the term \textit{conservative} in our work differs from previous use, e.g. Conservative Policy Iteration \citep{kakade2002approximately,schulman2015trust}. While the latter refers to constrained policy updates, ours emphasizes the conservative range of randomness and the reduction of unnecessary over-exploration by shelving the sampling process. In our analysis, we follow previous work \citep{osband2014model,sun2018dual,curi2020efficient,luo2018algorithmic} and assume access to a policy optimization oracle. In practice, the problem of finding an optimal policy under a given model can be approximately solved by the model-based solvers listed below. A more fine-grained analysis can be obtained by applying off-the-shelf results established for policy gradient or MPC to specific policy or model function classes. This, however, is beyond the scope of this paper. \subsection{Practical Algorithm} \small\begin{wrapfigure}{R}{0.52\textwidth} \begin{minipage}{0.52\textwidth} \begin{algorithm}[H] \caption{Practical CDPO Algorithm} \textbf{Input:} Prior $\phi$, model-based policy optimization solver $\texttt{MBPO}(\pi, f, \mathcal{J})$. \begin{algorithmic}[1] \label{alg:cdpo} \FOR{iteration $t=1,...,T$} \STATE $q_t\leftarrow\texttt{MBPO}(\cdot, \hat{f}_{t}^{LS}, \eqref{greedy_update})$ \STATE Sample $N$ models $\{f_{t,n}\}_{n=1}^N$ \STATE $\pi_t\leftarrow\texttt{MBPO}(q_t, \{f_{t,n}\}_{n=1}^N, \eqref{conserv_update})$ \STATE Execute $\pi_{t}$ in the real MDP \STATE Update $ \mathcal{H}_{t+1} = \mathcal{H}_t \cup \left\{s_{h,t},a_{h,t},s_{h+1,t}\right\}_{h}$\hspace{-0.1cm} \STATE Update $\hat{f}_{t+1}^{LS}$ and $\phi$ \ENDFOR \RETURN policy $\pi_T$ \end{algorithmic} \end{algorithm} \end{minipage} \end{wrapfigure}\normalsize The pseudocode of CDPO is in Alg. \ref{alg:cdpo}. The model-based solver $\texttt{MBPO}(\pi, f, \mathcal{J})$ outputs the policy ($q_t$ or $\pi_t$) that optimizes the objective $\mathcal{J}$ with access to model $f$. Several different types of solvers can be leveraged, e.g., model-augmented model-free policy optimization such as Dyna \citep{sutton1990integrated}, model-based reparameterization gradient \citep{heess2015learning,clavera2020model}, or model-predictive control \citep{wang2019exploring}; a schematic toy instantiation of the two update steps is sketched below.
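The sketch assumes everything about the toy problem: scalar linear dynamics, a quadratic reward, policies and models living on small grids, grid search standing in for the $\texttt{MBPO}$ oracle, and a parameter-distance ball standing in for the TV/KL trust region. It is meant only to make the dual update structure explicit, not to reflect our actual implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# Toy problem (all assumptions): dynamics f_theta(s,a) = theta*s + a,
# reward r(s,a) = -(s-1)^2, and linear policies pi_k(s) = clip(k*(1-s),-1,1).
true_theta, gamma, horizon = 0.9, 0.95, 30
policy_grid = np.linspace(0.0, 2.0, 41)   # candidate policy parameters k

def rollout_value(k, theta):
    """Discounted return of policy k simulated under model theta."""
    s, ret = 0.0, 0.0
    for h in range(horizon):
        a = np.clip(k * (1.0 - s), -1.0, 1.0)
        ret += gamma ** h * -(s - 1.0) ** 2
        s = theta * s + a
    return ret

def referential_update(theta_ls):
    # q_t = argmax_q V_q under the LS reference model (grid search oracle).
    return max(policy_grid, key=lambda k: rollout_value(k, theta_ls))

def conservative_update(q_t, sampled_thetas, eta=0.2):
    # pi_t = argmax_pi E[V_pi^{f_t}] while staying close to q_t; here the
    # TV/KL trust region is replaced by a simple parameter-distance ball.
    feasible = [k for k in policy_grid if abs(k - q_t) <= eta]
    return max(feasible, key=lambda k: np.mean(
        [rollout_value(k, th) for th in sampled_thetas]))

theta_ls = 0.7                                    # pretend LS estimate
posterior_samples = rng.normal(theta_ls, 0.1, 5)  # pretend posterior models
q_t = referential_update(theta_ls)
pi_t = conservative_update(q_t, posterior_samples)
print(f"q_t = {q_t:.2f}, pi_t = {pi_t:.2f}")
\end{verbatim}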
Details of different optimization choices can be found in Appendix \ref{instan}. In experiments, we use Dyna and MPC solvers. With Pinsker's inequality, the total variation constraint in \eqref{conserv_update} is replaced by the KL divergence \citep{schulman2015trust,abdolmaleki2018maximum} in experiments. We follow previous work \citep{lu2017ensemble} to use neural network ensembles \citep{curi2020efficient,kidambi2021optimism} for model estimation and use calibrations \citep{kuleshov2018accurate,curi2020efficient} for accurate uncertainty measure. \section{Analysis} \label{ana} In this section, we first show the statistical equivalence between CDPO and PSRL in terms of the same {\rm BayesRegret} bound. Then we give the iterative policy value bound with monotonic improvement. Finally, we prove the global convergence of CDPO. The missing proofs can be found in the Appendix. \subsection{Statistical Equivalence between CDPO and PSRL} \label{sec::equ} We begin our analysis by highlighting the connection between CDPO and PSRL with the following theorem, from which we also show the role of the dual update procedure and the reference model. \begin{theorem}[CDPO Matches PSRL in {\rm BayesRegret}] \label{thm_connection} Let $\pi^{\text{PSRL}}$ be the policy of any posterior sampling algorithm for reinforcement learning optimized by \eqref{psrl}. If the {\rm BayesRegret} bound of $\pi^{\text{PSRL}}$ satisfies that for any $T>0$, ${\rm BayesRegret}(T, \pi^{\text{PSRL}}, \phi) \leq \mathcal{D}$, then for all $T>0$, we have for the CDPO policy $\pi^{\text{CDPO}}$ that ${\rm BayesRegret}(T, \pi^{\text{CDPO}}, \phi) \leq 3\mathcal{D}$. \end{theorem} \begin{hproof} We first sketch the general strategy in the PSRL analysis. Recall the definition of the Bayesian expected regret ${\rm BayesRegret}(T, \pi, \phi) := \EE[\sum_{t=1}^T \mathfrak{R}_t]$, where $\mathfrak{R}_t = V^{f^*}_{\pi^*} - V^{f^{*}}_{\pi_t}$. PSRL breaks down $\mathfrak{R}_t$ by adding and subtracting $V_{\pi_{f_t}}^{f_t}$, the value of the \textit{imagined} optimal policy $\pi_{f_t}$ under a sampled model $f_t$, i.e. $\pi_{f_t} = \argmax_{\pi} V_{\pi}^{f_t}$. \# \text{PSRL:}\qquad\mathfrak{R}_t = V^{f^*}_{\pi^*} - V^{f^{*}}_{\pi_t} = V^{f^*}_{\pi^*} - V^{f^{*}}_{\pi_{f_t}} = V^{f^*}_{\pi^*} - V^{f_t}_{\pi_{f_t}} + V^{f_t}_{\pi_{f_t}} - V^{f^{*}}_{\pi_{f_t}}, \# where the second equality follows from the definition of the PSRL policy. Following the law of total expectation and the Posterior Sampling Lemma (e.g. Lemma 1 in \citep{osband2013more}), we have $\EE[V^{f^*}_{\pi^*} - V^{f_t}_{\pi_{f_t}}] = 0$ by noting that $f^*$ and $f_t$ are identically distributed conditioned upon $\mathcal{H}_t$. Then we obtain \# {\rm BayesRegret}(T, \pi^{\text{PSRL}}, \phi) &= \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{f^{*}}_{\pi_{f_t}}]\leq \gamma\sum_{t=1}^T\EE\Bigl[\EE_{\rho}\bigl[L\|f_t(s_h, a_h) - f^*(s_h, a_h)\|_2\bigr]\Bigr]\notag\\ &\leq \gamma\frac{L}{1 - 4\delta}\sum_{t=1}^T\EE[\omega_t] + 4\gamma\delta T \leq \mathcal{D}, \# where the first inequality follows from the simulation lemma under the $L$-Lipschitz value assumption \citep{osband2014model}. The second inequality follows from the definition of $\omega_t$ in \eqref{def_omega} and the construction of confidence set such that $\mathbb{P}(f^*\in\bigcap\mathcal{F}_t)\geq 1 - 2\delta$ and $\mathbb{P}(f_t\in\bigcap\mathcal{F}_t, f^*\in\bigcap\mathcal{F}_t)\geq 1 - 4\delta$ via a union bound. 
As more data is collected, the model uncertainty is reduced and the sum of the confidence set widths $\omega_t$ will be sublinear in $T$ (cf. Lemmas \ref{set_high_prob} and \ref{sum_of_width}), indicating sublinear regret. When it comes to CDPO, we decompose the regret as \# \text{CDPO:}\qquad\mathfrak{R}_t = V^{f^*}_{\pi^*} - V^{f^{*}}_{\pi_t} = V^{f^*}_{\pi^*} - V^{f_t}_{\pi_{f_t}} + V^{f_t}_{\pi_{f_t}} - V^{f_t}_{\pi_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}, \# where the CDPO policy $\pi_t$ is defined in \eqref{conserv_update}. Since $\EE[V^{f^*}_{\pi^*} - V^{f_t}_{\pi_{f_t}}] = 0$, we have \#\label{eq::cdpo_bayes_con} &{\rm BayesRegret}(T, \pi^{\text{CDPO}}, \phi) = \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{f_t}_{\pi_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}]\notag\\ &\quad\leq \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{\Tilde{f}_t}_{\pi_{f_t}} + V^{\Tilde{f}_t}_{q_t} - V^{f_t}_{q_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}] \leq \frac{L}{1 - 4\delta}\sum_{t=1}^T3\EE[\omega_t] + 8\gamma\delta T \leq 3\mathcal{D}, \# where the first inequality follows from the greediness of $q_t$ and $\pi_t$ in the dual update steps, i.e., $V^{\Tilde{f}_t}_{\pi_{f_t}} \leq V^{\Tilde{f}_t}_{q_t}$ for any $\pi_{f_t}$ as well as $\EE[V^{f_t}_{\pi_t}] \geq \EE[V^{f_t}_{q_t}]$. The $8\gamma\delta T$ term is introduced since $\Tilde{f}_t\in\mathcal{F}_t$ and $\mathbb{P}(f_t\in\bigcap\mathcal{F}_t, \Tilde{f}_t\in\bigcap\mathcal{F}_t)\geq 1 - 2\delta$. \end{hproof} Theorem \ref{thm_connection} indicates that although CDPO performs conservative updates and abandons the sampling process, it matches the statistical efficiency of PSRL up to constant factors. The importance of the reference model and the dual procedure is also reflected in the proof. The referential update bridges $V^{f_t}_{\pi_{f_t}}$ and $V^{f_t}_{\pi_t}$. Policy optimization under the reference model mimics the sampling-then-optimization procedure of PSRL while offering more stability when the reference is steady, e.g., the least squares estimate we use. We formalize this idea below. \subsection{CDPO Policy Iterative Improvement} \label{mono_sec} One motivation for the conservative update is that it maximizes (and thus improves) the expected value over the posterior. In this section, we are interested in the policy value improvement under any unknown $f^*$. Namely, we seek a bound on the iterative improvement $J(\pi_{t}) - J(\pi_{t-1})$, where the true objective $J$ is defined in \eqref{obj}. We impose the following regularity conditions on the underlying MDP transition and the state-action visitation. \begin{assumption}[Regularity Condition on MDP Transition] \label{assump::reg} Assume that the MDP transition function $f^*: \mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}$ has additive $\sigma$-sub-Gaussian noise and bounded norm, i.e., $\|s\|_2\leq C$. \end{assumption} \begin{assumption}[Regularity Condition on State-Action Visitation] \label{reg_assumption} We assume that there exists $\kappa > 0$ such that for any policy $\pi_t$, $t\in[1, T]$, \# \biggl\{\EE_{\rho_{\pi_t}}\biggl[\Bigl(\frac{d\rho_{q_{t+1}}}{d\rho_{\pi_t}}(s, a)\Bigr)^2\biggr]\biggr\}^{1/2}\leq \kappa, \# where $d\rho_{q_{t+1}} / d\rho_{\pi_t}$ is the Radon-Nikodym derivative of $\rho_{q_{t+1}}$ with respect to $\rho_{\pi_t}$. \end{assumption} \begin{theorem}[Policy Iterative Improvement] \label{thm::pii} Suppose we have $\|\Tilde{f}(\cdot, \cdot)\|\leq C$ for $\Tilde{f}\in\mathcal{F}$ where the model class $\mathcal{F}$ is finite.
Define $\iota := \max_{s, a}|A^{f^*}_{\pi}(s, a)|$, where $A^{f^*}_{\pi}$ is the advantage function defined as $A^{f^*}_{\pi}(s, a) := Q^{f^*}_{\pi}(s, a) - V^{f^*}_{\pi}(s)$. With probability at least $1 - \delta$, the policy improvement between successive iterations is bounded by \#\label{eq::pii} J(\pi_t) - J(\pi_{t - 1}) \geq\Delta(t) - (1+\kappa)\cdot\frac{22\gamma C^2\ln(|\mathcal{F}|/\delta)}{(1 - \gamma) H} - \frac{2\eta\iota}{1 - \gamma}, \# where $\Delta(t) := \EE_{s\sim\zeta}\bigl[V_{q_t}^{\Tilde{f}_t}(s) - V_{q_{t-1}}^{\Tilde{f}_t}(s)\bigr]\geq 0$ due to the greediness of $q_t$. \end{theorem} The above theorem provides the iterative improvement bound for the CDPO algorithm. When $H$ is large enough, the policy value improvement is at least $\Delta(t)$ by choosing a properly small $\eta$. In particular, the first term $\Delta(t)$ characterizes the policy improvement brought by the greedy exploitation in \eqref{greedy_update}, and $\Delta(t)\geq 0$ since $q_t$ is optimal under the reference model $\Tilde{f}_t$. The second term in \eqref{eq::pii} accounts for the generalization error of least squares estimation. Specifically, the model $\Tilde{f}_t=\hat{f}_t^{LS}\in\mathcal{F}_t$ is trained to fit the history samples. However, we need the model error bound over the state-action visitation measure, which requires bounding the deviation of the empirical mean from its expectation using Bernstein's inequality and a union bound. Finally, the trust-region constraint in \eqref{conserv_update} brings the $2\eta\iota / (1 - \gamma)$ term, which reduces to zero if $\eta$ is small. This makes intuitive sense as $\eta$ controls the degree of conservative exploration. \subsection{Global Optimality of CDPO} \label{asym_sec} We now analyze the global optimality of CDPO by studying its expected regret. As discussed in Section \ref{analysis}, agnostic reinforcement learning is impossible. Without structural assumptions, an additional complexity measure is required for a generalization bound beyond finite settings. For this reason, we adopt the notion of the \textit{eluder dimension} \citep{russo2013eluder,osband2014model}, defined as follows: \begin{definition}[($\mathcal{F},\varepsilon$)-Dependence] We say $(s, a)\in\mathcal{S}\times\mathcal{A}$ is $(\mathcal{F},\varepsilon)$-dependent on $\{(s_i, a_i)\}_{i=1}^n\subseteq \mathcal{S}\times\mathcal{A}$ if \$ \forall f_1,f_2 &\in \mathcal{F}, \,\sum_{i=1}^{n} \bigl\| f_1(s_i, a_i) - f_2(s_i, a_i) \bigr\|_2^2 \leq \varepsilon^2 \Rightarrow \bigl\|f_1(s, a) - f_2(s, a)\bigr\|_2 \leq \varepsilon. \$ Conversely, $(s, a)\in\mathcal{S}\times\mathcal{A}$ is ($\mathcal{F},\varepsilon$)-independent of $\{(s_i, a_i)\}_{i=1}^n$ if and only if it does not satisfy the definition for dependence. \end{definition} \begin{definition}[Eluder Dimension] \label{elu_def} The eluder dimension $dim_E(\mathcal{F},\varepsilon)$ is the length of the longest possible sequence of elements in $\mathcal{S}\times\mathcal{A}$ such that for some $\varepsilon'\geq\varepsilon$, every element is ($\mathcal{F},\varepsilon'$)-independent of its predecessors. \end{definition} We make the following assumption on the Lipschitz continuity of the value function. \begin{assumption}[Lipschitz Continuous Value] \label{assump1} At iteration $t$, assume the value function $V^{f_t}_{\pi}$ for any policy $\pi$ is Lipschitz continuous in the sense that $|V^{f_t}_{\pi}(s_1) - V^{f_t}_{\pi}(s_2)| \leq L_t\lVert s_1 - s_2\rVert_2$.
\end{assumption} Notably, Assumption \ref{assump1} holds under certain regularity conditions of the MDP, e.g. when the transition and rewards are Lipschitz continuous \citep{bastani2020sample,pirotta2015policy}. This assumption covers many RL settings \citep{dong2021provable}, e.g., nonlinear models with stochastic Lipschitz policies and Lipschitz reward models, and is thus adopted by various model-based RL works \citep{luo2018algorithmic,chowdhury2019online,dong2021provable}. We now study the global optimality of CDPO via the following expected regret theorem, which can be seen as a direct consequence of Theorem \ref{thm_connection} that states the statistical equivalence between CDPO and PSRL. \begin{theorem}[Expected Regret of CDPO] \label{near_opt_bound} Let $N(\mathcal{F},\alpha,\lVert\cdot\rVert_2)$ be the $\alpha$-covering number of $\mathcal{F}$. Denote $d_E := \dim_{E}(\mathcal{F},T^{-1})$ for the eluder dimension of $\mathcal{F}$ at precision $1/T$. Under Assumptions \ref{assump::reg} and \ref{assump1}, the cumulative expected regret of CDPO in $T$ iterations is bounded by \# &{\rm BayesRegret}(T, \pi, \phi) \leq \frac{\gamma T(3T - 5)L}{(T-1)(T-2)} \cdot\left(1 + \frac{1}{1 - \gamma} C d_E + 4\sqrt{T d_E\beta}\right) + 4\gamma C,\\ &\text{where \,}\beta := 8\sigma^2\log\Bigl(2N\bigl(\mathcal{F}, 1 / (T^2), \lVert\cdot\rVert_2\bigr)T\Bigr) + 2\bigl(8C + \sqrt{8\sigma^2 \log(8T^3)}\,\bigr)/T \text{ and } L := \EE[L_t].\notag \# \end{theorem} Here, the covering number is introduced since we consider a class $\mathcal{F}$ that may contain infinitely many functions, for which we cannot simply apply a union bound. Besides, $\beta$ is the confidence parameter that ensures the confidence set contains $f^*$ with high probability (via a concentration inequality). To clarify the asymptotics of the expected regret bound, we introduce another measure of dimensionality that captures the sensitivity of $\mathcal{F}$ to statistical overfitting. \begin{corollary}[Asymptotic Bound] Define the Kolmogorov dimension w.r.t. function class $\mathcal{F}$ as \$ d_K = \dim_{K}(\mathcal{F}) := \limsup_{\alpha \downarrow 0}\frac{\log(N(\mathcal{F},\alpha,\lVert\cdot\rVert_2))}{\log(1/\alpha)}. \$ Under the assumptions of Theorem \ref{near_opt_bound} and by omitting terms logarithmic in $T$, the regret of CDPO is \# &{\rm BayesRegret}(T, \pi, \phi) = \Tilde{O}(L\sigma\sqrt{d_K d_E T\,}). \# \end{corollary} The sublinear regret result permits global optimality and sample efficiency for any model class with a reasonable complexity measure. Meanwhile, the iterative improvement theorem guarantees efficient exploration and good performance even when the model class is highly nonlinear. \section{Empirical Evaluation} \label{exp} \subsection{Understanding Different Exploration Mechanisms} We first provide insights and evidence of why CDPO exploration can be more efficient in tabular $N$-Chain MDPs, which have optimal \textit{right} actions and suboptimal \textit{left} actions at each of the $N$ states. Settings and full results are provided in Appendix \ref{chain_exp}. In Figure \ref{post_fig}, we compare the posterior of CDPO and PSRL at the state that is furthest away from the initial state, i.e. the state that is hardest for the agents to reach and explore.
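For readers who want to reproduce the qualitative behavior, a generic $N$-chain environment of the kind described above can be written in a few lines of Python; the specific rewards, the reset-to-start dynamics of the \textit{left} action, and the episode length below are illustrative assumptions rather than the exact construction in Appendix \ref{chain_exp}.
\begin{verbatim}
import numpy as np

class NChain:
    """Generic N-chain MDP: 'right' is optimal but requires deep exploration.
    The specific rewards and horizon here are illustrative assumptions."""
    LEFT, RIGHT = 0, 1

    def __init__(self, n=8, horizon=None):
        self.n = n
        self.horizon = horizon if horizon is not None else n + 2
        self.reset()

    def reset(self):
        self.state, self.t = 0, 0
        return self.state

    def step(self, action):
        if action == self.RIGHT:
            reward = 1.0 if self.state == self.n - 1 else 0.0
            self.state = min(self.state + 1, self.n - 1)
        else:                    # LEFT: small immediate reward, back to start
            reward = 0.1
            self.state = 0
        self.t += 1
        return self.state, reward, self.t >= self.horizon

# A uniformly random policy rarely reaches the rewarding far end of an 8-chain.
env, rng = NChain(n=8), np.random.default_rng(0)
s, done, total = env.reset(), False, 0.0
while not done:
    s, r, done = env.step(rng.integers(2))
    total += r
print("random-policy return:", total)
\end{verbatim}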
\begin{figure*} \caption{CDPO and PSRL posterior on an $8$-Chain MDP and a $15$-Chain MDP, where the \textit{right} actions are optimal.} \label{post_fig} \caption{Regret curve of CDPO and PSRL when $N=8$ and $N=15$.} \label{reg_fig} \end{figure*} When training starts, both algorithms have a large variance of value estimation. However, as training progresses, CDPO gives more accurate and certain estimates, but \textit{only} for the optimal \textit{right} actions not for the suboptimal \textit{left} actions, while PSRL agents explore \textit{both} directions. This verifies the potential over-exploration issue in PSRL: as long as the uncertainty contains unrealistically large values, PSRL agents can perform uninformative exploration by acting suboptimally according to an inaccurate \textit{sampled} model. In contrast, CDPO replaces the sampled model with a stable mean estimate and cares about the \textit{expected} value, thus avoiding such pitfalls. We see in Figure \ref{reg_fig} that although CDPO has much larger uncertainty for the suboptimal \textit{left} actions, its regret is lower. \subsection{Exploration Efficiency with Nonlinear Model Class} In finite MDPs, PSRL-style agents can specify and try every possible action to finally obtain an accurate high-confidence prediction. However, our discussion in Section \ref{analysis} indicates that a similar over-exploration issue in more complex environments can lead to less informative exploration steps, which only eliminate an exponentially small portion of the uncertainty. To see its impact on the training performance, we report the results of provable algorithms with nonlinear models on several MuJoCo tasks in Figure \ref{fig::non_provable}. For OFU-RL, we mainly evaluate HUCRL \citep{curi2020efficient}, a deep algorithm proposed to deal with the intractability of the joint optimization. We observe that all algorithms achieve asymptotic optimality in the inverted pendulum. Since the dimension of the pendulum task is low, learning an accurate (and thus generalizable) model poses no actual challenge. However, in higher dimensional tasks such as half-cheetah, CDPO achieves a higher asymptotic value with faster convergence. Implementation details and hyperparameters are provided in Appendix \ref{exp_settings}. \begin{figure*} \caption{Performance of CDPO, PSRL, and HUCRL equipped with nonlinear models in several MuJoCo tasks: inverted pendulum swing-up, pusher goal-reaching, and half-cheetah locomotion.} \label{fig::non_provable} \end{figure*} \subsection{Comparison with Prior RL Algorithms} \label{comp_exp} We also examine a broader range of MBRL algorithms, including MBPO \citep{janner2019trust}, SLBO \citep{luo2018algorithmic}, and ME-TRPO \citep{kurutach2018model}. The model-free baselines include SAC \citep{haarnoja2018soft}, PPO \citep{schulman2017proximal}, and MPO \citep{abdolmaleki2018maximum}. The results are shown in Figure \ref{fig:compare_fig}. We observe that CDPO achieves competitive or higher asymptotic performance while requiring fewer samples compared to both the model-based and the model-free baselines. \begin{figure*} \caption{Comparison between CDPO and model-free, model-based RL baseline algorithms.} \label{fig:compare_fig} \end{figure*} \subsection{Ablation Study} We conduct ablation studies to provide a better understanding of the components in CDPO. One can observe from Figure \ref{fig:ablation} that the policies updated with only Referential Update or Conservative Update lag behind the dual framework. 
We also test the necessity and sensitivity of the constraint hyperparameter $\eta$. We see that a constant $\eta$ and a time-decayed $\eta$ achieve similar asymptotic values with a similar convergence rate, showing the robustness of CDPO. However, removing the constraint will lose the policy improvement guarantee, thus causing degradation. Ablation on different choices of $\texttt{MBPO}$ solver (Dyna and POPLIN-P \citep{wang2019exploring}) shows the generalizability of CDPO. \begin{figure*} \caption{Ablation studies on the effect of the dual update steps and the trust-region constraint. The robustness and generalizability of the CDPO framework are demonstrated by the results of different choices of the constraint threshold and different solvers.} \label{fig:ablation} \end{figure*} \section{Conclusions \& Future Work} \label{conclusion_disccuss} In this work, we present \textit{Conservative Dual Policy Optimization} (CDPO), a simple yet provable model-based algorithm. By iterative execution of the \textit{Referential Update} and \textit{Conservative Update}, CDPO explores within a reasonable range while avoiding aggressive policy update. Moreover, CDPO gets rid of the harmful sampling procedure in previous provable approaches. Instead, an intermediate policy is optimized under a stable \textit{reference} model, and the agent conservatively explore the environment by maximizing the \textit{expected} policy value. With the same order of regret as PSRL, the proposed algorithm can achieve global optimality while monotonically improving the policy. Considering our naive choice of the reference model, other more sophisticated designs should be a fruitful future direction. It will also be interesting to explore different choices of the $\texttt{MBPO}$ solvers, which we would like to leave as future work. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{} Some components in our algorithm are naively designed. \item Did you discuss any potential negative societal impacts of your work? \answerYes{} See Appendix \ref{societal}. \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} See Assumption \ref{assump::reg} and \ref{assump1}. \item Did you include complete proofs of all theoretical results? \answerYes{} See Appendix \ref{main_proofs}. \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... 
\begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerNA{} \item Did you mention the license of the assets? \answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? \answerNA{} Our code can be found in the supplemental material. \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \appendix \section{Proofs} \label{main_proofs} \subsection{Proof of Theorem \ref{thm::pii}} \begin{proof} We lay out the proof in two major steps. Firstly, we characterize the performance difference between $J(q_t)$ and $J(\pi_{t-1})$, which can be done by applying Lemma \ref{lemma_diff}. Specifically, we set $\pi_1$, $\pi_2$ in Lemma \ref{lemma_diff} to $q_t$, $\pi_{t-1}$ and set $f$ as the reference model $\Tilde{f}_t$. Then we obtain \#\label{qtminus} &J(q_t) - J(\pi_{t-1}) \notag\\ &\qquad= \Delta(t) - \frac{\gamma}{2(1 - \gamma)}\biggl(\EE_{\rho_{q_t}}\Bigl[\bigl\|\Tilde{f}_t(s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr]+ \EE_{\rho_{\pi_{t-1}}}\Bigl[\bigl\|\Tilde{f}_t(s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr]\biggr), \# where $\Delta(t) := \EE_{s\sim\zeta}\bigl[V_{q_t}^{\Tilde{f}_t}(s) - V_{\pi_{t-1}}^{\Tilde{f}_t}(s)\bigr] \geq 0$ due to the optimality of $q_t$ under $\Tilde{f}_t$, i.e., $q_t = \argmax_q V_q^{\Tilde{f}_t}$. Recall that the reference model is the least squares estimate, i.e., \$\Tilde{f}_t = \hat{f}_t^{LS} = \argmin_{f\in\mathcal{F}}\sum_{(s, a, s')\in\mathcal{H}_{t-1}}\bigl\| f(s, a) - s' \bigr\|_2^2,\$ where $\mathcal{H}_{t-1}$ is the trajectory in the real environment when following policy $\pi_{t-1}$. From the simulation property of continuous distribution, we have the following equivalence between the direct and indirect ways of drawing samples: \$ s'\sim f^*(\cdot|s, a) \ \equiv \ s'= f^*(s, a) + \epsilon, \epsilon\sim p(\epsilon), \$ where $p(\epsilon)$ is some noise distribution. Therefore, according to the Gaussian noise assumption, we obtain from the least squares generalization bound in Lemma \ref{lemma::ls} that \# \EE_{\rho_{\pi_{t-1}}}\Bigl[\bigl\|\Tilde{f}_t(s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr]\leq \frac{22C^2\ln(|\mathcal{F}|/\delta)}{H}, \# where $\epsilon_{\text{approx}} = 0$ in the generalization bound as the realizability is guaranteed since $\hat{f}_t^{LS}$ and $f^*$ are from the same function class $\mathcal{F}$. Similarly, we have for the intermediate policy $q_t$ that \# \EE_{\rho_{q_{t}}}\Bigl[\bigl\|\Tilde{f}_t(s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr]&\leq \EE_{\rho_{\pi_{t-1}}}\Bigl[\bigl\|\Tilde{f}_t(s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr] \cdot\biggl\{\EE_{\rho_{\pi_{t-1}}}\biggl[\Bigl(\frac{d\rho_{q_{t}}}{d\rho_{\pi_{t-1}}}(s)\Bigr)^2\biggr]\biggr\}^{1/2}\notag\\ &\leq \kappa\cdot\frac{22C^2\ln(|\mathcal{F}|/\delta)}{H}. 
\# Now we can bound \eqref{qtminus} by \#\label{eq::decom_fq} J(q_t) - J(\pi_{t-1}) \geq \Delta(t) - (1+\kappa)\cdot\frac{22\gamma C^2\ln(|\mathcal{F}|/\delta)}{(1 - \gamma) H}. \# The second step of the proof is to characterize the performance difference between $J(\pi_t)$ and $J(q_t)$. From the Performance Difference Lemma \ref{pd_lemma}, we obtain \# J(q_t) - J(\pi_t) &= \frac{1}{1 - \gamma}\cdot \EE_{(s, a)\sim\rho_{q_t}}\bigl[A^{f^*}_{\pi_t}(s, a)\bigr]\notag\\ &= \frac{1}{1 - \gamma}\cdot \EE_{s\sim \nu_{q_t}}\Bigl[\EE_{a\sim q_t}\bigl[A^{f^*}_{\pi_t}(s, a)\bigr]\Bigr]\notag\\ &= \frac{1}{1 - \gamma}\cdot \EE_{s\sim \nu_{q_t}}\Bigl[\EE_{a\sim q_t}\bigl[A^{f^*}_{\pi_t}(s, a)\bigr] - \EE_{a\sim \pi_t}\bigl[A^{f^*}_{\pi_t}(s, a)\bigr]\Bigr], \# where recall that $\iota := \max_{s, a}|A^{f^*}_{\pi}(s, a)|$ and the third equality holds due to $\EE_{a\sim \pi_t}\bigl[A^{f^*}_{\pi_t}(s, a)\bigr] = 0$ for any $s$. By the definition of the total variation distance, we can further bound the absolute difference as \# |J(q_t) - J(\pi_t)|\leq \frac{2\eta\iota}{1 - \gamma}, \# Thus, we have $J(\pi_t) - J(q_t) \geq -2\eta\iota / (1 - \gamma)$ and similarly $J(q_{t-1}) - J(\pi_{t-1}) \geq -2\eta\iota / (1 - \gamma)$. Combining with \eqref{eq::decom_fq} gives us the iterative improvement bound as follows: \# J(\pi_t) - J(\pi_{t - 1}) &= J(\pi_t) - J(q_t) + J(q_{t}) - J(\pi_{t - 1})\notag\\ &\geq\Delta(t) - (1+\kappa)\cdot\frac{22\gamma C^2\ln(|\mathcal{F}|/\delta)}{(1 - \gamma)H} - \frac{2\eta\iota}{1 - \gamma}. \# \end{proof} \subsection{Proof of Theorem \ref{near_opt_bound}} \label{proof_regret} \begin{proof} We are interested in the expected regret defined as ${\rm BayesRegret}(T, \pi, \phi) := \EE[\sum_{t=1}^T \mathfrak{R}_t]$, where $\mathfrak{R}_t = V^{f^*}_{\pi^*} - V^{f^{*}}_{\pi_t}$. Recall the definition of the reactive policy $\pi_t$ in CDPO (i.e. \eqref{conserv_update}) and the imagined best-performing policy $\pi_{f_t}$ under a sampled model $f_t$, i.e., $\pi_{f_t} = \max_{\pi} V_{\pi}^{f_t}$. From the Posterior Sampling Lemma, we know that if $\psi$ is the distribution of $f^*$, then for any sigma-algebra $\sigma(\mathcal{H}_t)$-measurable function $g$, \# \EE[g(f^*)\,|\,\mathcal{H}_t] = \EE[g(f_t)\,|\,\mathcal{H}_t]. \# The PS Lemma together with the law of total expectation gives us \#\label{star_t} \EE[V^{f^*}_{\pi^*} - V^{f_t}_{\pi_{f_t}}] = 0, \# where the equality holds since the true $f^*$ and the sampled $f_t$ are identically distributed when conditioned on $\mathcal{H}_t$. Therefore, we obtain the expected regret for CDPO as \#\label{eq::cdpo_bayes} {\rm BayesRegret}(T, \pi, \phi) &= \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{f_t}_{\pi_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}]\notag\\ &= \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{\Tilde{f}_t}_{\pi_{f_t}} + V^{\Tilde{f}_t}_{\pi_{f_t}} - V^{f_t}_{\pi_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}]\notag\\ &\leq \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{\Tilde{f}_t}_{\pi_{f_t}} + V^{\Tilde{f}_t}_{q_t} - V^{f_t}_{q_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}], \# where the inequality follows from the greediness of $q_t$ and the optimality of $\pi_t$ within a trust-region centered around $q_t$,i.e., $V^{\Tilde{f}_t}_{\pi_{f_t}} \leq V^{\Tilde{f}_t}_{q_t}$ for any $\pi_{f_t}$ and $V^{f_t}_{\pi_t} \geq V^{f_t}_{q_t}$. 
From the Simulation Lemma \ref{sim_lemma}, we have the bound of $\EE\Bigl[\bigl|V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr|\Bigr]$ for any policy $\pi$ as follows: \# \EE\Bigl[\bigl|V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr|\Bigr] &= \gamma\EE\Bigl[\bigl|\EE_{(s, a)\sim\Tilde{\rho}_\pi}[(f_t(\cdot|s, a) - \Tilde{f}_t(\cdot|s, a))\cdot V_\pi^{f_t}(s, a)]\bigr|\Bigr]\notag\\ &\leq \gamma\EE\biggl[\Bigl|\EE_{(s, a)\sim\Tilde{\rho}_\pi}\bigl[L_t\cdot\|f_t(s, a) - \Tilde{f}_t(s, a)\|_2\bigr]\Bigr|\biggr], \# where the first equation follows from Lemma \ref{sim_lemma} and $\Tilde{\rho}_\pi$ is the state-action visitation measure under model $\Tilde{f}_t$, the second inequality follows the simulation property of continuous distribution and the Lipschitz value function assumption. We define the event $A=\biggl\{\Tilde{f}_t\in\bigcap\limits_{t}\mathcal{F}_t, f_t\in\bigcap\limits_{t}\mathcal{F}_t\biggr\}$. Recall that the model is bounded by $\lVert f \rVert_2\leq C$. Then we can reduce the expected regret to a sum of set widths: \#\label{eq::plug_singletime} \EE\bigl[V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr] &\leq \gamma\EE\biggl[\Bigl|\EE_{(s, a)\sim\Tilde{\rho}_\pi}\bigl[\EE[L_t|A]\omega_t(s, a) + \bigl(1 - \mathbb{P}(A)\bigr)C \bigr]\Bigr|\biggr]. \# We can further know from the construction of the confidence set (c.f. Lemma \ref{set_high_prob}) that $\mathbb{P}\biggl(f^*\in\bigcap\limits_{t}\mathcal{F}_t\biggr)\geq 1 - 2\delta$ and $\mathbb{P}(A) \geq 1 - 2\delta$ since $f_t$, $f^*$ are identically distributed and $\mathbb{P}\bigl(\Tilde{f}_t\in\mathcal{F}_t\bigr) = 1$ as $\mathcal{F}_t$ is centered at the least squares model for all $t$. Besides, we have for \# \EE[L_t | A] \leq \frac{L_t}{P(A)} \leq \frac{L_t}{1-2\delta}. \# Plugging into \eqref{eq::plug_singletime}, we have \# \EE\bigl[V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr] &\leq \gamma\EE\biggl[\Bigl|\EE_{(s, a)\sim\Tilde{\rho}_\pi}\bigl[L_t / (1-2\delta) \omega_t(s, a) + 2\delta C \bigr]\Bigr|\biggr]\notag\\ &\leq \gamma\EE\biggl[\frac{L_t}{1-2\delta}\cdot \Bigl|\EE_{(s, a)\sim\Tilde{\rho}_\pi}\bigl[\omega_t(s, a)\bigr]\Bigr|\biggr] + 2\gamma\delta C. \# Summing over $T$ iterations gives us \# \sum_{t=1}^T \EE\bigl[V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr] \leq \gamma\sum_{t=1}^T\EE\biggl[\frac{L_t}{1-2\delta}\cdot \Bigl|\EE_{(s, a)\sim\Tilde{\rho}_\pi}\bigl[\omega_t(s, a)\bigr]\Bigr|\biggr] + 2\gamma\delta C T. \# By setting $\delta = 1/(2T)$, we obtain \#\label{forallpi} \sum_{t=1}^T \EE\bigl[V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr] &\leq \frac{\gamma LT}{T-1}\sum_{t=1}^{T}\EE_{\Tilde{\rho}_\pi}[\omega_{t}(s,a)] + \gamma C\notag\\ &\leq \frac{\gamma LT}{T-1}\cdot\left(1 + \frac{1}{1 - \gamma}Cd_{E} + 4\sqrt{T d_E\beta_{T}(1 / (2T),\alpha)}\right) + \gamma C, \# where the last inequality follows from Lemma \ref{sum_of_width} to bound the sum of the set width. We denote $d_{E} := \dim_{E}(\mathcal{F},T^{-1})$ for notation simplicity. Since \eqref{forallpi} holds for all policy $\pi$, we have the bound for $\EE[V^{f_t}_{\pi_{f_t}} - V^{\Tilde{f}_t}_{\pi_{f_t}}]$ and the bound for $\EE[V^{\Tilde{f}_t}_{q_t} - V^{f_t}_{q_t}]$. What remains in the expected regret \eqref{eq::cdpo_bayes} is the $\EE[V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}]$ term, which can be bounded similarly. Specifically, we define another event $B=\biggl\{f^*\in\bigcap\limits_{t}\mathcal{F}_t,f_t\in\bigcap\limits_{t}\mathcal{F}_t\biggr\}$. 
Since by construction $\mathbb{P}\biggl(f^*\in\bigcap\limits_{t}\mathcal{F}_t\biggr)\geq 1 - 2\delta$ and $\mathbb{P}\biggl(f_t\in\bigcap\limits_{t}\mathcal{F}_t\biggr)\geq 1 - 2\delta$, we have $\mathbb{P}(B) \geq 1 - 4\delta$ via a union bound. This implies the following bound \#\label{last_term} \sum_{t=1}^T \EE\bigl[V^{f_t}_{\pi} - V^{f^*}_{\pi}\bigr] &\leq\gamma\sum_{t=1}^T\EE\biggl[\frac{L_t}{1-4\delta}\cdot \Bigl|\EE\bigl[\omega_t(s, a)\bigr]\Bigr|\biggr] + 4\gamma\delta C T\notag\\ &\leq \frac{\gamma LT}{T-2}\sum_{t=1}^{T}\EE_{\rho_\pi}[\omega_{t}(s,a)] + 2\gamma C\notag\\ &\leq \frac{\gamma LT}{T-2}\cdot\left(1 + \frac{1}{1 - \gamma}Cd_{E} + 4\sqrt{T d_E\beta_{T}(1 / (2T),\alpha)}\right) + 2\gamma C, \# where the second inequality follows from the choice of $\delta$, i.e., $\delta = 1/(2T)$. Plugging \eqref{forallpi} and \eqref{last_term} into \eqref{eq::cdpo_bayes}, we obtain the expected regret as \# {\rm BayesRegret}(T, \pi, \phi) &\leq \sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{\Tilde{f}_t}_{\pi_{f_t}} + V^{\Tilde{f}_t}_{q_t} - V^{f_t}_{q_t} + V^{f_t}_{\pi_t} - V^{f^{*}}_{\pi_t}]\notag\\ &\leq \Bigl(\frac{2\gamma LT}{T-1} + \frac{\gamma LT}{T-2}\Bigr) \cdot\left(1 + \frac{1}{1 - \gamma} C d_E + 4\sqrt{T d_E\beta_{T}(1 / (2T),\alpha)}\right) + 4\gamma C\notag\\ &= \frac{\gamma T(3T - 5)L}{(T-1)(T-2)}\cdot\left(1 + \frac{1}{1 - \gamma} C d_E + 4\sqrt{T d_E\beta_{T}(1 / (2T),\alpha)}\right) + 4\gamma C. \# By setting $\alpha = 1 / (T^2)$ and $\delta = 1/(2T)$ in Lemma \ref{set_high_prob}, we have the following confidence parameter that can guarantee that $f^*$ is contained in the confidence set with high probability: \$ \beta_T(1 /(2T),1/(T^2)) = 8\sigma^2\log\Bigl(2N\bigl(\mathcal{F}, 1 / (T^2), \lVert\cdot\rVert_2\bigr)T\Bigr) + 2\bigl(8C + \sqrt{8\sigma^2 \log(8T^3)}\,\bigr)/T, \$ where recall that $N\bigl(\mathcal{F}, \alpha, \lVert\cdot\rVert_2\bigr)$ is the $\alpha$-covering number of $\mathcal{F}$ with respect to the $\|\cdot\|_2$-norm. \end{proof} \subsection{Proof of Theorem \ref{thm_connection}} \begin{proof} Denote the \textit{imagined} optimal policy $\pi_{f_t}$ under a sampled model $f_t$ as $\pi_{f_t} = \max_{\pi} V_{\pi}^{f_t}$. For PSRL, its expected regret can be decomposed as \#\label{eq::cdpo_bayes} {\rm BayesRegret}(T, \pi^{\text{PSRL}}, \phi) &= \sum_{t=1}^T\EE[V^{f^{*}}_{\pi^*} - V^{f^{*}}_{\pi_t}]\notag\\ &=\sum_{t=1}^T\EE[V^{f^{*}}_{\pi^*} - V^{f^{*}}_{\pi_{f_t}}]\notag\\ &=\sum_{t=1}^T\EE[V^{f_t}_{\pi_{f_t}} - V^{f^{*}}_{\pi_{f_t}}], \# where the second equality holds since the PSRL policy $\pi_t := \pi_{f_t}$ for a sampled $f_t$. The third equality follows from \eqref{star_t}, obtained by the Posterior Sampling Lemma and the law of total expectation. Similar with the proof in \ref{proof_regret}, we obtain from the Simulation Lemma \ref{sim_lemma} that \# \EE\Bigl[\bigl|V^{f_t}_{\pi_{f_t}} - V^{f^{*}}_{\pi_{f_t}}\bigr|\Bigr] &= \gamma\EE\Bigl[\bigl|\EE_{(s, a)\sim\rho_\pi}[(f_t(\cdot|s, a) - f^*(\cdot|s, a))\cdot V^\pi(s, a)]\bigr|\Bigr]\notag\\ &\leq \gamma\EE\biggl[\Bigl|\EE_{(s, a)\sim\rho_\pi}\bigl[L_t\cdot\|f_t(s, a) - f^*(s, a)\|_2\bigr]\Bigr|\biggr], \# where the equality follows from Lemma \ref{sim_lemma} and the inequality follows the simulation property of continuous distributions and the Lipschitz value function assumption. Define the event $E=\biggl\{f^*\in\bigcap\limits_{t}\mathcal{F}_t, f_t\in\bigcap\limits_{t}\mathcal{F}_t\biggr\}$. 
The expected regret can be reduced to the sum of set widths: \#\label{eq::plug_singletime} \EE\bigl[V^{f_t}_{\pi} - V^{\Tilde{f}_t}_{\pi}\bigr] &\leq \gamma\EE\biggl[\Bigl|\EE_{(s, a)\sim\rho_\pi}\bigl[\EE[L_t|E]\omega_t(s, a) + \bigl(1 - \mathbb{P}(E)\bigr)C \bigr]\Bigr|\biggr]\notag\\ &\leq \gamma\EE\biggl[\Bigl|\EE_{(s, a)\sim\rho_\pi}\bigl[L_t / (1-4\delta) \omega_t(s, a) + 4\delta C \bigr]\Bigr|\biggr]\notag\\ &\leq \gamma\EE\biggl[\frac{L_t}{1-4\delta}\cdot \Bigl|\EE_{(s, a)\sim\rho_\pi}\bigl[\omega_t(s, a)\bigr]\Bigr|\biggr] + 4\gamma\delta C, \# where the second inequality follows from the construction of confidence set that $\mathbb{P}\biggl(f^*\in\bigcap\limits_{t}\mathcal{F}_t\biggr)\geq 1 - 2\delta$ and thus $\mathbb{P}(E) \geq 1 - 4\delta$. Therefore, the PSRL expected regret can be bounded by \# {\rm BayesRegret}(T, \pi^{\text{PSRL}}, \phi) &\leq \gamma \frac{L}{1-4\delta} \sum_{t=1}^T\EE\bigl[\omega_t\bigr] + 4 T\gamma\delta C, \# From the proof in \ref{proof_regret}, the expected regret of CDPO is bounded by \# {\rm BayesRegret}(T, \pi^{\text{CDPO}}, \phi) &\leq \gamma \frac{L}{1-4\delta} \sum_{t=1}^T 3\EE\bigl[\omega_t\bigr] + 8 T\gamma\delta C, \# The claim is thus established. \end{proof} \section{Useful Lemmas} \begin{lemma}[Simulation Lemma] \label{sim_lemma} For any policy $\pi$ and transition $f_1$, $f_2$, we have \# V^{f_1}_{\pi} - V^{f_2}_{\pi} = \gamma(I - \gamma f^\pi_2)^{-1}(f_1 - f_2)V^{f_1}_{\pi}. \# \end{lemma} \begin{proof} Denote the expected reward under policy $\pi$ as $r_\pi$. Let $f^\pi$ be the transition matrix on state-action pairs induced by policy $\pi$, defined as $f^\pi_{(s, a), (s', a')}:= P(s'|s, a)\pi(a'|s')$. Then we have \$ V_\pi = r_\pi + \gamma f^\pi V_\pi. \$ Since $\gamma<1$, it is easy to verify that $I - \gamma f^\pi$ is full rank and thus invertible. Therefore, we can write \# V_\pi = (I - \gamma f^\pi)^{-1} r_\pi. \# Therefore, we conclude the proof by \$ V^{f_1}_{\pi} - V^{f_2}_{\pi} &= V^{f_1}_{\pi} - (I - \gamma f^\pi_2)^{-1} r_\pi\notag\\ &= (I - \gamma f^\pi_2)^{-1}\cdot\bigl((I - \gamma f^\pi_2) - (I - \gamma f^\pi_1)\bigr)V^{f_1}_{\pi}\notag\\ &= \gamma(I - \gamma f^\pi_2)^{-1}(f^\pi_1 - f^\pi_2)V^{f_1}_{\pi}\notag\\ &= \gamma(I - \gamma f^\pi_2)^{-1}(f_1 - f_2)V^{f_1}_{\pi}, \$ where the second equality follows from the Bellman equation. \end{proof} \begin{lemma}[Performance Difference Lemma] \label{pd_lemma} For all policies $\pi$, $\pi^*$ and distribution $\mu$ over $\mathcal{S}$, we have \# J(\pi) - J(\pi') = \frac{1}{1 - \gamma}\cdot \EE_{(s, a)\sim\sigma_\pi}[A^{\pi'}(s, a)]. \# \end{lemma} \begin{proof} This lemma is widely adopted in RL. Proof can be found in various previous works, e.g. Lemma 1.16 in \citep{agarwal2019reinforcement}. Let $\mathbb{P}^\pi(\tau|s_0 = s)$ denote the probability of observing trajectory $\tau$ starting at state $s_0$ and then following $\pi$. 
Then the value difference can be written as \$ V_{\pi}^{f^*}(s) - V_{\pi'}^{f^*}(s) &= \EE_{\tau\sim\mathbb{P}^\pi(\cdot|s_0 = s)}\Bigl[\sum_{h=0}^\infty \gamma^h r(s_h, a_h)\Bigr] - V_{\pi'}^{f^*}(s)\notag\\ &= \EE_{\tau\sim\mathbb{P}^\pi(\cdot|s_0 = s)}\Bigl[\sum_{h=0}^\infty \gamma^h \bigl(r(s_h, a_h) + V_{\pi'}^{f^*}(s_h) - V_{\pi'}^{f^*}(s_h)\bigr)\Bigr] - V_{\pi'}^{f^*}(s)\notag\\ &= \EE_{\tau\sim\mathbb{P}^\pi(\cdot|s_0 = s)}\Bigl[\sum_{h=0}^\infty \gamma^h \bigl(r(s_h, a_h) + \gamma V_{\pi'}^{f^*}(s_{h+1}) - V_{\pi'}^{f^*}(s_h)\bigr)\Bigr]\notag\\ \$ Following the law of iterated expectations, we obtain \# V_{\pi}^{f^*}(s) - V_{\pi'}^{f^*}(s)&= \EE_{\tau\sim\mathbb{P}^\pi(\cdot|s_0 = s)}\Bigl[\sum_{h=0}^\infty \gamma^h \bigl(r(s_h, a_h) + \gamma \EE[V_{\pi'}^{f^*}(s_{h+1})|s_h, a_h] - V_{\pi'}^{f^*}(s_h)\bigr)\Bigr]\notag\\ &= \EE_{\tau\sim\mathbb{P}^\pi(\cdot|s_0 = s)}\Bigl[\sum_{h=0}^\infty \gamma^h\bigl(Q_{\pi'}^{f^*}(s_h, a_h)- V_{\pi'}^{f^*}(s_h)\bigr)\Bigr]\notag\\ &= \EE_{\tau\sim\mathbb{P}^\pi(\cdot|s_0 = s)}\Bigl[\sum_{h=0}^\infty \gamma^h A_{\pi'}^{f^*}(s_h, a_h)\Bigr], \# where the third equation rearranges terms in the summation via telescoping, and the fourth equality follows from the law of total expectation. From the definition of objective $J(\pi)$ in \eqref{obj}, we obtain \# J(\pi) - J(\pi') &= \EE_{s_0\sim\zeta}[V^{f^*}_{\pi}(s_0) - V^{f^*}_{\pi'}(s_0)] \notag\\ &= \frac{1}{1 - \gamma}\EE_{(s, a)\sim\sigma_\pi}[A^{\pi'}(s, a)]. \# \end{proof} \begin{lemma}[Performance Difference and Model Error] \label{lemma_diff} For any two policies $\pi_1$ and $\pi_2$, it holds that \$ J(\pi_1) - J(\pi_2) &= \EE_{s\sim\zeta}\bigl[V_{\pi_1}^{f}(s) - V_{\pi_2}^{f}(s)\bigr]\notag\\ & - \frac{\gamma}{2(1 - \gamma)}\biggl(\EE_{\rho_{\pi_1}}\Bigl[\bigl\|f(\cdot|s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr] + \EE_{\rho_{\pi_2}}\Bigl[\bigl\|f(\cdot|s, a) - f^*(\cdot|s, a)\bigr\|_1\Bigr]\biggr). \$ \end{lemma} \begin{proof} The proof can be established by combining the Performance Difference Lemma and the Simulation Lemma. We refer to Corollary 3.1 in \citep{ross2012agnostic} or Lemma A.3 in \citep{sun2018dual} for a detailed proof. \end{proof} \begin{lemma}[Least Squares Generalization Bound] \label{lemma::ls} Given a dataset $\mathcal{H} = \{x_i, y_i\}_{i=1}^n$ where $x_i\in\mathcal{X}$ and $x_i, y_i\sim\nu$, and $y_i = f^*(x_i) + \epsilon_i$. Suppose $|y_i|\leq Y$ and $\epsilon_i$ is independently sampled noise. Given a function class $\mathcal{F}: \mathcal{X}\rightarrow [0, Y]$, we assume approximate realizable, i.e., $\min_{f\in\mathcal{F}}\EE_{x\sim\nu}\bigl[|f^*(x) - f(x)|^2\bigl]\leq \epsilon_{\text{approx}}$. Denote $\hat{f}$ as the least square solution, i.e., $\hat{f} = \argmin_{f\in\mathcal{F}}\sum_{i=1}^n \bigl(f(x_i) - y_i\bigr)^2$. With probability at least $1 - \delta$, we have \# \EE_{x\sim\nu}\Bigl[\bigl(\hat{f}(x) - f^*(x)\bigr)^2\Bigr]\leq \frac{22Y^2 \ln(|\mathcal{F}|/\delta)}{n} + 20\epsilon_{\text{approx}}. \# \end{lemma} \begin{proof} The result is standard and can be proved by using the Bernstein’s inequality and union bound. Detailed proof can be found at Lemma A.11 in \citep{agarwal2019reinforcement}. 
\end{proof} \begin{lemma}[Confidence sets with high probability] \label{set_high_prob} If the control parameter $\beta_t(\delta,\alpha)$ is set to \begin{equation} \label{beta_defn} \begin{aligned} \beta_t(\delta,\alpha) = 8\sigma^2 \log(N(\mathcal{F},\alpha,\lVert\cdot\rVert_2)/\delta) + 2\alpha t \left(8C + \sqrt{8\sigma^2\log(4t^2/\delta)}\right), \end{aligned} \end{equation} then for all $\delta>0$, $\alpha>0$ and $t\in\mathbb{N}$, the confidence set $\mathcal{F}_t =\mathcal{F}_t(\beta_t(\delta,\alpha))$ satisfies: \begin{equation} \begin{aligned} P\Bigl(f^*\in\bigcap\limits_{t}\mathcal{F}_t\Bigr)\geq 1-2\delta. \end{aligned} \end{equation} \end{lemma} \begin{proof} See \citep{osband2014model} Proposition 5 for a detailed proof. \end{proof} \begin{lemma}[Bound of Set Width Sum] \label{sum_of_width} If $\{\beta_t|t\in\mathbb{N}\}$ is nondecreasing with $\mathcal{F}_t = \mathcal{F}_t(\beta_t)$ and $\lVert f \rVert_2 \leq C$ for all $f\in\mathcal{F}$, then for a finite-horizon MDP with horizon $H$ we have \begin{equation} \begin{aligned} \sum_{t=1}^{T}\sum_{h=1}^H\omega_{t}(s_h, a_h) \leq 1 + HC\dim_{E}(\mathcal{F},T^{-1}) + 4\sqrt{\dim_{E}(\mathcal{F},T^{-1})\beta_{T}T}, \end{aligned} \end{equation} where $\omega_t(s,a)=\sup_{\underline{f},\overline{f}\in\mathcal{F}_t} \lVert \overline{f}(s,a) - \underline{f}(s,a)\rVert_2$. \end{lemma} \begin{proof} See \citep{osband2014model} Proposition 6 for a detailed proof. \end{proof} \section{Limitations of Eluder Dimension} \label{eluder_defn} In Theorem \ref{near_opt_bound}, the \textit{eluder dimension} $d_E$ appears in the Bayesian expected regret bound to capture how effectively the observed samples can extrapolate to unobserved transitions. For some specific function classes, Osband et al. \citep{osband2014model} provide the corresponding eluder dimension bounds, e.g., for (generalized) linear function classes, the quadratic function class, and finite MDPs; cf. Propositions 1--4 in \citep{osband2014model}. However, for non-linear models, Dong et al. \citep{dong2021provable} show that the $\varepsilon$-eluder dimension of one-layer neural networks is \textit{at least} exponential in the model dimension. Similar results are also established in \citep{li2021eluder}. We refer to Section 5 in \citep{dong2021provable} or Section 4 in \citep{li2021eluder} for details and more explanations. \section{Additional Related Work} \label{additional_relate} Some MBRL work also concerns iterative policy improvement. SLBO \citep{luo2018algorithmic} provides a trust-region policy optimization framework based on OFU. However, the conditions for monotonic improvement cannot be satisfied by most parameterized models \citep{luo2018algorithmic,dong2021provable}, which leads to a greedy algorithm in practice. Prior work that shares similarities with ours includes DPI \citep{sun2018dual} and GPS \citep{levine2014learning,montgomery2016guided}, both of which adopt dual policy optimization procedures. Both DPI and GPS leverage a \textit{locally} accurate model and use different objectives for imitating the intermediate policy within a trust-region. However, the policy imitation procedure updates the policy parameters in a \textit{supervised} manner, which poses additional challenges for effective exploration, resulting in unknown convergence results even with a simple model class. In contrast, CDPO takes the epistemic uncertainty into consideration and can be shown to achieve global optimality.
In fact, greedy model exploitation is provably optimal only in very limited cases, e.g., linear-quadratic regulator (LQR) settings \citep{mania2019certainty}. OFU-RL has been shown to achieve an optimal sublinear regret when applied to online LQR \citep{abbasi2011regret}, tabular MDPs \citep{jaksch2010near} and linear MDPs \citep{jin2020provably}. Among them, HUCRL \citep{curi2020efficient} is a deep algorithm proposed to deal with the joint optimization intractability in \eqref{ofu}. Besides, Russo and Van Roy \citep{russo2013eluder,russo2014learning} unify the bounds in various settings (e.g., finite or linear MDPs) by introducing an additional model complexity measure, the eluder dimension. Other complexity measures include the witness rank \citep{sun2019model}, linear dimensionality \citep{yang2020reinforcement} and sequential Rademacher complexity \citep{dong2021provable}. \section{Algorithm Instantiations} \label{instan} The model-based policy optimization solver $\texttt{MBPO}(\pi, \{f\}, \mathcal{J})$ in Algorithm \ref{alg:cdpo} can be instantiated as one of the following algorithms: Dyna-style policy optimization in Algorithm \ref{alg:CDPO-dyna}, model-based back-propagation in Algorithm \ref{alg:CDPO-bptt}, or model predictive control policy optimization in Algorithm \ref{alg:CDPO-mpc_policy}. By default, \texttt{MBPO} is instantiated as the Dyna solver (i.e. Algorithm \ref{alg:CDPO-dyna}) in our MuJoCo experiments and as the policy iteration solver in our $N$-Chain MDP experiments. We note that the instantiations are not restricted to the listed algorithms, and many other \texttt{MBPO} algorithms that augment policy learning with a predictive model can also be leveraged, e.g., model-based value expansion \citep{feinberg2018model,buckman2018sample}. In the Referential Update step, where no input policy is given to $\texttt{MBPO}(\cdot, \hat{f}_{t}^{LS}, \eqref{greedy_update})$, we initialize the policy as $\pi = \pi_{t-1}$, i.e. the reactive policy from the last iteration. \vskip4pt \noindent{\bf Dyna.} Dyna optimizes the policy on model-generated data with any model-free RL method, e.g., REINFORCE or actor-critic \citep{konda2000actor}. The state-action value can be estimated by learning a critic function or unrolling the model. In the Constrained Conservative Update, the input objective function $\mathcal{J}$ is \eqref{conserv_update}, which is constrained. Thus, a Lagrange multiplier is introduced, similar to model-free trust-region algorithms \citep{schulman2015trust,schulman2017proximal,abdolmaleki2018maximum}. \begin{algorithm} \caption{Dyna Model-Based Policy Optimization} \textbf{Input:} Policy $\pi$, model set $\{f\}$, objective function $\mathcal{J}$.
\begin{algorithmic}[1] \label{alg:CDPO-dyna} \STATE Initialize a simulation data buffer $\hat{\mathcal{D}}$ \STATE Sample a batch of initial states from the initial distribution $\zeta$ \STATE \textcolor{gray}{\(\triangleright\) Data simulation} \FOR{initial state sample $s_0$} \FOR{model $f$ in model set $\{f\}$} \FOR{timestep $h=1,...,H$} \STATE Sample action $\hat{a}_h\sim\pi(\cdot|\hat{s}_h)$ \STATE Sample simulation state $\hat{s}_{h+1}\sim f(\hat{s}_h, \hat{a}_h)$ \STATE Append simulation data to buffer $\hat{\mathcal{D}} = \hat{\mathcal{D}} \cup (\hat{s}_{h}, \hat{a}_{h}, r_h, \hat{s}_{h+1})$ \ENDFOR \ENDFOR \ENDFOR \STATE \textcolor{gray}{\(\triangleright\) Policy optimization with any model-free algorithm \texttt{ModelFree}} \STATE Objective optimization of policy on the simulated data $\pi\leftarrow\texttt{ModelFree}(\hat{\mathcal{D}}, \pi)$ \end{algorithmic} \end{algorithm} \vskip4pt \noindent{\bf Back-Propagation Through Time.} BPTT \citep{kurutach2018model,xu2022accelerated} is a first-order model-based policy optimization framework based on pathwise gradient (or reparameterization gradient) \citep{suh2022differentiable}. There are also several variants including Stochastic Value Gradients (SVG) \citep{heess2015learning}, Model-Augmented Actor-Critic (MAAC) \citep{clavera2020model}, and Probabilistic Inference for Learning COntrol (PILCO) \citep{deisenroth2011pilco}. Specifically, the policy parameters are updated by directly computing the derivatives of the performance with respect to the parameters. When the optimization of objective function is constrained, the accumulating step (Algorithm \ref{alg:CDPO-bptt} Line 9) can be $L\leftarrow L + \gamma^h r(\hat{s}_h, \hat{a}_h) - \lambda D_{\text{KL}}$, where $\lambda$ is the Lagrangian multiplier and $D_{\text{KL}}$ is the corresponding KL constraint. \begin{algorithm} \caption{Model-Based Back-Propagation Policy Optimization} \textbf{Input:} Policy $\pi$, model set $\{f\}$, objective function $\mathcal{J}$. \begin{algorithmic}[1] \label{alg:CDPO-bptt} \STATE Initialize a simulation data buffer $\hat{\mathcal{D}}$ \STATE Start from initial state $s_0$ \STATE Reset $L\leftarrow 0$ \STATE \textcolor{gray}{\(\triangleright\) Data simulation} \FOR{model $f$ in model set $\{f\}$} \FOR{timestep $h=1,...,H$} \STATE Sample action $\hat{a}_h\sim\pi(\cdot|\hat{s}_h)$ \STATE Sample simulation state $\hat{s}_{h+1}\sim f(\hat{s}_h, \hat{a}_h)$ \STATE Accumulate reward and constraint to $L$ \ENDFOR \ENDFOR \STATE \textcolor{gray}{\(\triangleright\) Policy optimization} \STATE Compute policy gradient with back-propagation through time \STATE Objective optimization of policy $\pi\leftarrow\texttt{PolicyGradient}$ \end{algorithmic} \end{algorithm} \vskip4pt \noindent{\bf Model Predictive Control Policy Optimization.} MPC is a \textit{planning} framework that directly generates optimal action sequences under the model. Different from the above model-augmented policy optimization methods, MPC policy optimization directly generates optimal action sequences under the model and then distills the policy. Specifically, the pseudocode in Algorithm \ref{alg:CDPO-mpc_policy} begins with initial actions generated by the policy. Then with a shooting method, e.g., the cross-entropy method (CEM), the actions are refined and the policy that generates these optimal actions are distilled. 
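For concreteness, the shooting step can be sketched as follows; this is a minimal Python illustration in which the toy model, the reward function, and all names and hyperparameters (\texttt{cem\_refine}, \texttt{n\_elites}, etc.) are illustrative assumptions rather than the implementation used in our experiments.
\begin{verbatim}
# Minimal sketch of CEM-based action-sequence refinement under a learned model.
# The model, reward and hyperparameters below are illustrative stand-ins.
import numpy as np

def cem_refine(init_actions, model_step, reward_fn, s0,
               n_samples=64, n_elites=8, n_iters=5, noise_std=0.5):
    """Refine an (H, action_dim) action sequence with the cross-entropy method."""
    mean = np.array(init_actions, dtype=float)
    std = np.full_like(mean, noise_std)
    H, act_dim = mean.shape
    for _ in range(n_iters):
        # Sample candidate action sequences around the current mean.
        candidates = mean + std * np.random.randn(n_samples, H, act_dim)
        returns = np.zeros(n_samples)
        for i, seq in enumerate(candidates):
            s = np.array(s0, dtype=float)
            for a in seq:                      # roll out under the learned model
                returns[i] += reward_fn(s, a)
                s = model_step(s, a)
        elites = candidates[np.argsort(returns)[-n_elites:]]   # keep the best
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean        # refined action sequence, to be distilled into the policy

# Toy usage with a linear model and a quadratic reward (purely illustrative).
model_step = lambda s, a: 0.9 * s + a
reward_fn = lambda s, a: -float(s @ s) - 0.1 * float(a @ a)
refined = cem_refine(np.zeros((10, 2)), model_step, reward_fn, s0=np.ones(2))
\end{verbatim}
The refined sequence plays the role of the elite actions that are subsequently distilled into the policy, e.g., by behavior cloning.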
Below, the algorithm to obtain the refined actions \texttt{EliteActions} can be CEM with action noise added to the action or policy parameter, i.e., POPLIN-A and POPLIN-P in \citep{wang2019exploring}. The policy can be updated by \texttt{UpdatePolicy} using behavior cloning. \vskip4pt \noindent{\bf Policy Iteration for Tabular MDPs.} In tabular settings where the state space $\mathcal{S}$ and action space $\mathcal{A}$ are discrete and countable, we can perform policy iteration under each model in the model set $\{f\}$. Here, the model is the tabular representation instead of function approximators. Based on the state-action values under various models, the optimal action at each state is the one that maximizes the weighted average of the values within the constraint of total variation distance. \begin{algorithm} \caption{Model Predictive Control Policy Optimization} \textbf{Input:} Policy $\pi$, model set $\{f\}$, objective function $\mathcal{J}$, algorithm to update actions \texttt{EliteActions}, algorithm to update policy \texttt{UpdatePolicy}. \begin{algorithmic}[1] \label{alg:CDPO-mpc_policy} \STATE Start from initial state $s_0$ \STATE Reset $J\leftarrow 0$ \STATE \textcolor{gray}{\(\triangleright\) Model-based planning} \FOR{model $f$ in model set $\{f\}$} \FOR{timestep $h=1,...,H$} \STATE Sample action $\hat{a}_h\sim\pi(\cdot|\hat{s}_h)$ \STATE Sample simulation state $\hat{s}_{h+1}\sim f(\hat{s}_h, \hat{a}_h)$ \STATE Accumulate reward and constraint to $J$ \ENDFOR \ENDFOR \STATE $\textbf{a}\leftarrow \texttt{EliteActions}(J,\hat{a}_{1:N})$ \STATE \textcolor{gray}{\(\triangleright\) Policy distillation} \STATE $\pi\leftarrow\texttt{UpdatePolicy}(\textbf{a})$ \end{algorithmic} \end{algorithm} \section{Experimental Settings and Results in $N$-Chain MDPs} \subsection{Settings of MuJoCo Experiments} \label{exp_settings} In the MuJoCo experiments, we use a $5$-layer neural network to approximate the dynamical model. We use deterministic ensembles \citep{chua2018deep} to capture the model epistemic uncertainty. Specifically, different ensembles are learned with independent transition data to construct the $1$-step ahead confidence interval at every timestep. Each ensemble is separately trained using Adam \citep{kingma2014adam}. And the number of ensemble heads can be set to 3, 4, or, 5, each of which is shown to be able to provide considerable performance in our experiments. All the experiments are repeated with $6$ random seeds. Since neural networks are not calibrated in general, i.e., the model uncertainty set is not guaranteed to contain the real dynamics, we follow HUCRL \citep{curi2020efficient} to re-calibrate \citep{kuleshov2018accurate} the model. Our MuJoCo code is also built upon the HUCRL GitHub repository. When using the Dyna model-based policy optimization, the number of gradient steps for each optimization procedure in an iteration is set to $20$. And we empirically find that the KL divergence (or total variance) constraint makes the algorithm more efficient when computing the $\argmax$ in the optimization step, since optimizing from $\pi_{t-1}$ at iteration $t$ needs fewer policy gradient steps if the policy update is constrained within a certain trust region. The task-specific and task-common settings and parameters are listed below in Table \ref{tab:mbrl_params}. 
\begin{table}[h] \caption{Experimental parameters.} \label{tab:mbrl_params} \centering \begin{tabular}{c|ccc} & Inverted Pendulum & Pusher & Half-Cheetah \\\hline\hline episode length $H$ & 200 & 150 & 1000 \\ dimension of state & 4 & 23 & 18 \\ dimension of action & 1 & 7 & 6 \\ action penalty & 0.001 & 0.1 & 0.1 \\\hline hidden nodes & \multicolumn{3}{c}{(200, 200, 200, 200, 200)} \\ activation function & \multicolumn{3}{c}{Swish} \\ optimizer & \multicolumn{3}{c}{Adam} \\ learning rate & \multicolumn{3}{c}{$10^{-3}$} \\ \end{tabular} \end{table} \subsection{Experiments in $N$-Chain MDPs} \label{chain_exp} Besides the experiments in MuJoCo, we also conduct tabular experiments in the $N$-Chain environment that is proposed in \citep{markou2019bayesian}. Specifically, there are in total $2$ actions and $N$ states in an MDP. The initial state is $s_1$ and the agent can choose to go \textit{left} or \textit{right} at each of the $N$ states. The \textit{left} action always succeeds and moves the agent to the left state, giving reward $r\sim\mathcal{N}(0, \delta^2)$. Taking the \textit{right} action at state $s_1, \ldots, s_{N-1}$ gives reward $r\sim\mathcal{N}(-\delta, \delta^2)$ and succeeds with probability $1 - 1/N$, moving the agent to the right state and otherwise moving the agent to the left state. Taking the \textit{right} action at $s_N$ gives reward $r\sim\mathcal{N}(1, \delta^2)$ and moves the agent back to $s_1$ with probability $1 - 1/N$. We set $\delta = 0.1\exp{(-N/4)}$, such that going \textit{right} is the optimal action at least up to $N=40$. As the number of states $N$ is increasing, the agent needs \textit{deep} exploration (e.g. guided by uncertainty) instead of \textit{dithering} exploration (e.g. epsilon-greedy exploration), such that the agent can keep exploring despite receiving negative rewards \citep{osband2019deep}. \begin{figure*} \caption{Illustration of the $N$-Chain MDP. Blue arrows correspond to action \textit{right} (optimal) and red arrows correspond to action \textit{left} (suboptimal). The figure is copied from \citep{markou2019bayesian}.} \label{fig:$N$-Chain_mdp} \end{figure*} For this reason, we evaluate the proposed algorithm CDPO and compare it with other Bayesian RL algorithms, including Bayesian Q-Learning (BQL) \citep{dearden1998bayesian}, Posterior Sampling for RL (PSRL) \citep{osband2013more}, the Uncertainty Bellman Equation (UBE) \citep{o2018uncertainty} and Moment Matching (MM) approach \citep{markou2019bayesian}. For CDPO, the dual optimization steps are solved by policy iteration, and the conservative update is performed within the total variation distance $\eta=0.2$ (c.f. Policy Iteration for Tabular MDPs in Appendix \ref{instan}). We choose conjugate priors to represent the posterior distribution: we use a Categorical-Dirichlet model for discrete transition distribution at each $(s, a)$, and a Normal-Gamma (NG) model for continuous reward distribution at each $(s, a, s')$. \begin{figure*} \caption{Posterior evolution of CDPO algorithm in the $8$-Chain MDP.} \label{fig:cdpo_chain} \end{figure*} \vskip4pt \noindent{\bf Evolution of Posterior.} Figure \ref{fig:cdpo_chain} demonstrates the evolution of the posterior of the CDPO algorithm in an $8$-Chain MDP. As training progresses the posteriors concentrate on the true optimal state-action values and the behavior policy converges on the optimal one. The fast reduction of uncertainty is central to achieving principled and efficient exploration. 
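For concreteness, the conjugate updates that maintain these posteriors can be sketched as follows; in this minimal Python illustration the class names, prior hyperparameters, and toy usage are assumptions for exposition, not the exact code used in our experiments.
\begin{verbatim}
# Sketch of the conjugate posterior updates in the tabular N-Chain setting.
# Hyperparameter values and class names are illustrative assumptions.
import numpy as np

class DirichletTransition:            # posterior over P(. | s, a)
    def __init__(self, n_states, prior=1.0):
        self.alpha = np.full(n_states, prior)     # Dirichlet concentrations
    def update(self, s_next):
        self.alpha[s_next] += 1.0                 # conjugate count update
    def sample(self):
        return np.random.dirichlet(self.alpha)    # one plausible transition row
    def mean(self):
        return self.alpha / self.alpha.sum()

class NormalGammaReward:              # posterior over the reward at (s, a, s')
    def __init__(self, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0):
        self.mu, self.lam, self.a, self.b = mu0, lam0, a0, b0
    def update(self, r):
        # Standard single-observation Normal-Gamma update.
        mu_new = (self.lam * self.mu + r) / (self.lam + 1.0)
        self.b += self.lam * (r - self.mu) ** 2 / (2.0 * (self.lam + 1.0))
        self.mu, self.lam, self.a = mu_new, self.lam + 1.0, self.a + 0.5
    def mean(self):
        return self.mu                            # posterior mean reward

# Toy usage: incorporate one observed transition (s, a, r, s').
trans, rew = DirichletTransition(n_states=8), NormalGammaReward()
trans.update(s_next=3); rew.update(r=-0.1)
\end{verbatim}
Maintaining one such pair of posteriors per state-action (and per reward triple) is what allows the expected values in Figure \ref{fig:cdpo_chain} to concentrate as data accrues.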
Compared to the posterior evolution of the PSRL algorithm corresponding to the optimal actions, i.e. the bottom row of curves in Figure \ref{fig:psrl_chain}, the expected value estimates of CDPO are closer to the ground-truth, and the variance is also smaller. Notably, the variance of CDPO might be higher for suboptimal actions, e.g., $s=8, a=\text{left}$ (the last image of the first row in Figure \ref{fig:cdpo_chain}). It is due to the conservative nature of CDPO that it only cares about the \textit{expected} value, instead of the value of a sampled (imperfect) model as in PSRL. In other words, as long as the uncertainty is large, the PSRL agents can take suboptimal actions to explore the uninformative regions, which causes the inefficient over-exploration issue. \begin{figure*} \caption{Posterior evolution of PSRL algorithm in the $8$-Chain MDP.} \label{fig:psrl_chain} \end{figure*} \vskip4pt \noindent{\bf Cumulative Regret.} We compare CDPO and previous algorithms on the $N$-Chain MDPs with various state sizes $N$ by measuring the cumulative regret of an oracle agent following the optimal policy. The results are shown in Figure \ref{fig:curve_regret}. To make the performances comparable on the same scale, we also provide the normalized regret in Figure \ref{fig:com_regret}. We observe that when the size of state space $N$ is relatively smaller, e.g. $N\leq 5$, CDPO, PSRL, BQL, and MM algorithms achieve sublinear regret. The performances of these algorithms are also comparable, showing the necessity of deep exploration. On the contrary, Q-Learning which only relies on dithering exploration mechanisms fail to find the optimal strategy. However, as $N$ is increasing, where the exploration must be effective for the agent to continually explore despite receiving negative rewards, the CDPO agents offer significantly lower cumulative regret and faster convergence. \begin{figure*} \caption{Comparison of cumulative regret.} \label{fig:curve_regret} \end{figure*} \begin{figure*} \caption{Performance comparison in terms of regret to the oracle.} \label{fig:com_regret} \end{figure*} \section{Algorithmic Comparisons between MBRL Algorithms} \label{other_algo} We provide algorithmic comparisons of four MBRL frameworks, including greedy model exploitation algorithms, OFU-RL, PSRL, and the proposed CDPO algorithm. The differences mainly lie in the model selection and policy update procedures. The high-level pseudocode is given in Algorithm \ref{alg:greedy_compare}, \ref{alg:ofu_compare}, \ref{alg:psrl_compare} and \ref{alg:CDPO_compare}. Among them, the greedy model exploitation algorithm is a naive instantiation, where other instantiations can include the ones that augment Algorithm \ref{alg:greedy_compare} with e.g., a dual framework that involves a locally accurate model and a supervised imitating procedure \citep{sun2018dual,levine2014learning}. In Algorithm \ref{alg:greedy_compare}, $\Tilde{f}_t$ can either be a probabilistic model or a deterministic model (with additive noise), which can be estimated via Maximum Likelihood Estimation (MLE) or minimizing the Mean Squared Error (MSE), respectively. \begin{figure} \caption{Naive Greedy Model Exploitation} \label{alg:greedy_compare} \caption{OFU-RL Algorithm} \label{alg:ofu_compare} \caption{PSRL Algorithm} \label{alg:psrl_compare} \caption{CDPO Algorithm} \label{alg:CDPO_compare} \end{figure} \section{Societal Impact} \label{societal} For real-world applications, interactions with the system imply energy or economic costs. 
With its practical efficiency, CDPO reduces this training cost and is aligned with the principle of responsible AI. However, as an RL algorithm, CDPO inevitably raises safety concerns, e.g., self-driving cars make mistakes during RL training. Although CDPO does not explicitly address these concerns, it may be used in conjunction with safety controllers to minimize negative impacts, while drawing on its MBRL roots to enable efficient learning. \end{document}
\begin{document} \author{Victoria Gitman} \today \address{New York City College of Technology (CUNY), Mathematics, 300 Jay Street, Brooklyn, NY 11201 USA} \email{[email protected]} \title{Proper and Piecewise Proper Families of Reals} \maketitle \begin{abstract} I introduced the notions of proper and piecewise proper families of reals to make progress on a long standing open question in the field of models of Peano Arithmetic about whether every Scott set is the standard system of a model of {\rm PA}. A Scott set is a family of reals closed under $\Delta_1$ definability and satisfying weak Konig's Lemma. A family of reals $\mathfrak{X}$ is proper if it is arithmetically closed and the quotient Boolean algebra $\mathfrak{X}/\mathrm{Fin}$ is a proper partial order. A family is piecewise proper if it is the union of a chain of proper families of size $\leq\omega_1$. I showed that under the Proper Forcing Axiom, every proper or piecewise proper family of reals is the standard system of a model of PA. Here, I explore the question of the existence of proper and piecewise proper families of reals of different cardinalities. \end{abstract} \section{Introduction} One of the central concepts in the field of models of Peano Arithmetic is the \emph{standard system} of a model of {\rm PA}. The \emph{standard system} of a model of {\rm PA} is the collection of subsets of the natural numbers that arise as intersections of the definable sets of the model with its \emph{standard part} $\mathbb {N}$. The notion of a \emph{Scott set} captures three key properties of standard systems. \begin{definition} $\mathfrak{X}\subseteq \mathcal{P}(\mathbb {N})$ is a \emph{Scott set} if \begin{itemize} \item [(1)] $\mathfrak{X}$ is a Boolean algebra of sets. \item [(2)] If $A\in\mathfrak{X}$ and $B$ is Turing computable from $A$, then $B\in\mathfrak{X}$. \item [(3)] If $T$ is an infinite binary tree coded by a set in $\mathfrak{X}$, then $\mathfrak{X}$ has a set coding some path through $T$. \end{itemize} \end{definition} In 1962, Scott showed that every standard system is a Scott set and the partial converse that every \emph{countable} Scott set is the standard system of a model of {\rm PA} \cite{scott:ssy}. The question of whether \emph{every} Scott is the standard system of a model of {\rm PA} became known in the folklore as Scott's Problem. In 1982, Knight and Nadel extended Scott's result to Scott sets of size $\omega_1$ \cite{knight:scott}. They showed that every Scott set of size $\omega_1$ is the standard system of a model of {\rm PA}. It has proved very difficult to make further progress on Scott's Problem. My approach, following Engstr\"om \cite{engstrom:thesis} and suggested several years earlier by Hamkins, Marker, etc., has been to use the set theoretic techniques of forcing and the forcing axioms. We can associate with every family of reals $\mathfrak{X}$, the poset $\mathfrak{X}/\mathrm{Fin}$ which consists of the infinite sets of $\mathfrak{X}$ under the ordering of \emph{almost inclusion}. Engstr\"om in \cite{engstrom:thesis} introduced the use of this poset in connection with Scott's Problem. A family of reals is \emph{arithmetically closed} if whenever $A$ is in it and $B$ is arithmetically definable from $A$, then $B$ is also in it. The arithmetic closure of $\mathfrak{X}$ is an essential ingredient in the constructions that make the posets $\mathfrak{X}/\mathrm{Fin}$ useful in investigating properties of uncountable models of {\rm PA} (for details of the constructions, see \cite{gitman:scott}). 
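Two standard observations help to situate arithmetic closure relative to the conditions defining a Scott set. First, for any set $A$, the halting problem relativized to $A$ is arithmetically definable from $A$ but never Turing computable from $A$, so arithmetic closure is strictly stronger than condition (2). Second, an arithmetically closed Boolean algebra of sets automatically satisfies condition (3), since the leftmost branch through the extendible nodes of an infinite binary tree $T$ coded in the family is arithmetically definable from $T$. It is this stronger closure that the constructions of \cite{gitman:scott} rely on.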
For this reason, whenever we view a family of reals as a poset we will always assume arithmetic closure. A family $\mathfrak{X}$ is \emph{proper} if it is arithmetically closed and the poset $\mathfrak{X}/\mathrm{Fin}$ is proper. A family $\mathfrak{X}$ is piecewise proper if it is the union of a chain of proper families each of which has size $\leq\omega_1$. I showed in \cite{gitman:scott} that under the Proper Forcing Axiom ({\rm PFA}), every proper or piecewise proper family of reals is the standard system of a model of {\rm PA}. I will give an extended discussion of properness and the {\rm PFA} in Section \ref{sec:proper}. Throughout the paper, I equate \emph{reals} with subsets of $\mathbb {N}$. It is easy to see that every countable arithmetically closed family of reals is proper and $\mathcal{P}(\mathbb {N})$ is proper as well (see Section \ref{sec:forcing}). Every arithmetically closed family of size $\leq\omega_1$ is trivially piecewise proper since it is the union of a chain of countable arithmetically closed families. It becomes much more difficult to find instances of uncountable proper families of reals other than $\mathcal{P}(\mathbb {N})$. Also, it was not clear for a while whether there are piecewise proper families of of size larger than $\omega_1$. My main results are: \begin{theorem} If {\rm CH} holds, then $\mathcal{P}^V(\mathbb {N})/\mathrm{Fin}$ remains proper in any generic extension by a c.c.c.\ poset. \end{theorem} \begin{theorem} There is a generic extension of $V$ by a c.c.c.\ poset, which contains continuum many proper families of reals of size $\omega_1$. \end{theorem} \begin{theorem} There is a generic extension of $V$ by a c.c.c.\ poset, which contains continuum many piecewise proper families of reals of size $\omega_2$. \end{theorem} \section{Proper Posets and the PFA}\label{sec:proper} Proper posets were invented by Shelah, who sought a class of $\omega_1$ preserving posets that would extend the c.c.c.\ and countably closed classes of posets and be preserved under iterations with countable support. The Proper Forcing Axiom ({\rm PFA}) was introduced by Baumgartner who showed that it is consistent by assuming the existence of a supercompact cardinal \cite{baumgartner:pfa}. This remains the best known upper bound on the consistency of {\rm PFA}. Recall that for a cardinal $\lambda$, the set $H_\lambda$ is the collection of all sets whose transitive closure has size less than $\lambda$. Let $\mathbb{P}$ be a poset and $\lambda$ be a cardinal greater than $2^{|\mathbb{P}|}$. Since we can always take an isomorphic copy of $\mathbb{P}$ on the cardinal $|\mathbb{P}|$, we can assume without loss of generality that $\mathbb{P}$ and $\mathcal{P}(\mathbb{P})$ are elements of $H_\lambda$. In particular, we want to ensure that all dense subsets of $\mathbb{P}$ are in $H_\lambda$. Let $M$ be a countable elementary submodel of $H_\lambda$ containing $\mathbb{P}$ as an element. If $G$ is a filter on $\mathbb{P}$, we say that $G$ is $M$-\emph{generic} if for every maximal antichain $A\in M$ of $\mathbb{P}$, the intersection $G\cap A\cap M\neq\emptyset$. It must be explicitly specified what $M$-generic means in this context since the usual notion of generic filters makes sense only for transitive structures and $M$ is not necessarily transitive. This definition of $M$-generic is closely related to the definition for transitive structures. To see this, let $M^*$ be the Mostowski collapse of $M$ and $\mathbb{P}^*$ be the image of $\mathbb{P}$ under the collapse. 
Let $G^*\subseteq \mathbb{P}^*$ be the pointwise image of $G\cap M$ under the collapse. Then $G$ is $M$-generic if and only if $G^*$ is $M^*$-generic for $\mathbb{P}^*$ in the usual sense. Later we will need the following important characterization of $M$-generic filters. \begin{theorem} If\/ $\mathbb{P}$ is a poset in $M\prec H_\lambda$, then a $V$-generic filter $G\subseteq\mathbb{P}$ is $M$-generic if and only if $M\cap \text{\emph{Ord}}=M[G]\cap\text{\emph{Ord}}$. \emph{(See, for example, \cite{shelah:proper}, p. 105)}\label{th:equproper} \end{theorem} \begin{definition} Let $\mathbb{P}\in H_\lambda$ be a poset and $M$ be an elementary submodel of $H_\lambda$ containing $\mathbb{P}$. Then a condition $q\in\mathbb{P}$ is $M$-\emph{generic} if and only if every $V$-generic filter $G\subseteq\mathbb{P}$ containing $q$ is $M$-generic. \end{definition} \begin{definition} A poset $\mathbb{P}$ is \emph{proper} if for every $\lambda> 2^{|\mathbb{P}|}$ and every countable $M\prec H_\lambda$ containing $\mathbb{P}$, for every $p\in\mathbb{P}\cap M$, there is an $M$-generic condition below $p$. \end{definition} When proving that a poset is proper it is often easier to use the following equivalent characterization which appears in \cite{shelah:proper} (p.\ 102). \begin{theorem}\label{th:equivproper} A poset $\mathbb{P}$ is proper if there exists a $\lambda>2^{|\mathbb{P}|}$ and a club of countable $M\prec H_\lambda$ containing $\mathbb{P}$, such that for every $p\in\mathbb{P}\cap M$, there is an $M$-generic condition below $p$. \end{theorem} Countably closed posets and c.c.c.\ posets are proper and all proper posets preserve $\omega_1$ \cite{shelah:proper}. \begin{definition} \emph{The Proper Forcing Axiom} ({\rm PFA}) is the assertion that for every proper poset $\mathbb{P}$ and every collection $\mathcal D$ of at most $\omega_1$ many dense subsets of $\mathbb{P}$, there is a filter on $\mathbb{P}$ that meets all of them. \end{definition} The Proper Forcing Axiom decides the size of the continuum. It was shown in \cite{veli:pfa} that under {\rm PFA}, continuum is $\omega_2$. \section{Proper and Piecewise Proper Families}\label{sec:forcing} Let $\mathfrak{X}$ be a family of reals. Define the poset $\mathfrak{X}/\mathrm{Fin}$ to consist of the infinite sets in $\mathfrak{X}$ under the ordering of almost inclusion. That is, for infinite $A$ and $B$ in $\mathfrak{X}$, we say that $A\leq B$ if and only if $A\subseteq_\mathrm{Fin} B$. Observe that $\mathfrak{X}/\mathrm{Fin}$ is forcing equivalent to forcing with the Boolean algebra $\mathfrak{X}$ modulo the ideal of finite sets. A familiar and thoroughly studied instance of this poset is $\mathcal{P}(\omega)/\mathrm{Fin}$. For a property of posets $\mathscr P$, if $\mathfrak{X}$ is an arithmetically closed family of reals and $\mathfrak{X}/\mathrm{Fin}$ has $\mathscr P$, I will simply say that \emph{$\mathfrak{X}$ has property $\mathscr P$}. An important point to be noted here is that whenever a family $\mathfrak{X}$ is discussed as a poset, I will always be assuming that \emph{it is arithmetically closed}. Recall that the reason for this is the need for arithmetic closure of $\mathfrak{X}$ in the constructions with models of {\rm PA} in which $\mathfrak{X}/\mathrm{Fin}$ is used. The easiest way to show that a poset is proper to show that it is c.c.c.\ or countably closed. Thus, every countable arithmetically closed family is proper since it is c.c.c.\ and $\mathcal{P}(\mathbb {N})$ is proper since it is countably closed. 
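For instance, the countable closure of $\mathcal{P}(\mathbb {N})/\mathrm{Fin}$ is witnessed by the usual diagonal construction: given $A_0\supseteq_\mathrm{Fin} A_1\supseteq_\mathrm{Fin}\cdots$, each finite intersection $A_0\cap\cdots\cap A_n$ is infinite, so we may choose $a_n\in A_0\cap\cdots\cap A_n$ with $a_n>a_{n-1}$; the set $A=\{a_n\mid n\in\mathbb {N}\}$ is then infinite and satisfies $A\subseteq_\mathrm{Fin} A_m$ for every $m$.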
Unfortunately these two conditions do not give us any other instances of proper families. \begin{theorem}\label{th:wrong} Every c.c.c.\ family of reals is countable. \end{theorem} \begin{proof} Let $\mathfrak{X}$ be an arithmetically closed family of reals. If $x$ is a finite subset of $\mathbb {N}$, let $\ulcorner x \urcorner$ denote the code of $x$ using G\"{o}del's coding. For every $A\in\mathfrak{X}$, define an associated $A'=\{\ulcorner A\cap n\urcorner\mid n\in\mathbb {N}\}$. Clearly $A'$ is definable from $A$, and hence in $\mathfrak{X}$. Observe that if $A\neq B$, then $|A'\cap B'|<\omega$. Hence if $A\neq B$, we get that $A'$ and $B'$ are incompatible in $\mathfrak{X}/\mathrm{Fin}$. It follows that $\mathscr A=\{A'\mid A\in\mathfrak{X}\}$ is an antichain of $\mathfrak{X}/\mathrm{Fin}$ of size $|\mathfrak{X}|$. This shows that $\mathfrak{X}/\mathrm{Fin}$ always has antichains as large as the whole poset. \end{proof} Thus, the poset $\mathfrak{X}/\mathrm{Fin}$ has the worst possible chain condition, namely $|\mathfrak{X}|^+$-c.c.. \begin{theorem} \label{th:countclosed} Every countably closed family of reals is $\mathcal{P}(\mathbb {N})$. \end{theorem} \begin{proof} I will show that every $A\subseteq \mathbb {N}$ is in $\mathfrak{X}$. Define a sequence of subsets $\langle B_n\mid n\in\omega\rangle$ by $B_n=\{m\in\mathbb {N}\mid (m)_n=\chi_A(n)\}$ where $\chi_A$ is the characteristic function of $A$. Let $A_m=\cap_{n\leq m}B_n$ and observe that each $A_m$ is infinite. Thus, $A_0\geq A_1\geq\cdots \geq A_m\geq\ldots$ are elements of $\mathfrak{X}/\mathrm{Fin}$. By countable closure, there exists $C\in \mathfrak{X}/\mathrm{Fin}$ such that $C\subseteq_{\mathrm{Fin}}A_m$ for all $m\in \mathbb {N}$. Thus, $C\subseteq_\mathrm{Fin} B_n$ for all $n\in \mathbb {N}$. It follows that $A=\{n\in\mathbb {N}\mid\exists m\,\forall k\in C\,\text{ if }k>m, \text{ then } (k)_n=1\}$. This shows that $A$ is arithmetic in $C$, and hence $A\in\mathfrak{X}$ by arithmetic closure. Since $A$ was arbitrary, this concludes the proof that $\mathfrak{X}=\mathcal{P}(\mathbb {N})$. \end{proof} The assumption that $\mathfrak{X}$ is arithmetically closed is not necessary for Theorem \ref{th:countclosed}. Any family of reals $\mathfrak{X}$ such that $\mathfrak{X}/\mathrm{Fin}$ is countably closed must be arithmetically closed (see \cite{gitman:scott}) . Enayat showed in \cite{enayat:endextensions} that {\rm ZFC} proves the existence of an arithmetically closed family of size $\omega_1$ which collapses $\omega_1$, and hence is \emph{not} proper. Later Enayat and Shelah showed in \cite{shelah:borel} that there is a \emph{Borel} arithmetically closed family of size $\omega_1$ which is \emph{not} proper as well. I will show below that it is consistent with {\rm ZFC} that there are continuum many proper families of size $\omega_1$ and it is consistent with {\rm ZFC} that there are continuum many piecewise proper families of size $\omega_2$. But first I will consider the question of when does forcing to add new reals preserve the properness of the reals of the ground model. I will show that if {\rm CH} holds, forcing with a c.c.c.\ poset preserves the properness of the reals of the ground model. \begin{lemma}\label{le:intersect} Let $\mathfrak{X}_0\subseteq\mathfrak{X}_1\subseteq\cdots\subseteq \mathfrak{X}_\xi\subseteq\cdots$ for $\xi<\omega_1$ be a continuous chain of countable families of reals and let $\mathfrak{X}=\cup_{\xi<\omega_1} \mathfrak{X}_\xi$. 
If $M$ is a countable elementary substructure of some $H_\lambda$ and $\langle \mathfrak{X}_\xi\mid\xi<\omega_1\rangle\in M$, then $M\cap \mathfrak{X}=\mathfrak{X}_\alpha$ where $\alpha=\text{\emph{Ord}}^M\cap\omega_1$. \end{lemma} \begin{proof} Let $\alpha=\text{Ord}^M\cap\omega_1$. Suppose $\xi\in\alpha$, then $\xi\in M$, and hence $\mathfrak{X}_\xi\in M$. Since $\mathfrak{X}_\xi$ is countable, it follows that $\mathfrak{X}_\xi\subseteq M$. Thus, $\mathfrak{X}_\alpha\subseteq\mathfrak{X}\cap M$. Now suppose $A\in \mathfrak{X}\cap M$, then the least $\xi$ such that $A\in\mathfrak{X}_\xi$ is definable in $H_\lambda$. It follows that $\xi\in M$, and hence $\xi\in\alpha$. Thus, $\mathfrak{X}\cap M\subseteq \mathfrak{X}_\alpha$. \end{proof} \begin{lemma}\label{le:count} Suppose $\mathbb{P}$ is a c.c.c.\ poset and $G\subseteq \mathbb{Q}$ is $V$-generic for a countably closed poset $\mathbb{Q}$. Then $\mathbb{P}$ remains c.c.c.\ in $V[G]$. \end{lemma} \begin{proof} Suppose $\mathbb{P}$ does not remain c.c.c.\ in $V[G]$. Fix a $\mathbb{Q}$-name $\dot{A}$ and $r\in \mathbb{Q}$ such that $r\Vdash ``\dot{A}$ is a maximal antichain of $\check{\mathbb{P}}\text{ of size }\omega_1"$. Choose $q_0\leq r$ and $a_0\in \mathbb{P}$ such that $q_0\Vdash \check{a_0}\in \dot{A}$. Suppose that we have defined $q_0\geq q_1\geq\cdots\geq q_\xi\geq\cdots$ for $\xi<\beta$ where $\beta$ is some countable ordinal, together with a corresponding sequence $\langle a_\xi\mid \xi<\beta\rangle$ of elements of $\mathbb{P}$ such that $q_\xi\Vdash \check{a}_\xi\in \dot{A}$ and $a_{\xi_1}\neq a_{\xi_2}$ for all $\xi_1<\xi_2$. By countable closure of $\mathbb{Q}$, we can find $q\in\mathbb{Q}$ such that $q\leq q_\xi$ for all $\xi<\beta$. Let $q_\beta\leq q$ and $a_\beta\in \mathbb{P}$ such that $q_\beta\Vdash \check{a}_\beta\in \dot{A}$ and $a_\beta\neq a_\xi$ for all $\xi<\beta$. Such $a_\beta$ must exist since we assumed $r\Vdash ``\dot{A} \text{ is a maximal antichain of }\check{\mathbb{Q}}\text{ of size }\omega_1"$ and $q\leq r$. Thus, we can build a descending sequence $\langle q_\xi\mid \xi<\omega_1\rangle$ of elements of $\mathbb{Q}$ and a corresponding sequence $\langle a_\xi\mid \xi<\omega_1\rangle$ of elements of $\mathbb{P}$ such that $q_\xi\Vdash \check{a}_\xi\in\dot{A}$. But clearly $\langle a_\xi\mid \xi<\omega_1\rangle$ is an antichain in $V$ of size $\omega_1$, which contradicts the assumption that $\mathbb{P}$ was c.c.c.. \end{proof} \begin{theorem}\label{th:oldreals} If {\rm CH} holds, then $\mathcal{P}^V(\mathbb {N})/\mathrm{Fin}$ remains proper in any generic extension by a c.c.c.\ poset. \end{theorem} \begin{proof} Let $\mathbb{P}$ be a c.c.c.\ poset and fix a $V$-generic $g\subseteq \mathbb{P}$. In $V$, let $\mathcal{P}(\mathbb {N})=\cup_{\xi<\omega_1} \mathfrak{X}_\xi$ where each $\mathfrak{X}_\xi$ is countable and $\mathfrak{X}_0\subseteq\mathfrak{X}_1\subseteq\cdots\subseteq\mathfrak{X}_\xi\subseteq\cdots$ is a continuous chain. For sufficiently large cardinals $\lambda$, it is easy to see that $H_\lambda^{V[g]}=H_\lambda[g]$. The countable elementary substructures of $H_\lambda[g]$ of the form $M[g]$ where $M\subseteq V$ and $\langle \mathfrak{X}_\xi\mid \xi<\omega_1\rangle, \mathbb{P}\in M[g]$ form a club. So by Theorem \ref{th:equivproper}, it suffices to find generic conditions only for such elementary substructures. Fix a countable $M[g]\prec H_\lambda[g]$ in $V[g]$ such that $\langle \mathfrak{X}_\xi\mid\xi<\omega_1\rangle, \mathbb{P}\in M[g]$ and $M\subseteq V$. 
We need to prove that for every $B\in M[g]\cap \mathcal{P}(\mathbb {N})^V/\mathrm{Fin}$, there exists $A\in \mathcal{P}(\mathbb {N})^V/\mathrm{Fin}$ such that $A\subseteq_\mathrm{Fin} B$ and $A$ is $M[g]$-generic in $V[g]$. By Lemma \ref{le:intersect}, $M[g]\cap \mathcal{P}^V(\mathbb {N})=\mathfrak{X}_\alpha$ where $\alpha=\text{Ord}^M\cap \omega_1$. Let $\mathcal D=\{\mathscr D\cap \mathfrak{X}_\alpha\mid \mathscr D\in M\text{ and }\mathscr D\text{ dense in }\mathcal{P}^V(\mathbb {N})/\mathrm{Fin}\}$. Observe that $\mathcal D\subseteq V$ and $|\mathcal D|=\omega$. Since $\mathbb{P}$ is c.c.c., we can show that there is $\mathcal D'\supseteq \mathcal D$ of size $\omega$ in $V$. In $V$, use $\mathcal D'$ and $\mathfrak{X}_\alpha$ to define $\mathcal E=\{\mathscr D\in \mathcal D'\mid \mathscr D\text{ dense in }\mathfrak{X}_\alpha\}$. It is clear that $\mathcal D\subseteq \mathcal E$. By the countable closure of $\mathcal{P}(\mathbb {N})^V/\mathrm{Fin}$ in $V$, we can find an infinite $A\subseteq_\mathrm{Fin} B$ such that every $\mathscr D\in \mathcal E$ contains some $C$ above $A$. It follows that $A$ is $M$-generic. In fact, I will show that $A$ is $M[g]$-generic. To verify this, we need to check that whenever $A\in G$ and $G\subseteq \mathcal{P}(\mathbb {N})^V/\mathrm{Fin}$ is $V[g]$-generic, then $M[g][G]\cap \text{Ord}=M[g]\cap \text{Ord}$. Since we are forcing with $\mathbb{P}\times\mathcal{P}(\mathbb {N})/\mathrm{Fin}$, we have $M[g][G]=M[G][g]$. It is clear that $M\cap \text{Ord}=M[g]\cap \text{Ord}$, and so it remains to show that $M[G][g]\cap \text{Ord}=M\cap \text{Ord}$. Since $A\in G$ and $A$ is $M$-generic, we have that $M[G]\cap \text{Ord}=M\cap \text{Ord}$. The poset $\mathbb{P}$ remains c.c.c.\ in $V[G]$ by Lemma \ref{le:count} since $\mathcal{P}(\mathbb {N})^V/\mathrm{Fin}$ is countably closed. Also we have $M[G]\prec H_\lambda[G]$, even though $M[G]$ itself may not be an element of $V[G]$. Let $\mathscr A$ be a maximal antichain of $\mathbb{P}$ in $M[G]$, then $\mathscr A\in H_\lambda[G]$, and hence $\mathscr A$ has size $\omega$. It follows that $\mathscr A\subseteq M[G]$. Since $g$ is $V[G]$-generic, it must meet $\mathscr A$. So $g$ is $M[G]$-generic, and hence $M[G][g]\cap \text{Ord}=M[G]\cap \text{Ord}$. \end{proof} It follows that it is consistent that there are uncountable proper families other than $\mathcal{P}(\mathbb {N})$. Start in any universe satisfying {\rm CH} and force to add a Cohen real. In the resulting generic extension, the reals of $V$ will be an uncountable proper family. Next, I will show how to force the existence of many proper families. I will begin by looking at what properness translates into in this specific context. \begin{prop}\label{prop:proper} Suppose $\mathfrak{X}$ is a family of reals and $\mathscr A$ is a countable antichain of $\mathfrak{X}/\mathrm{Fin}$. Then for $B\in \mathfrak{X}$: \begin{itemize} \item[(1)] Every $V$-generic filter $G\subseteq \mathfrak{X}/\mathrm{Fin}$ containing $B$ meets $\mathscr A$. \item[(2)] There exists a finite list $A_0,\ldots, A_n\in \mathscr A$ such that $B\subseteq_\mathrm{Fin} A_0\cup\ldots\cup A_n$. \end{itemize} \end{prop} \begin{proof}$\,$\\ (2)$\Longrightarrow$(1): Suppose $B\subseteq_\mathrm{Fin} A_0\cup\ldots\cup A_n$ for some $A_0,\ldots, A_n\in \mathscr A$. 
Since a $V$-generic filter $G$ is an ultrafilter, one of the $A_i$ must be in $G$.\\ (1)$\Longrightarrow$(2): Assume that every $V$-generic filter $G$ containing $B$ meets $\mathscr A$ and suppose toward a contradiction that $(2)$ does not hold. Enumerate $\mathscr A=\{A_0,A_1,\ldots, A_n,\ldots\}$. It follows that for all $n\in\mathbb {N}$, the intersection $B\cap (\mathbb {N}-A_0)\cap\cdots\cap(\mathbb {N}-A_n)$ is infinite. Define $C=\{c_n\mid n\in\mathbb {N}\}$ such that $c_0$ is the least element of $B\cap(\mathbb {N}-A_0)$ and $c_{n+1}$ is the least element of $B\cap(\mathbb {N}-A_0)\cap\cdots\cap (\mathbb {N}-A_{n+1})$ greater than $c_n$. Clearly $C\subseteq B$ and $C\subseteq_\mathrm{Fin} (\mathbb {N}-A_n)$ for all $n\in \mathbb {N}$. Let $G$ be a $V$-generic filter containing $C$, then $B\in G$ and $(\mathbb {N}-A_n)\in G$ for all $n\in\mathbb {N}$. But this contradicts our assumption that $G$ meets $\mathscr A$. \end{proof} \begin{corollary}\label{cor:properequiv} A family of reals $\mathfrak{X}$ is proper if and only if there exists $\lambda>2^{|\mathfrak{X}|}$ such that for every countable $M\prec H_\lambda$ containing $\mathfrak{X}$, whenever $C\in M\cap \mathfrak{X}/\mathrm{Fin}$, then there is $B\subseteq_\mathrm{Fin} C$ in $\mathfrak{X}/\mathrm{Fin}$ such that for every maximal antichain $\mathscr A\in M$ of $\mathfrak{X}/\mathrm{Fin}$, there are $A_0,\ldots, A_n\in\mathscr A\cap M$ with $B\subseteq_\mathrm{Fin} A_0\cup\cdots\cup A_n$. \end{corollary} \begin{proof}$\,$\\ ($\Longrightarrow$): Suppose $\mathfrak{X}$ is proper. Then there is $\lambda>2^{|\mathfrak{X}|}$ such that for every countable $M\prec H_\lambda$ containing $\mathfrak{X}$ and every $C\in M\cap \mathfrak{X}/\mathrm{Fin}$, there is an $M$-generic $B\subseteq_\mathrm{Fin} C$ in $\mathfrak{X}/\mathrm{Fin}$. Fix a countable $M\prec H_\lambda$ containing $\mathfrak{X}$ and $C\in M\cap \mathfrak{X}/\mathrm{Fin}$. Let $B\subseteq_\mathrm{Fin} C$ be $M$-generic. Thus, every $V$-generic filter containing $B$ must meet $\mathscr A\cap M$ for every maximal antichain $\mathscr A\in M$ of $\mathfrak{X}/\mathrm{Fin}$. But since $\mathscr A\cap M$ is countable, by Proposition \ref{prop:proper}, there exist $A_0,\ldots,A_n\in \mathscr A\cap M$ such that $B\subseteq_\mathrm{Fin} A_0\cup\cdots\cup A_n$.\\ ($\Longleftarrow$): Suppose that there is $\lambda>2^{|\mathfrak{X}|}$ such that for every countable \hbox{$M\prec H_\lambda$} containing $\mathfrak{X}$, whenever $C\in M\cap \mathfrak{X}/\mathrm{Fin}$, then there is $B\subseteq_\mathrm{Fin} C$ in $\mathfrak{X}/\mathrm{Fin}$ such that for every maximal antichain $\mathscr A\in M$ of $\mathfrak{X}/\mathrm{Fin}$, there are $A_0,\ldots, A_n\in\mathscr A\cap M$ with $B\subseteq_\mathrm{Fin} A_0\cup\cdots\cup A_n$. Fix a countable $M\prec H_\lambda$ with $\mathfrak{X}\in M$ and $C\in M\cap \mathfrak{X}/\mathrm{Fin}$. Let $B\subseteq_\mathrm{Fin} C$ be as above. By Proposition \ref{prop:proper}, every $V$-generic filter $G$ containing $B$ must meet $\mathscr A\cap M$ for every maximal antichain $\mathscr A\in M$. Thus, $B$ is $M$-generic. Since $M$ was arbitrary, we can conclude that $\mathfrak{X}$ is proper. \end{proof} The hypothesis of Corollary \ref{cor:properequiv} can be weakened, by Theorem \ref{th:equivproper}, to finding for some $H_\lambda$, only a club of countable $M$ having the desired property. The next definition is key to all the remaining arguments in the paper. 
\begin{definition} Let $\mathfrak{X}$ be a countable family of reals, let $\mathcal D$ be some collection of dense subsets of $\mathfrak{X}/\mathrm{Fin}$, and let $B\in \mathfrak{X}$. We say that an infinite set $A\subseteq\mathbb {N}$ is $\langle \mathfrak{X},\mathcal D\rangle$\emph{-generic below} $B$ if $A\subseteq_\mathrm{Fin} B$ and for every $\mathscr D\in\mathcal D$, there is $C\in \mathscr D$ such that $A\subseteq_\mathrm{Fin} C$. \end{definition} Here one should think of the context of having some large family $\mathfrak{Y}\in M\prec H_\lambda$ for a countable $M$, $\mathfrak{X}=\mathfrak{Y}\cap M$, and $\mathcal D=\{\mathscr D\cap M\mid \mathscr D\in M\text{ and }\mathscr D\text{ dense in }\mathfrak{Y}/\mathrm{Fin}\}$. We think of $A$ as coming from the large family $\mathfrak{Y}$ and the requirement for $A$ to be $\langle \mathfrak{X},\mathcal D\rangle$-generic is a strengthening of the requirement to be $M$-generic. \begin{lemma}\label{f:gen_elt} Let $\mathfrak{X}$ be a countable family. Assume that $B\in\mathfrak{X}/\mathrm{Fin}$ and $G\subseteq \mathfrak{X}/\mathrm{Fin}$ is a $V$-generic filter containing $B$. Then in $V[G]$, there is an infinite $A\subseteq\mathbb {N}$ such that $A\subseteq_\mathrm{Fin} C$ for all $C\in G$. Furthermore, if $\mathcal D$ is the collection of dense subsets of $\mathfrak{X}/\mathrm{Fin}$ of $V$, then such an $A$ is $\langle \mathfrak{X},\mathcal D\rangle$-generic below $B$. \end{lemma} \begin{proof} Since $G$ is countable and directed in $V[G]$, there exists an infinite $A\subseteq\mathbb {N}$ such that $A\subseteq_\mathrm{Fin} C$ for all $C\in G$. For the ``furthermore" part, fix a dense subset $\mathscr D$ of $\mathfrak{X}/\mathrm{Fin}$ in $V$. Since there is $C\in G\cap \mathscr D$, we have $A\subseteq_\mathrm{Fin} C$. It is clear that $A\subseteq_\mathrm{Fin} B$ since $B\in G$. \end{proof} \begin{lemma}\label{le:gen_elt} Let $\mathfrak{X}_0\subseteq\mathfrak{X}_1\subseteq\cdots\subseteq \mathfrak{X}_\xi\subseteq\cdots$ for $\xi<\omega_1$ be a continuous chain of countable families of reals and let $\mathfrak{X}=\cup_{\xi<\omega_1} \mathfrak{X}_\xi$. Assume that for every $\xi<\omega_1$, if $B\in \mathfrak{X}_\xi$ and $\mathcal D$ is a countable collection of dense subsets of $\mathfrak{X}_\xi$, there is $A\in \mathfrak{X}/\mathrm{Fin}$ that is $\langle \mathfrak{X}_\xi,\mathcal D\rangle$-generic below $B$. Then $\mathfrak{X}$ is proper. \end{lemma} \begin{proof} Fix a countable $M\prec H_\lambda$ such that $\langle \mathfrak{X}_\xi\mid \xi<\omega_1\rangle\in M$. It suffices to show that generic conditions exist for such $M$ since these form a club. By Lemma \ref{le:intersect}, $\mathfrak{X}\cap M=\mathfrak{X}_\alpha$ where $\alpha=\text{Ord}^M\cap \omega_1$. Fix $B\in\mathfrak{X}_\alpha$ and let $\mathcal D=\{\mathscr D\cap M\mid \mathscr D\in M \text{ and }\mathscr D \text{ dense in }\mathfrak{X}/\mathrm{Fin}\}$. By hypothesis, there is $A\in\mathfrak{X}/\mathrm{Fin}$ that is $\langle\mathfrak{X}_\alpha,\mathcal D\rangle$-generic below $B$. Clearly $A$ is $M$-generic. Thus, we were able to find an $M$-generic element below every $B\in M\cap \mathfrak{X}/\mathrm{Fin}$. \end{proof} We are finally ready to show how to force the existence of a proper family of size $\omega_1$. \begin{theorem}\label{th:forcing} There is a generic extension of $V$ by a c.c.c.\ poset that satisfies $\neg{\rm CH}$ and contains a proper family of reals of size $\omega_1$. 
\end{theorem} \begin{proof} First, note that we can assume without loss of generality that $V\models \neg{\rm CH}$ since this is forceable by a c.c.c.\ forcing. The forcing to add a proper family of reals will be a c.c.c.\ finite support iteration $\mathbb{P}$ of length $\omega_1$. The iteration $\mathbb{P}$ will add, step-by-step, a continuous chain $\mathfrak{X}_0\subseteq\mathfrak{X}_1\subseteq\cdots\subseteq\mathfrak{X}_\xi\subseteq\cdots$ for $\xi<\omega_1$ of countable arithmetically closed families such that $\cup_{\xi<\omega_1} \mathfrak{X}_\xi$ will have the property of Lemma \ref{le:gen_elt}. The idea will be to obtain generic elements for $\mathfrak{X}_\xi$, as in Lemma \ref{f:gen_elt}, by adding generic filters. Once $\mathfrak{X}_\xi$ has been constructed, I will force over $\mathfrak{X}_\xi/\mathrm{Fin}$ below every one of its elements cofinally often before the iteration is over. Every time such a forcing is done, I will obtain a generic element for a new collection of dense sets. This element will be added to $\mathfrak{X}_{\delta+1}$ where $\delta$ is the stage at which the forcing was done. Fix a bookkeeping function $f$ mapping $\omega_1$ onto $\omega_1\times\omega$, having the properties that every pair $\langle \alpha, n\rangle$ appears cofinally often in the range and if $f(\xi)=\langle \alpha, n\rangle$, then $\alpha\leq \xi$. Let $\mathfrak{X}_0$ be any countable arithmetically closed family and fix an enumeration $\mathfrak{X}_0=\{B_0^0,B_1^0,\ldots, B_n^0\ldots\}$. Each subsequent $\mathfrak{X}_\xi$ will be created in $V^{\mathbb{P}_\xi}$. Suppose $\lambda$ is a limit and $G_\lambda$ is generic for $\mathbb{P}_\lambda$. In $V[G_\lambda]$, define $\mathfrak{X}_\lambda=\cup_{\xi<\lambda} \mathfrak{X}_\xi$ and fix an enumeration $\mathfrak{X}_\lambda=\{B_0^\lambda,B_1^\lambda,\ldots,B_n^\lambda,\ldots\}$. Consult $f(\lambda)=\langle \xi,n\rangle$ and define $\dot{\mathbb{Q}}_\lambda=\mathfrak{X}_\xi/\mathrm{Fin}$ below $B_n^\xi$. Suppose $\delta=\beta+1$, then $\mathbb{P}_\delta=\mathbb{P}_\beta*\dot{\mathbb{Q}}_\beta$ where $\dot{\mathbb{Q}}_\beta$ is $\mathfrak{X}_\xi/\mathrm{Fin}$ for some $\xi\leq\beta$ below one of its elements. In $V[G_\delta]=V[G_\beta][H]$, let $A\subseteq_\mathrm{Fin} B$ for all $B\in H$ and define $\mathfrak{X}_\delta$ to be the arithmetic closure of $\mathfrak{X}_\beta$ and $A$. Also in $V[G_\delta]$, fix an enumeration $\mathfrak{X}_\delta=\{B_0^\delta,B_1^\delta,\ldots,B_n^\delta,\ldots\}$. Consult $f(\delta)=\langle \xi,n\rangle$ and define $\dot{\mathbb{Q}}_\delta=\mathfrak{X}_\xi/\mathrm{Fin}$ below $B_n^\xi$. At limits, use finite support. The poset $\mathbb{P}$ is c.c.c.\ since it is a finite support iteration of c.c.c.\ posets (see \cite{jech:settheory}, p.\ 271). Let $G$ be $V$-generic for $\mathbb{P}$. It should be clear that we can use $G$ in $V[G]$ to construct an arithmetically closed Scott set $\mathfrak{X}=\cup_{\xi<\omega_1}\mathfrak{X}_\xi$. A standard nice name counting argument shows that $(2^{\omega})^V=(2^{\omega})^{V[G]}$. Since we assumed at the beginning that $V\models \neg{\rm CH}$, it follows that $V[G]\models\neg{\rm CH}$. Finally, we must see that $\mathfrak{X}$ satisfies the hypothesis of Lemma \ref{le:gen_elt} in $V[G]$. Fix $\mathfrak{X}_\xi$, a set $B\in X_\xi$, and a countable collection $\mathcal D$ of dense subsets of $\mathfrak{X}_\xi/\mathrm{Fin}$. 
Since the poset $\mathbb{P}$ is a finite support c.c.c.\ iteration and all elements of $\mathcal D$ are countable, they must appear at some stage $\alpha$ below $\omega_1$. Since we force with $\mathfrak{X}_\xi/\mathrm{Fin}$ below $B$ cofinally often, we have added a $\langle \mathfrak{X}_\xi, \mathcal D\rangle$-generic condition below $B$ at some stage above $\alpha$. \end{proof} \begin{corollary} There is a generic extension of $V$ that satisfies {\rm CH} and contains a proper family of reals of size $\omega_1$ other than $\mathcal{P}(\mathbb {N})$. \end{corollary} \begin{proof} As before, we can assume without loss of generality that $V\models \neg{\rm CH}$. Force with $\mathbb{P}*\dot{\mathbb{Q}}$ where $\mathbb{P}$ is the forcing iteration from Theorem \ref{th:forcing} and $\mathbb{Q}$ is the poset which adds a subset to $\omega_1$ with countable conditions. Let $G*H$ be $V$-generic for $\mathbb{P}*\dot{\mathbb{Q}}$, then clearly {\rm CH} holds in $V[G][H]$. Also the family $\mathfrak{X}$ created from $G$ remains proper in $V[G][H]$ since $\mathbb{Q}$ is a countably closed forcing, and therefore cannot affect the properness of a family of reals. \end{proof} We can push this argument further to show that it is consistent with {\rm ZFC} that there are \emph{continuum} many proper families of reals of size $\omega_1$. \begin{theorem}\label{th:continuum} There is a generic extension of $V$ by a c.c.c.\ poset that satisfies $\neg{\rm CH}$ and contains continuum many proper families of reals of size $\omega_1$. \end{theorem} \begin{proof} We start by forcing ${\rm MA}+\neg{\rm CH}$. Since this can be done by a c.c.c.\ forcing notion (\cite{jech:settheory}, \hbox{p. 272}), we can assume without loss of generality that $V\models{\rm MA}+\neg{\rm CH}$. Define a finite support product $\mathbb{Q}=\Pi_{\xi<2^\omega}\mathbb{P}^\xi$ where every $\mathbb{P}^\xi$ is an iteration of length $\omega_1$ as described in Theorem \ref{th:forcing}. Since Martin's Axiom implies that finite support products of c.c.c.\ posets are c.c.c.\ (see \cite{jech:settheory}, \hbox{p. 277}), the product poset $\mathbb{Q}$ is c.c.c.. Let $G\subseteq \mathbb{Q}$ be $V$-generic, then each $G^\xi=G\upharpoonright \mathbb{P}^\xi$ together with $\mathbb{P}^\xi$ can be used to build an arithmetically closed family $\mathfrak{X}^\xi$ as described in Theorem \ref{th:forcing}. Each such $\mathfrak{X}^\xi$ will be the union of an increasing chain of countable arithmetically closed families $\mathfrak{X}^\xi_\gamma$ for $\gamma<\omega_1$. First, I claim that all $\mathfrak{X}^\xi$ are distinct. Fixing $\alpha<\beta$, I will show that $\mathfrak{X}^\alpha\neq\mathfrak{X}^\beta$. Consider $V[G\upharpoonright \beta+1]=V[G\upharpoonright \beta][G^\beta]$ a generic extension by $(\mathbb{Q}\upharpoonright\beta)\times \mathbb{P}^\beta$. Observe that $\mathfrak{X}^\alpha$ already exists in $V[G\upharpoonright\beta]$. Recall that to build $\mathfrak{X}^\beta$, we start with an arithmetically closed countable family $\mathfrak{X}_0^\beta$ and let the first poset in the iteration $\mathbb{P}^\beta$ be $\mathfrak{X}_0^\beta/\mathrm{Fin}$. Let $g$ be the generic filter for $\mathfrak{X}_0^\beta/\mathrm{Fin}$ definable from $G^\beta$. The next step in constructing $\mathfrak{X}^\beta$ is to pick $A\subseteq\mathbb {N}$ such that $A\subseteq_\mathrm{Fin} B$ for all $B\in g$ and define $\mathfrak{X}_1^\beta$ to be the arithmetic closure of $\mathfrak{X}_0^\beta$ and $A$. It should be clear that $g$ is definable from $A$ and $\mathfrak{X}_0^\beta$. 
Since $g$ is $V[G\upharpoonright\beta]$-generic, it follows that $g\notin V[G\upharpoonright\beta]$. Thus, $A\notin V[G\upharpoonright\beta]$, and hence $\mathfrak{X}^\beta\neq\mathfrak{X}^\alpha$. It remains to show that each $\mathfrak{X}^\alpha$ is proper in $V[G]$. Fix $\alpha<2^\omega$ and let $V[G]=V[G\upharpoonright\alpha][G^\alpha][G_\text{tail}]$ where $G_\text{tail}$ is the generic for $\mathbb{Q}$ above $\alpha$. By the commutativity of products, $V[G\upharpoonright\alpha][G^\alpha][G_\text{tail}]= V[G\upharpoonright\alpha][G_\text{tail}][G^\alpha]$ and $G^\alpha$ is $V[G\upharpoonright\alpha][G_\text{tail}]$-generic. Fix a countable $M\prec H_\lambda^{V[G]}$ containing the sequence $\langle \mathfrak{X}_\xi^\alpha\mid \xi<\omega_1\rangle$ as an element. By Lemma \ref{le:intersect}, $M\cap \mathfrak{X}^\alpha$ is some $\mathfrak{X}_\gamma^\alpha$. This is the key step of the proof since it allows us to know exactly what $M\cap \mathfrak{X}^\alpha$ is, even though we know nothing about $M$. Let $G_\xi^\alpha=G^\alpha\upharpoonright \mathbb{P}_\xi^\alpha$ for $\xi<\omega_1$. Let $\mathcal D=\{\mathscr D\cap M\mid \mathscr D\in M\text{ and }\mathscr D \text{ dense in }\mathfrak{X}^\alpha/\mathrm{Fin}\}$. There must be some $\beta<\omega_1$ such that $\mathcal D\in V[G\upharpoonright\alpha][G_\text{tail}][G_\beta^\alpha]$. By construction, there must be some stage $\delta>\beta$ at which we forced with $\mathfrak{X}_\gamma^\alpha/\mathrm{Fin}$ and added a set $A$ such that $A\subseteq_\mathrm{Fin} B$ for all $B\in H$ where $G_{\delta+1}^\alpha=G_\delta^\alpha *H$. Now observe that $H$ is $V[G\upharpoonright\alpha][G_\text{tail}] [G^\alpha_\delta]$-generic for $\mathfrak{X}_\gamma^\alpha/\mathrm{Fin}$. Therefore $H$ meets all the sets in $\mathcal D$. So we can conclude that $A$ is $M$-generic. A standard nice name counting argument will again show that $(2^\omega)^V=(2^\omega)^{V[G]}$. Thus, $V[G]$ satisfies $\neg{\rm CH}$ and contains continuum many proper families of reals of size $\omega_1$. \end{proof} Similar techniques allow us to force the existence of a piecewise proper family of reals of size $\omega_2$. \begin{lemma}\label{le:piecewise} Let $\mathfrak{X}_0\subseteq\mathfrak{X}_1\subseteq\cdots\subseteq \mathfrak{X}_\xi\subseteq\cdots$ for $\xi<\omega_1$ be a continuous chain of countable families of reals and let $\mathfrak{X}=\cup_{\xi<\omega_1} \mathfrak{X}_\xi$. Assume that for every $\xi<\omega_1$, if $B\in \mathfrak{X}_\xi$ and $\mathcal D$ is a countable collection of dense subsets of $\mathfrak{X}_\xi$, there is $A\in \mathfrak{X}/\mathrm{Fin}$ that is $\langle \mathfrak{X}_\xi,\mathcal D\rangle$-generic below $B$. Then $\mathfrak{X}$ is proper and $\mathfrak{X}$ remains proper after forcing with any absolutely c.c.c.\ poset. \end{lemma} \begin{proof} The proof is a straightforward modification of the proof of Theorem \ref{th:oldreals}. Let $\mathbb{P}$ be an absolutely c.c.c.\ poset and $g\subseteq \mathbb{P}$ be $V$-generic. We need to show that $\mathfrak{X}$ is proper in $V[g]$. Fix a countable $M[g]\prec H_\lambda [g]$ in $V[g]$ such that $\langle \mathfrak{X}_\xi:\xi<\omega_1\rangle, \mathbb{P}\in M[g]$ and $M\subseteq V$. Let $\mathfrak{X}_\alpha=M\cap \mathfrak{X}$ and let $\mathcal D=\{\mathscr D\cap \mathfrak{X}_\xi\mid\mathscr D\in M\text{ and }\mathscr D\text{ dense in }\mathfrak{X}\}$. Observe that $\mathcal D\subseteq V$ and $|\mathcal D|=\omega$. Define $\mathcal E$ as in proof of Theorem \ref{th:oldreals}. 
Now choose $A\in \mathfrak{X}$ that is $\langle \mathfrak{X}_\alpha,\mathcal E\rangle$-generic in $V$. It follows that $A$ is $M$-generic. Next proceed exactly as in the proof of Theorem \ref{th:oldreals}, using the fact that $\mathbb{P}$ is absolutely c.c.c.\ in the final stage of the argument. \end{proof} \begin{theorem}\label{th:piecewise} There is a generic extension of $V$ by a c.c.c.\ poset which contains a piecewise proper family of reals of size $\omega_2$. \end{theorem} \begin{proof} We will define a c.c.c.\ forcing iteration $\mathbb{Q}_{\omega_2}$ of length $\omega_2$ to accomplish this. Let $\mathbb{Q}_0$ be the forcing to add a proper family $\mathfrak{X}_0$ of size $\omega_1$ (Theorem \ref{th:forcing}). At the $\alpha^{\text{th}}$-stage, force with the poset to add a proper family $\mathfrak{X}_{\alpha}\supseteq \cup_{\beta<\alpha}\mathfrak{X}_\beta$. Observe here, that the poset from Theorem \ref{th:forcing} can be very easily modified to the poset which adds a proper family extending any family of reals from the ground model. Let $G\subseteq \mathbb{Q}_{\omega_2}$ be $V$-generic. I claim each $\mathfrak{X}_\alpha$ remains proper in $V[G]$. Fix $\mathfrak{X}_\alpha$ and factor the forcing $\mathbb{Q}_{\omega_2}=\mathbb{Q}_\alpha*\mathbb{Q}_\text{tail}$. The family $\mathfrak{X}_\alpha$ is proper in $V[G_\alpha]$ and $\mathbb{Q}_\text{tail}$ is absolutely c.c.c.\ in $V[G_\alpha]$. The poset $\mathbb{Q}_\text{tail}$ is absolutely c.c.c\ since the forcing to add a proper family is a finite support iteration of countable posets. Thus, by Lemma \ref{le:piecewise}, $\mathfrak{X}_\alpha$ remains proper in $V[G_\alpha][G_\text{tail}]$. Thus, $\mathfrak{X}=\cup_{\alpha<\omega_2} \mathfrak{X}_\alpha$ is clearly piecewise proper. \end{proof} By exactly following the proof of Theorem \ref{th:continuum}, we can extend Theorem \ref{th:piecewise} to obtain: \begin{theorem} There is a generic extension of $V$ by a c.c.c.\ poset which contains continuum many piecewise proper families of reals of size $\omega_2$. \end{theorem} By Enayat's \cite{enayat:endextensions} example of a non-proper arithmetically closed family of size $\omega_1$, we know that there are piecewise proper families that are not proper. This follows by recalling that arithmetically closed families of size $\omega_1$ are trivially piecewise proper. It is not clear whether every proper family has to be piecewise proper. In particular, it is not known whether $\mathcal{P}(\mathbb {N})$ is piecewise proper. It follows that $\mathcal{P}(\mathbb {N})$ \emph{can be} piecewise proper from the proof of Theorem \ref{th:piecewise} since we can modify the construction to end up with $\mathfrak{X}=\mathcal{P}(\mathbb {N})$. Finally, I will discuss a possible construction for proper families under {\rm PFA}. The idea is, in some sense, to mimic the forcing iteration like that of Theorem \ref{th:forcing} in the ground model. Unfortunately, the main problem with the construction is that it is not clear whether we are getting the whole $\mathcal{P}(\mathbb {N})$. This problem never arose in the forcing construction since we were building families of size $\omega_1$ and knew that the continuum was larger than $\omega_1$. I will describe the construction and a possible way of ensuring that the resulting family is not $\mathcal{P}(\mathbb {N})$. Fix an enumeration $\{\langle A_\xi,B_\xi\rangle\mid \xi<\omega_2\}$ of $\mathcal{P}(\omega)\times \mathcal{P}(\omega)$. 
Also fix a bookkeeping function $f$ from $\omega_2$ onto $\omega_2$ such that each element appears cofinally in the range. I will build a family $\mathfrak{X}$ of size $\omega_2$ as the union of an increasing chain of arithmetically closed families $\mathfrak{X}_\xi$ for $\xi<\omega_2$. Start with any arithmetically closed family $\mathfrak{X}_0$ of size $\omega_1$. Suppose we have constructed $\mathfrak{X}_\beta$ for $\beta\leq\alpha$ and we need to construct $\mathfrak{X}_{\alpha+1}$. Consult $f(\alpha)=\gamma$ and consider the pair $\langle A_\gamma,B_\gamma\rangle$ in the enumeration of $\mathcal{P}(\omega)\times \mathcal{P}(\omega)$. First, suppose that $A_\gamma$ codes a countable family $\mathfrak{Y}\subseteq \mathfrak{X}_\alpha$ and $B_\gamma$ codes a countable collection $\mathcal D$ of dense subsets of $\mathfrak{Y}$. Let $G$ be some filter on $\mathfrak{Y}$ meeting all sets in $\mathcal D$ and let $A\subseteq_\mathrm{Fin} C$ for all $C\in G$. Define $\mathfrak{X}_{\alpha+1}$ to be the arithmetic closure of $\mathfrak{X}_\alpha$ and $A$. If the pair $\langle A_\gamma,B_\gamma\rangle $ does not code such information, let $\mathfrak{X}_{\alpha+1}=\mathfrak{X}_\alpha$. At limit stages take unions. I claim that $\mathfrak{X}$ is proper. Fix some countable $M\prec H_\lambda$ containing $\mathfrak{X}$. Let $\mathfrak{Y}=M\cap \mathfrak{X}$ and let $\mathcal D=\{\mathscr D\cap M\mid \mathscr D\in M\text{ and }\mathscr D \text{ dense in }\mathfrak{X}/\mathrm{Fin} \}$. There must be some $\gamma$ such that $\langle A_\gamma, B_\gamma\rangle$ codes $\mathfrak{Y}$ and $\mathcal D$. Let $\delta$ such that $M\cap \mathfrak{X}$ is contained in $\mathfrak{X}_\delta$, then there must be some $\alpha>\delta$ such that $f(\alpha)=\gamma$. Thus, at stage $\alpha$ in the construction we considered the pair $\langle A_\gamma,B_\gamma\rangle$. Since $\alpha>\delta$, we have $M\cap \mathfrak{X}=M\cap \mathfrak{X}_\alpha$. It follows that at stage $\alpha$ we added an $M$-generic set $A$ to $\mathfrak{X}$. A way to prove that $\mathfrak{X}\neq \mathcal{P}(\mathbb {N})$ would be to show that some fixed set $C$ is not in $\mathfrak{X}$. Suppose the following question had a positive answer: \begin{question}\label{con:proper} Let $\mathfrak{X}$ be an arithmetically closed family such that $C\notin \mathfrak{X}$ and $\mathfrak{Y}\subseteq \mathfrak{X}$ be a countable family. Is there a $\mathfrak{Y}/\mathrm{Fin}$-name $\dot{A}$ such that $1_{\mathfrak{Y}/\mathrm{Fin}}\Vdash `` \dot{A}\subseteq_\mathrm{Fin} B\text{ for all }B\in \dot{G}\text{ and }\check{C}$ is not in the arithmetic closure of $\dot{A}\text{ and }\check{\mathfrak{X}}$"? \end{question} Assuming that the answer to Question \ref{con:proper} is positive, let us construct a proper family $\mathfrak{X}$ in such a way that $C$ is not in $\mathfrak{X}$. We will carry out the above construction being careful in our choice of the filters $G$ and elements $A$. Start with $\mathfrak{X}_0$ that does not contain $C$ and assume that $C\notin \mathfrak{X}_\alpha$. Suppose the pair $\langle A_\gamma,B_\gamma\rangle$ considered at stage $\alpha$ codes meaningful information. That is, $A_\gamma$ codes a countable family $\mathfrak{Y}\subseteq \mathfrak{X}_\alpha$ and $B_\gamma$ codes a countable collection $\mathcal D$ of dense subsets of $\mathfrak{Y}$. Choose some transitive $N\prec H_{\omega_2}$ of size $\omega_1$ such that $\mathfrak{X}_\alpha$, $\mathfrak{Y}$, and $\mathcal D$ are elements of $N$. 
Since we assumed a positive answer to Question \ref{con:proper}, $H_{\omega_2}$ satisfies that there exists a $\mathfrak{Y}/\mathrm{Fin}$-name $\dot{A}$ such that $1_{\mathfrak{Y}/\mathrm{Fin}}\Vdash ``\dot{A}\subseteq_\mathrm{Fin} B\text{ for all }B\in \dot{G}\text{ and }\check{C}$ is not in the arithmetic closure of $\dot{A}\text{ and }\check{\mathfrak{X}_\alpha}$". But then $N$ satisfies the same statement by elementarity. Hence there is $\dot{A}\in N$ such that $N$ satisfies $1_{\mathfrak{Y}/\mathrm{Fin}}\Vdash ``\dot{A}\subseteq_\mathrm{Fin} B\text{ for all }B\in \dot{G}\text{ and }\check{C}$ is not in the arithmetic closure of $\dot{A}\text{ and}$ $\check{\mathfrak{X}_\alpha}$". Now use {\rm PFA} to find an $N$-generic filter $G$ for $\mathfrak{Y}/\mathrm{Fin}$. Since $G$ is fully generic for the model $N$, the model $N[G]$ will satisfy that $C$ is not in the arithmetic closure of $\mathfrak{X}_\alpha$ and $A=\dot{A}_G$. Thus, it is really true that $C$ is not in the arithmetic closure of $\mathfrak{X}_\alpha$ and $A$. Since $G$ also met all the dense sets in $\mathcal D$ and $A\subseteq_\mathrm{Fin} B$ for all $B\in G$, we can let $\mathfrak{X}_{\alpha+1}$ be the arithmetic closure of $\mathfrak{X}_\alpha$ and $A$. Thus, $C\notin \mathfrak{X}_{\alpha+1}$. We can conclude that $C\notin\mathfrak{X}$. \section{Questions} \begin{question} Can {\rm ZFC} or {\rm ZFC} + {\rm PFA} prove the existence of an uncountable proper family of reals other than $\mathcal{P}(\mathbb {N})$? \end{question} \begin{question} Can {\rm ZFC} or {\rm ZFC} + {\rm PFA} prove the existence of a piecewise proper family of size $\omega_2$? \end{question} \begin{question} Is it consistent with {\rm ZFC} that there are proper families of reals of size $\omega_2$ other than $\mathcal{P}(\mathbb {N})$? \end{question} \begin{question} What is the answer to Question \ref{con:proper}? \end{question} \begin{question} Can $\mathcal{P}(\mathbb {N})$ be non-piecewise proper? \end{question} \end{document}
\begin{document} \title[Compositional Reasoning for Channel-Based Concurrent Resource Management]{Compositional Reasoning for Explicit Resource Management in Channel-Based Concurrency\rsuper*} \author[A.~Francalanza]{Adrian Francalanza\rsuper a} \address{{\lsuper a}ICT, University of Malta} \email{[email protected]} \author[E.~DeVries]{Edsko DeVries\rsuper b} \address{{\lsuper b}Well-Typed LLP, UK} \email{[email protected]} \author[M.~Hennessy]{Matthew Hennessy\rsuper c} \address{{\lsuper c}Trinity College Dublin, Ireland} \email{[email protected]} \thanks{{\lsuper c}Supported by SFI project SFI 06 IN.1 1898.} \keywords{\pic, concurrency, memory management, coinductive reasoning} \titlecomment{{\lsuper*}An extended abstract of a preliminary version of the paper has appeared in \cite{DevFraHen09}} \maketitle \begin{abstract} We define a \pic variant with a costed semantics where channels are treated as resources that must explicitly be allocated before they are used and can be deallocated when no longer required. We use a substructural type system tracking permission transfer to construct coinductive proof techniques for comparing behaviour and resource usage efficiency of concurrent processes. We establish full abstraction results between our coinductive definitions and a contextual behavioural preorder describing a notion of process efficiency \wrt its management of resources. We also justify these definitions and respective proof techniques through numerous examples and a case study comparing two concurrent implementations of an extensible buffer. \end{abstract} \section{Introduction} \label{sec:introduction} We investigate the \emph{behaviour} and \emph{space efficiency} of concurrent programs with \emph{explicit} \emph{resource-management}. In particular, our study focuses on \emph{channel-passing concurrent programs}: we define a \pic variant, called \picr, where the only resources available are channels; these channels must explicitly be allocated before they can be used, and can be deallocated when no longer required. As part of the operational model of the language, channel allocation and deallocation have costs associated with them, reflecting the respective resource usage. Explicit resource management is typically desirable in settings where resources are \emph{scarce}. Resource management programming constructs such as explicit deallocation provide fine-grained control over how these resources are used and recycled. By comparison, in automated mechanisms such as garbage collection, unused resources (in this case, memory) tend to remain longer in an unreclaimed state \cite{JonesGC96,GC2011}. Explicit resource management constructs such as memory deallocation also carry advantages over automated mechanisms such as garbage collection techniques when it comes to \emph{interactive} and \emph{real-time} programs \cite{Bulka:1999:ECP:320041,JonesGC96,GC2011}. In particular, garbage collection techniques require additional computation to determine otherwise explicit information as to which parts of the memory to reclaim and at what stage of the computation; the associated overheads may lead to uneven performance and intolerable pause periods where the system becomes unresponsive \cite{Bulka:1999:ECP:320041}. In the case of channel-passing concurrency with explicit memory-management, the analysis of the relative behaviour and efficiency of programs is non-trivial for a number of reasons. 
Explicit memory-management introduces the risk of either premature or multiple deallocation of resources along separate threads of execution; these are more difficult to detect than in single-threaded programs and potentially result in problems such as wild pointers or corrupted heaps which may, in turn, lead to unpredictable, even catastrophic, behaviour \cite{JonesGC96,GC2011}. It also increases the possibility of memory leaks, which are often not noticeable in short-running, terminating programs but subtly eat up resources over the course of long-running programs. In a concurrent setting such as ours, complications relating to the assessment and comparison of resource consumption are further compounded by the fact that the runtime execution of channel-passing concurrent programs can have \emph{multiple interleavings}, is sometimes \emph{non-deterministic} and often \emph{non-terminating}. \subsection{Scenario:} \label{sec:scenario} Consider a setting with two servers, $\ptit{S}_1$ and $\ptit{S}_2$, which repeatedly listen for service requests on channels $\ctit{srv}_1$ and $\ctit{srv}_2$, respectively. Requests consist of sending a \emph{return} channel on $\ctit{srv}_1$ or $\ctit{srv}_2$, which is then used by the servers to service the requests and send back answers, $\textit{v}_1$ and $\textit{v}_2$. A possible implementation for these servers is given in \eqref{eq:3} below, where \piRecX{P} denotes a process $P$ recursing at $w$, $\piIn{\ctit{c}}{x}{P}$ denotes a process inputting on channel \ctit{c} some value that is bound to the variable $x$ in the continuation $P$, and $\piOut{\ctit{c}}{v}{P}$ outputs a value $v$ on channel \ctit{c} and continues as $P$: \begin{equation}\label{eq:3} \ptit{S}_i \deftri \piRecX{\;\piIn{\ctit{srv}_i}{x}{\; \piOut{x}{\textit{v}_i}}{\;w}} \qquad\qquad \text{for $i \in \sset{1,2}$} \end{equation} Clients that need to request service from \emph{both} servers, so as to report back the outcome of both server interactions on some channel, \ctit{ret}, can be programmed in a variety of ways: \begin{equation}\label{eq:clients} \begin{split} \ptit{C}_0 &\deftri \piRecX{\;\piAll{x_1}{\piAll{x_2}\;\piOut{\ctit{srv}_1}{x_1}\,\piIn{x_1}{y}{\;\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\;\piOut{\ctit{ret}}{(y,z)}{\;w}}}}} \\ \ptit{C}_1 &\deftri \piRecX{\,\piAll{x}{\;\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\;\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\piOut{\ctit{ret}}{(y,z)}{\;w}}}}} \\ \ptit{C}_2 &\deftri \piRecX{\piAll{x}{\;\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\;\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\;\piFree{x}{\;\piOut{\ctit{ret}}{(y,z)}{\;w}}}}}} \end{split} \end{equation} $\ptit{C}_0$ corresponds to an idiomatic \pic client. In order to ensure that it is the sole recipient of the service requests, it creates \emph{two} new return channels to communicate with $\ptit{S}_1$ and $\ptit{S}_2$ on $\ctit{srv}_1$ and $\ctit{srv}_2$, using the command $\piAll{x}{P}$; this command allocates a \emph{new} channel \ctit{c} and binds it to the variable $x$ in the continuation $P$. Allocating a new channel for each service request ensures that the return channel used between the client and server is \emph{private} for the duration of the service, preventing interference from other parties executing in parallel. One important difference between the computational model considered in this paper and that of the standard \pic is that channel allocation is an expensive operation, \ie it incurs an additional \emph{(spatial)} cost compared to the other operations. 
Client $\ptit{C}_1$ attempts to address the inefficiencies of $\ptit{C}_0$ by allocating only \emph{one} new channel per iteration, and \emph{reusing} this channel for both interactions with the servers. Intuitively, this channel reuse is valid, \ie it preserves the client-server behaviour $\ptit{C}_0$ had with servers $\ptit{S}_1$ and $\ptit{S}_2$, because the server implementations above use the received return-channels \emph{only once}. This single channel usage guarantees that return channels remain private for the duration of the service, despite the reuse from client $\ptit{C}_1$. Client $\ptit{C}_2$ attempts to be more efficient still. More precisely, since our computational model does not assume implicit resource reclamation, the previous two clients can be deemed to have \emph{memory leaks}: at every iteration of the client-server interaction sequence, $\ptit{C}_0$ and $\ptit{C}_1$ allocate new channels that are not disposed of, even though these channels are never used again in subsequent iterations. By contrast, $\ptit{C}_2$ deallocates unused channels at the end of each iteration using the construct $\piFree{\ctit{c}}{P}$. In this work we develop a formal framework for comparing the behaviour of concurrent processes that explicitly allocate and deallocate channels. For instance, processes consisting of the servers $\ptit{S}_1$ and $\ptit{S}_2$ together with any of the clients $\ptit{C}_0$, $\ptit{C}_1$ or $\ptit{C}_2$ should be \emph{related}, on the basis that they exhibit the same behaviour. In addition, we would like to \emph{order} these systems, based on their relative efficiencies \wrt the (channel) resources used. We note that there are various, at times contrasting, notions of efficiency that one may consider. For instance, one notion may consider acquiring memory for long periods to be less efficient than repeatedly allocating and deallocating memory; another notion of efficiency could instead focus on minimising the allocation and deallocation operations used, as these are considerably more expensive than other operations. In this work, we mainly focus on a notion of efficiency that accounts for the relative memory allocations required to carry out the necessary computations. Thus, we would intuitively like to develop a framework yielding the following preorder, where $\sqsubsetsim$ reads ``more efficient than'': \begin{equation} \label{eq:1a} \ptit{S}_1 \piParal \ptit{S}_2 \piParal \ptit{C}_2 \quad\sqsubsetsim\quad \ptit{S}_1 \piParal \ptit{S}_2 \piParal \ptit{C}_1 \quad\sqsubsetsim\quad \ptit{S}_1 \piParal \ptit{S}_2 \piParal \ptit{C}_0 \end{equation} A pleasing property of this preorder would be \emph{compositionality}, which implies that orderings are preserved under larger contexts, \ie for all (valid) contexts $\mathcal{C}[-]$, $P \,\sqsubsetsim\, Q$ implies $\mathcal{C}[P] \,\sqsubsetsim\, \mathcal{C}[Q]$. Dually, compositionality would also improve the scalability of our formal framework since, to show that $\mathcal{C}[P] \,\sqsubsetsim\, \mathcal{C}[Q]$ (for some context $\mathcal{C}[-]$), it suffices to obtain $P \,\sqsubsetsim\, Q$. 
For instance, in the case of \eqref{eq:1a}, compositionality would allow us to factor out the common code, \ie the servers $\ptit{S}_1$ and $\ptit{S}_2$, as the context $\ptit{S}_1 \piParal \ptit{S}_2 \piParal [-]$, and focus on showing that \begin{equation}\label{eq:1b} \ptit{C}_2 \;\sqsubsetsim\; \ptit{C}_1 \;\sqsubsetsim\; \ptit{C}_0 \end{equation} \subsection{Main Challenges:} \label{sec:main-challenges} The details are however far from straightforward. To begin with, we need to assess relative program cost over potentially infinite computations. Thus, rudimentary aggregate measures, such as adding up the total computation cost of processes and comparing this total at the end of the computation, are insufficient for system comparisons such as \eqref{eq:1a}. In such cases, a preliminary attempt at a solution would be to compare the \emph{relative cost} for \emph{every} server interaction (action): in the sense of \cite{Arun-Kumar:1992}, the preorder would then ensure that every \emph{costed} interaction by the inefficient clients must be matched by a corresponding \emph{cheaper} interaction by the more efficient client (and, dually, costed interactions by the efficient client must be matched by interactions from the inefficient client that are as costly or more). \begin{equation}\label{eq:clients-complicated} \begin{split} \!\!\!\ptit{C}_3 &\deftri \piRecX{\piAll{x_1}{\piAll{x_2}{\;\,\piOut{\ctit{srv}_1}{x_1}\,\piIn{x_1}{y}{\;\,\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\;\,\piFree{x_1}{\piFree{x_2}{\piOut{\ctit{ret}}{(y,z)}{w}}}}}}}} \end{split} \end{equation} There are however problems with this approach. Consider, for instance, $\ptit{C}_3$ defined in \eqref{eq:clients-complicated}. Even though this client allocates two channels for every iteration of server interactions, it does not exhibit any memory leaks since it deallocates them both at the end of the iteration. It may therefore be sensible for our preorder to equate $\ptit{C}_3$ with client $\ptit{C}_2$ of \eqref{eq:clients} by having $\ptit{C}_2 \;\sqsubsetsim\; \ptit{C}_3$ as well as $\ptit{C}_3 \;\sqsubsetsim\; \ptit{C}_2$. However, showing $\ptit{C}_3 \;\sqsubsetsim\; \ptit{C}_2$ would not be possible using the preliminary strategy discussed above, since $\ptit{C}_3$ must engage in more expensive computation (allocating two channels as opposed to one) by the time the interaction with the first server is carried out. Worse still, an analysis strategy akin to \cite{Arun-Kumar:1992} would not be applicable for a comparison involving the clients $\ptit{C}_1$ and $\ptit{C}_3$. In spite of the fact that over the course of its entire computation $\ptit{C}_3$ requires fewer resources than $\ptit{C}_1$, \ie it is more efficient, client $\ptit{C}_3$ appears to be \emph{less efficient} than $\ptit{C}_1$ after the interaction with the first server on channel $\ctit{srv}_1$ since, at that stage, it has allocated two new channels as opposed to one. However, $\ptit{C}_1$ becomes less efficient for the remainder of the iteration since it never deallocates the channel it allocates whereas $\ptit{C}_3$ deallocates both channels. To summarise, for comparisons $\ptit{C}_3 \;\sqsubsetsim\; \ptit{C}_2$ and $\ptit{C}_3 \;\sqsubsetsim\; \ptit{C}_1$, we need our analysis to allow a process to be \emph{temporarily inefficient} as long as it can recover later on. In this paper, we use a costed semantics to define an efficiency preorder to reason about the relative cost of processes over potentially infinite computation, based on earlier work by \cite{Kiehn05,LuttgenV06}. 
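To make this concrete, the table below tallies the net allocation cost that each client incurs per iteration of the client-server protocol (an informal back-of-the-envelope summary derived from the definitions in \eqref{eq:clients} and \eqref{eq:clients-complicated}, counting $+1$ per allocation and $-1$ per deallocation, in line with the costed semantics of Section~\ref{sec:language}): \begin{equation*} \begin{array}{c|ccc} & \text{allocations} & \text{deallocations} & \text{net cost per iteration}\\ \hline \ptit{C}_0 & 2 & 0 & +2\\ \ptit{C}_1 & 1 & 0 & +1\\ \ptit{C}_2 & 1 & 1 & 0\\ \ptit{C}_3 & 2 & 2 & 0 \end{array} \end{equation*} Although $\ptit{C}_2$ and $\ptit{C}_3$ both break even over a complete iteration, $\ptit{C}_3$ is temporarily two allocations ahead within an iteration; this transient inefficiency is precisely what an amortised analysis must be able to accommodate. 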
In particular, we adapt the concept of \emph{cost amortisation} to our setting; our preorders use it to compare processes that are eventually more efficient than others over the course of their entire computation, but are temporarily less efficient at certain stages of the computation. Issues concerning cost assessment are however not the only obstacles tackled in this work; there are also complications associated with the compositionality aspects of our proposed framework. More precisely, we want to limit our analysis to \emph{safe} contexts, \ie contexts that use resources in a sensible way, \eg not deallocating channels while they are still in use. In addition, we also want to consider behaviour \wrt a subset of the possible safe contexts. For instance, our clients from \eqref{eq:clients} only exhibit the same behaviour \wrt servers that $(i)$ accept \emph{(any number of)} requests on channels $\ctit{srv}_1$ and $\ctit{srv}_2$ containing a return channel, which then $(ii)$ use this channel at most \emph{once} to return the requested answer. We can characterise the interface between the servers and the clients using fairly standard channel type descriptions adapted from \cite{KobayashiPT:linearity} in \eqref{eq:2a}, where $\chantypW{\tV}$ describes a channel that can be used \emph{any} number of times (\ie the channel-type attribute $\unres$) to communicate values of type \tV, whereas $\chantypO{\tV}$ denotes an \emph{affine} channel (\ie a channel type with attribute $\affine$) that can be used \emph{at most} once to communicate values of type \tV: \begin{equation} \label{eq:2a} \ctit{srv}_1 : \chantypW{\chantypO{\tV_1}}, \quad \ctit{srv}_2 : \chantypW{\chantypO{\tV_2}} \end{equation} In the style of \cite{Yoshida07:linearity, hennessy04behavioural}, we could then use this interface to abstract away from the actual server implementations described in \eqref{eq:3} and state that, \wrt contexts that observe the channel mappings of \eqref{eq:2a}, client $\ptit{C}_2$ is more efficient than $\ptit{C}_1$ which is, in turn, more efficient than $\ptit{C}_0$. These can be expressed as: \begin{align} \label{eq:4} \ctit{srv}_1 : \chantypW{\chantypO{\tV_1}}, \ctit{srv}_2 : \chantypW{\chantypO{\tV_2}} &\,\models\; \ptit{C}_2 \,\sqsubsetsim\, \ptit{C}_1 \hspace{3cm}\\ \label{eq:5} \ctit{srv}_1 : \chantypW{\chantypO{\tV_1}}, \ctit{srv}_2 : \chantypW{\chantypO{\tV_2}} &\,\models\; \ptit{C}_1 \,\sqsubsetsim\, \ptit{C}_0 \end{align} Unfortunately, the machinery of \cite{Yoshida07:linearity, hennessy04behavioural} cannot be easily extended to our costed analysis for two main reasons. First, in order to limit our analysis to safe computation, we would need to show that clients $\ptit{C}_0$, $\ptit{C}_1$ and $\ptit{C}_2$ adhere to the channel usage stipulated by the type associations in \eqref{eq:2a}. However, the channel reuse in $\ptit{C}_1$ and $\ptit{C}_2$ (an essential feature to attain space efficiency) requires our analysis to associate potentially different types (\ie $\chantypO{\tV_1}$ and $\chantypO{\tV_2}$) to the same return channel; this channel reuse at different types amounts to a form of \emph{strong update}, a degree of flexibility not supported by \cite{Yoshida07:linearity, hennessy04behavioural}. Second, the equivalence reasoning mechanisms used in \cite{Yoshida07:linearity, hennessy04behavioural} would be substantially limiting for processes with channel reuse. 
More specifically, consider the slightly tweaked client implementation of $\ptit{C}_2$ below: \begin{align}\label{eq:6} \ptit{C}'_2 & \deftri \piRecX{\piAll{x}{\bigl(\piOutA{\ctit{srv}_1}{x}\,\piParal\,\piIn{x}{y}{(\piOutA{\ctit{srv}_2}{x}\,\piParal\,\piIn{x}{z}{\piFree{x}{\piOut{\ctit{ret}}{(y,z)}{w}}})}\bigr)}} \end{align} The only difference between the client in \eqref{eq:6} and the original one in \eqref{eq:clients} is that $\ptit{C}_2$ \emph{sequences} the service requests before the service inputs, \ie $\ldots\piOut{\ctit{srv}_1}{x}\,\piInA{x}{y}{\ldots}$ and $\ldots\piOut{\ctit{srv}_2}{x}\,\piInA{x}{z}{\ldots}$, whereas $\ptit{C}'_2$ parallelises them, \ie $\ldots\piOutA{\ctit{srv}_1}{x}\,\piParal\,\piInA{x}{y}{\ldots}$ and $\ldots\piOutA{\ctit{srv}_2}{x}\,\piParal\,\piInA{x}{z}{\ldots}$. Resource-centric type disciplines such as \cite{EFH:uniqueness:journal:12,AliasTypes:SmithWM00} preclude name matching for a particular resource once all the permissions to use that resource have been used up; this feature is essential to statically reason about a number of basic design patterns for reuse. In such type settings, it turns out that the client implementations $\ptit{C}_2$ and $\ptit{C}'_2$ exhibit the same behaviour because the return channel used by both clients for \emph{both} server interactions is private, \ie unknown to the respective servers; as a result, the servers cannot answer the service on that channel before they receive it on either $\ctit{srv}_1$ or $\ctit{srv}_2$.\footnote{Analogously, in the \pic, $\piRes{d}{(\piOutA{c}{d} \piParal \piIn{d}{x}{P})}$ is indistinguishable from $\piRes{d}{(\piOut{c}{d}{\piIn{d}{x}{P}})}$.} Through \emph{scope extrusion}, theories such as \cite{Yoshida07:linearity, hennessy04behavioural} can reason adequately about the first server interaction, and relate $\ldots\piOut{\ctit{srv}_1}{x}\,\piInA{x}{y}{\ldots}$ of $\ptit{C}_2$ with $\ldots\piOutA{\ctit{srv}_1}{x}\,\piParal\,\piIn{x}{y}{\ldots}$ of $\ptit{C}'_2$. However, they have no mechanism for tracking channel locality after scope extrusion, and thus no way of recovering the information that the return channel \emph{becomes private again} to the client after the first server interaction (since the servers use up the permission to use the return channel once they reply on it). This prohibits \cite{Yoshida07:linearity, hennessy04behavioural} from determining that the second server interaction is just an instance of the first server interaction, thus failing to relate these two implementations. In \cite{EFH:uniqueness:journal:12} we developed a substructural type system based around a type attribute describing channel \emph{uniqueness}, and this was used to statically ensure safe computations for \picr. In this work, we weave this type information into our framework, imbuing it with an operational permission-semantics to reason compositionally about the costed behaviour of (safe) processes. More specifically, in \eqref{eq:clients}, when $\ptit{C}_2$ allocates channel $x$, no other process knows about $x$: from a typing perspective, but also operationally, $x$ is \emph{unique} to $\ptit{C}_2$. Client $\ptit{C}_2$ then sends $x$ on $\ctit{srv}_1$ at an \emph{affine} type, which (by definition) limits the server to use $x$ at most once. At this point, from an operational perspective, $x$ is \emph{unique-after-1} (communication) use to $\ptit{C}_2$, the entity previously ``owning'' it. 
This means that after one communication step on $x$, (the derivative of) $\ptit{C}_2$ recognises that all the other processes apart from it must have used up the single affine permission for $x$, and hence $x$ becomes once again \emph{unique} to $\ptit{C}_2$. This also means that $\ptit{C}_2$ can safely \emph{reuse} $x$, possibly at a different object type (strong update), or else safely deallocate it. The concept of affinity is well-known in the process calculus community. By contrast, uniqueness (and its duality to affinity) is used far less. In a compositional framework, uniqueness can be used to record the guarantee at one end of a channel corresponding to the restriction associated with affine channel usage at the other; an operational semantics can be defined, tracking the \emph{permission transfer} of affine permissions back and forth between processes as a result of communication, addressing the aforementioned complications associated with idioms such as channel reuse. We employ such an operational (costed) semantics to define our efficiency preorders for concurrent processes with explicit resource management, based on the notion of amortised cost discussed above. \subsection{Paper Structure:} \label{sec:structure} Section~\ref{sec:language} introduces our language with constructs for explicit memory management and defines a costed semantics for it. We illustrate issues relating to resource usage in this language through a case study in Section~\ref{sec:case-study}, discussing different implementations for an unbounded buffer. Section~\ref{sec:cost-bisim} develops a labelled-transition system for our language that takes into consideration some representation of the observer and the permissions that are exchanged between the program and the observer; it is a typed transition system similar to \cite{PierceS96,hennessy04behavioural,Hennessy07}, nuanced to the resource-focussed type system of \cite{EFH:uniqueness:journal:12}. Based on this transition system, the section also defines a coinductive cost-based preorder and proves a number of properties about it. Section~\ref{sec:characterisation} justifies the cost-based preorder by relating it with a behavioural contextual preorder defined in terms of the reduction semantics of Section~\ref{sec:language}. Section~\ref{sec:proofs-relat-effic} applies the theory of Section~\ref{sec:cost-bisim} to reason about the efficiency of the unbounded buffer implementations of Section~\ref{sec:case-study}. Finally, Section~\ref{sec:RelatedWork} surveys related work and Section~\ref{sec:conclusion} concludes. \section{The Language} \label{sec:language} \begin{display}{\picr Syntax}{fig:syntax-ext} \begin{equation*} \begin{array}{l@{\hspace{1ex}}r@{\hspace{1ex}}lllllllllll} P, Q & \bnfdef & \piOut{u}{\vec{v}}{P} & \textsl{(output)} & \bnfsep & \piIn{u}{\vec{x}}{P} & \textsl{(input)} \\ & \bnfsep & \piNil & \textsl{(nil)} & \bnfsep & \piIf{u=v}{P}{Q} & \textsl{(match)} \\ & \bnfsep & \piRecX{P} & \textsl{(recursion)} & \bnfsep & x & \textsl{(process variable)} \\ & \bnfsep & P \piParal Q & \textsl{(parallel)} & \bnfsep & \piAll{x}{P} & \textsl{(allocate)} \\ & \bnfsep & \piFree{u}{P} & \textsl{(deallocate)}\\ \end{array} \end{equation*} \end{display} \figref{fig:syntax-ext} shows the syntax for our language, the resource \pic, or \picr for short. It has the standard \pic constructs with the exception of scoping, which is replaced with primitives for explicit channel allocation, \piAll{x}{P}, and deallocation, \piFree{x}{P}. 
The syntax assumes two separate denumerable sets of channel names $c,d \in \Chans$, and variables $x, y, z, w \in \Vars$, and lets identifiers $u,\,v$ range over both sets, $\Chans \cup \Vars$. The input construct, \piIn{c}{x}{P}, recursion construct, \piRecX{P}, and channel allocation construct, \piAll{x}{P}, are binders whereby free occurrences of the variables $x$ and $w$ in $P$ are bound. As opposed to more standard versions of the \pic, we \emph{do not} use name scoping to bind and bookkeep the visibility of names; we shall however use alternative mechanisms to track name knowledge and usage in subsequent development. \begin{display}{\picr Reduction Semantics}{fig:reduction-semantics} \textbf{Contexts}\\ \begin{mathpar} \begin{array}{rl} \ctxt &\bnfdef \quad \ctxtEmp{-} \quad | \quad \piCtxtPar{\ctxt}{P} \quad | \quad \piCtxtPar{P}{\ctxt}\\ \end{array}\\ \begin{array}{rll} \ctxtEmp{\sysSP} & \deftxt \sysSP \\ \ctxtPar{\ctxtGenN{\context}{\sysSP}}{Q} & \deftxt \sys{\sV'}{(P'\piParal Q)} \quad& \text{if } \ctxtGenN{\context}{\sysSP}=\sys{\sV'}{P'}\\ \ctxtPar{Q}{\ctxtGenN{\context}{\sysSP}} & \deftxt \sys{\sV'}{(Q\piParal P')} \quad& \text{if } \ctxtGenN{\context}{\sysSP}=\sys{\sV'}{P'}\\\\ \end{array} \end{mathpar} \textbf{Structural Equivalence}\\ \begin{equation*} \begin{array}{l@{\hspace{2ex}}l@{\piStructS}l@{\hspace{4ex}}l@{\hspace{2ex}}r@{\piStructS}l@{\hspace{4ex}}l@{\hspace{2ex}}l@{\piStructS}l} \rtit{sCom} & P \piParalL Q & Q\piParalL P & \rtit{sAss} & P \piParalL (Q \piParalL R) & (P \piParalL Q) \piParalL R & \rtit{sNil} & P \piParalL \piNil & P \\\\ \end{array} \end{equation*} \textbf{Reduction Rules}\\ \begin{mathpar} \begin{prooftree} \justifiedBy{\rtit{rCom}} \sys{\sV,c }{\piOut{c}{\vec{d}}{P} \piParal \piIn{c}{\vec{x}}{Q}}\piRedCost{0} \sys{\sV,c }{P\piParal Q\subC{\,\vec{d}\,}{\,\vec{x}\,}} \end{prooftree} \\ \begin{prooftree} \justifiedBy{\rtit{rThen}} \sys{\sV,c }{\piIf{c=c}{P}{Q}}\piRedCost{0} \sys{\sV,c }{P} \end{prooftree} \\ \begin{prooftree} \justifiedBy{\rtit{rElse}} \sys{\sV,c,d }{\piIf{c=d}{P}{Q}}\piRedCost{0} \sys{\sV,c,d }{Q} \end{prooftree} \\ \begin{prooftree} \strut \justifiedBy{\rtit{rRec}} \sysS{\piRecX{P}}\piRedCost{0} \sysS{P\subC{\piRecX{P}}{w}} \end{prooftree} \qquad \begin{prooftree} P \piStruct P' \quad \sysS{P'} \piRedCost{k} \sysS{Q'} \quad Q'\piStruct Q \justifiedBy{\rtit{rStr}} \sysSP\piRedCost{k} \sysS{Q} \end{prooftree} \\ \begin{prooftree} \justifiedBy{\rtit{rAll}} \sys{\sV }{\piAll{x}{P}}\piRedCost{+1} \sys{\sV,c }{P\subC{c}{x}} \end{prooftree} \qquad \begin{prooftree} \strut \justifiedBy{\rtit{rFree}} \sys{\sV,c }{\piFree{c}{P}}\piRedCost{-1} \sys{\sV }{P} \end{prooftree}\\ \end{mathpar} \textbf{Reflexive Transitive Closure}\\ \begin{equation*} \begin{prooftree} \strut \justifies \sysSP \piRedCost{0}^\ast \sysSP \end{prooftree} \qquad \begin{prooftree} \sysSP \piRedCost{k}^\ast \sys{\sV'}{P'} \qquad \sys{\sV'}{P'} \piRedCost{l} \sys{\sV''}{P''} \justifies \sysSP \piRedCost{{k+l}}^\ast \sys{\sV''}{P''} \end{prooftree} \end{equation*} \end{display} \picR processes run in a resource environment, ranged over by $\sV, \sVV$, representing predicates over channel names stating whether a channel is allocated or not. 
We find it convenient to denote such functions as a list of channels representing the set of channels that are allocated, \eg the list $c,d$ denotes the set $\sset{c,d}$, representing the resource environment returning \textit{true} for channels $c$ and $d$ and \textit{false} otherwise; in this representation, the order of the channels in the list is unimportant, but duplicate channels are disallowed; as shorthand, we also write $\sV,c$ to denote $\sV\cup\sset{c}$ whenever $c\not\in\sV$. In this paper we consider only resource environments with an \emph{infinite} number of deallocated channels, \ie $\sV$ is a total function. Models with finite resources can be easily accommodated by making $\sV$ partial; this would also entail a slight change in the semantics of the allocation construct, which could either block or fail whenever there are no deallocated resources left. Although the finite-resource setting is interesting in its own right, we focus on settings with infinite resources, as this lends itself better to the analysis of resource efficiency that follows. We refer to the pair \sysSP, consisting of a resource environment \sV\ and a \emph{closed} process\footnote{A closed process has no free variables. Note that the absence of name binders, \ie no name scoping, means that all names are free.} $P$ as a \emph{system}; note that \emph{not all} free names in $P$ need to be allocated, \ie present in \sV: intuitively, any name $c$ used by $P$ with $c \not\in \sV$ represents a \emph{dangling pointer}. Contexts consist of parallel composition of processes; they are however defined over systems, through the grammar and the respective definition at the top of \figref{fig:reduction-semantics}. The reduction relation is defined as the least \emph{contextual} relation over systems satisfying the rules in \figref{fig:reduction-semantics}. More specifically, our reduction relation leaves the following rule implicit: \begin{mathpar} \begin{prooftree} \sysS{P} \;\piRedCost{k}\; \sysS{Q} \justifiedBy{\rtit{rCtx}} \ctxtGen{\sysSP}\;\piRedCost{k}\; \ctxtGen{\sysS{Q}} \end{prooftree} \end{mathpar} Rule (\rtit{rStr}) extends reductions to structurally equivalent processes, $P\piStruct Q$, \ie processes that are identified up to superfluous \piNil\ processes, and commutativity/associativity of parallel composition (see the structural equivalence rules in \figref{fig:reduction-semantics}). Most rules follow those of the standard \pic, \eg (\rtit{rRec}), with the exception of those involving resource handling. For instance, the rule for communication (\rtit{rCom}) requires the communicating channel to be \emph{allocated}. Allocation (\rtit{rAll}) chooses a deallocated channel, allocates it, and substitutes it for the bound variable of the allocation construct.\footnote{The expected side-condition $c \!\not\in\! \sV$ is implicit in the notation $(\sV,c)$ used in the system $\sys{\sV,c}{P\subC{c}{x}}$ to which it reduces, since $c$ cannot be present in \sV\ for $\sV,c$ to be valid.} Deallocation (\rtit{rFree}) changes the state of a channel from allocated to deallocated, making it available for future allocations. The rules are annotated with a cost reflecting resource usage; allocation has a cost of $+1$, deallocation has a (negative) cost of $-1$, while the other reductions carry no cost, \ie $0$. \figref{fig:reduction-semantics} also shows the natural definition of the reflexive transitive closure of the costed reduction relation. In what follows, we use $k, l\in \Ints$ as integer metavariables to range over costs. 
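As a minimal illustration of how these costs accumulate (using only the rules (\rtit{rAll}) and (\rtit{rFree}) together with the reflexive transitive closure in \figref{fig:reduction-semantics}, and writing $c$ for whichever deallocated channel (\rtit{rAll}) happens to choose), allocating a channel and immediately deallocating it has a net cost of $0$: \begin{equation*} \sys{\sV}{\piAll{x}{\piFree{x}{\piNil}}} \;\piRedCost{+1}\; \sys{\sV,c}{\piFree{c}{\piNil}} \;\piRedCost{-1}\; \sys{\sV}{\piNil} \qquad\text{and hence}\qquad \sys{\sV}{\piAll{x}{\piFree{x}{\piNil}}} \;\piRedCost{0}^\ast\; \sys{\sV}{\piNil} \end{equation*} This net-zero accounting over a sequence of reductions is the pattern exhibited, per iteration, by the clients $\ptit{C}_2$ and $\ptit{C}_3$ of Section~\ref{sec:introduction}. 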
\begin{exa}\label{ex:bad-behaviour} The following reduction sequence illustrates potential unwanted behaviour resulting from resource mismanagement: \begin{align} \label{eq:54} &\sys{M, c }{\piFree{c}{(\piOutA{c}{\num{1}} \piParal \piIn{c}{x}{P})} \;\piParal\; \piAll{y}{ (\piOutA{y}{\num{42}} \piParal \piIn{y}{z}{Q})}} & \piRedCost{-1} \\ \label{eq:1} &\sys{M\phantom{, c} }{\piOutA{c}{\num{1}} \piParal \piIn{c}{x}{P} \;\piParal\; \piAll{y}{ (\piOutA{y}{\num{42}} \piParal \piIn{y}{z}{Q})}} & \piRedCost{+1} \\ \label{eq:2} &\sys{M, c }{\piOutA{c}{\num{1}} \piParal \piIn{c}{x}{P} \;\piParal\; \piOutA{c}{\num{42}} \piParal \piIn{c}{z}{Q}} \end{align} Intuitively, allocation should yield ``fresh'' channels, \ie channels that are not in use by any active process. This assumption is used by the right process in system \eqref{eq:54}, $\piAll{y}{ (\piOutA{y}{\num{42}} \piParal \piIn{y}{z}{Q})}$, to carry out a \emph{local} communication, sending the value \num{42} on some local channel $y$ that no other process is using. However, the premature deallocation of the channel $c$ by the left process in \eqref{eq:54}, $\piFree{c}{(\piOutA{c}{\num{1}} \piParal \piIn{c}{x}{P})}$, allows channel $c$ to be reallocated by the right process in the subsequent reduction, \eqref{eq:1}. This may then lead to unintended behaviour since we may end up with interference when communicating on $c$ in the residuals of the left and right processes, \eqref{eq:2}.\footnote{Operationally, we do not describe errors that may result from attempted communications on deallocated channels (we do not have error values). Such an attempt may occur after reduction \eqref{eq:54}, if the residual of the left process communicates on channel $c$. Rather, communications on deallocated channels are blocked.} $\Box$ \end{exa} \begin{display}{Type Attributes and Types}{fig:types} \begin{gather*} \begin{array}{lrllllll} \aV & \bnfdef & \unrestricted & \text{(unrestricted)} &\bnfsepp \affine & \text{(affine)} & \bnfsepp \unique{i} & \text{(unique after $i$ steps)}\\[1em] \tV & \bnfdef & \tVV & \text{(channel type)} &\bnfsepp \proctyp & \text{(process type)} \\ \tVV & \bnfdef & \chantyp{\vec{\tVV}}{\aV} & \text{(channel)} &\bnfsepp \chanrec{X}{\tVV} & \text{(recursion)} &\bnfsepp X & \text{(variable)} \end{array} \end{gather*} \end{display} In \cite{EFH:uniqueness:journal:12} we defined a type system that precludes unwanted behaviour such as that in \exref{ex:bad-behaviour}. The type syntax is shown in \figref{fig:types}. The main type entities are \emph{channel types}, denoted as $\chantyp{\tVVlst}{a}$, where \emph{type attributes} $a$ range over \begin{itemize} \item \affine, for affine, imposing a restriction/obligation on usage; \item $\unique{i}$, for unique-after-$i$ usages ($i \in \mathbb{N}$), providing guarantees on usage; \item $\unrestricted$, for unrestricted channel usage without restrictions or guarantees. \end{itemize} Uniqueness typing can be seen as dual to affine typing \cite{harrington:uniquenesslogic}, and in \cite{EFH:uniqueness:journal:12} we make use of this duality to keep track of uniqueness across channel-passing parallel processes: an attribute $\unique{i}$ typing an endpoint of a channel $c$ accounts for (at most) $i$ instances of affine attributes typing endpoints of that same channel. 
A channel type $\chantyp{\tVVlst}{a}$ also describes the type of the values that can be communicated on that channel, \tVVlst, which denotes a list of types $\tVV_1,\ldots,\tVV_n$ for $n\in\Nats$; when $n=0$, the type list is an empty list and we simply write $\chantyp{}{a}$. Note the difference between $\chantyp{\tVVlst}{\affine}$, \ie a channel with an affine usage restriction, and $\chantyp{\tVVlst}{\unique{1}}$, \ie a channel with a unique-after-1 usage guarantee. We denote fully unique channels as $\chantyp{\tVVlst}{\uniqueNow}$ in lieu of $\chantyp{\tVVlst}{\unique{0}}$. The type syntax also assumes a denumerable set of type variables $X,Y$, bound by the recursive type construct \chanrecX{\tVV}. In what follows, we restrict our attention to \emph{closed, contractive} types, where every type variable is bound and appears within a channel constructor $\chantyp{-}{\aV}$; this ensures that channel types such as $\chanrec{X}{X}$ are avoided. We assume an equi-recursive interpretation for our recursive types \cite{Pierce:2002:TPL} (see \rtit{tEq} in \figref{fig:typingrules}), characterised as the least type-congruence satisfying rule \rtit{eRec} in \figref{fig:typingrules}. \begin{display}{Typing processes}{fig:typingrules} \textbf{Logical rules}\\[1em] \begin{tabular}{llll} \quad & \begin{prooftree} \tprocP{\env, \envmap{u}{\chantyp{\tVlst}{\aV-1}}} \justifiedBy{tOut} \tproc{\env, \envmap{u}{\chantyp{\tVlst}{\aV}}, \overrightarrow{\envmap{v}{\tV}}}{\piOut{u}{\vec{v}}{P}} \end{prooftree} & \begin{prooftree} \tprocP{\env, \envmap{u}{\chantyp{\tVlst}{\aV-1}}, \overrightarrow{\envmap{x}{\tV}}} \justifiedBy{tIn} \tproc{\env, \envmap{u}{\chantypA{\tVlst}}}{\piIn{u}{\vec{x}}{P}} \end{prooftree}\;\quad & \begin{prooftree} \tprocP{\env_1} \quad \tproc{\env_2}{Q} \justifiedBy{tPar} \tproc{\env_1, \env_2\,}{\,P \piParal Q} \end{prooftree} \\[2em] & \begin{prooftree} u, v \in \Gamma \quad \tprocE{P} \quad \tprocE{Q} \justifiedBy{tIf} \tprocE{\piIf{u=v}{P}{Q}} \end{prooftree} \;\quad & \begin{prooftree} \tprocP{\env^\unrestricted, \envmap{x}{\proctyp}} \justifiedBy{tRec} \tproc{\env^\unrestricted}{\piRecX{P}} \end{prooftree} & \begin{prooftree} \strut \justifiedBy{tVar} \tproc{\envmap{x}{\proctyp}}{x} \end{prooftree} \\[2em] & \begin{prooftree} \tprocEP \justifiedBy{tFree} \tproc{\env,\envmap{u}{\chantyp{\vec{\tV}}{\uniqueNow}}}{\piFree{u}{P}} \end{prooftree} & \begin{prooftree} \tprocP{\env,\envmap{x}{\chantyp{\vec{\tV}}{\uniqueNow}}} \justifiedBy{tAll} \tprocE{\piAll{x}{P}} \end{prooftree} & \begin{prooftree} \strut \justifiedBy{tNil} \tproc{\emptyset}{\piNil} \end{prooftree} \\[2em] & \begin{prooftree} \tproc{\env'}{P} \qquad \env \ensuremath{\mathrel{\prec}} \env' \justifiedBy{tStr} \tproc{\env}{P} \end{prooftree} \\[2em] \end{tabular} where $\env^\unrestricted$ can only contain unrestricted assumptions and all bound variables are fresh.\\[1em] \textbf{Structural rules} $(\ensuremath{\mathrel{\prec}})$ is the least reflexive transitive relation satisfying \\ \begin{mathpar} \begin{prooftree} \tV = \tV_1 \circ \tV_2 \justifiedBy{tCon} \env, \envmap{u}{\tV} \ensuremath{\mathrel{\prec}} \env, \envmap{u}{\tV_1}, \envmap{u}{\tV_2} \end{prooftree} \qquad \begin{prooftree} \tV = \tV_1 \circ \tV_2 \justifiedBy{tJoin} \env, \envmap{u}{\tV_1}, \envmap{u}{\tV_2} \ensuremath{\mathrel{\prec}} \env, \envmap{u}{\tV} \end{prooftree} \qquad \begin{prooftree} \tV_1 \sim \tV_2 \justifiedBy{tEq} \env,\envmap{u}{\tV_1} \ensuremath{\mathrel{\prec}} \env,\envmap{u}{\tV_2} \end{prooftree} \\ \begin{prooftree} \strut 
\justifiedBy{tWeak} \env, \envmap{u}{\tV} \ensuremath{\mathrel{\prec}} \env \end{prooftree} \qquad \begin{prooftree} \tV_1 \subtype \tV_2 \justifiedBy{tSub} \env,\envmap{u}{\tV_1} \ensuremath{\mathrel{\prec}} \env,\envmap{u}{\tV_2} \end{prooftree} \qquad \begin{prooftree} \strut \justifiedBy{tRev} \env,\envmap{u}{\chantyp{\vec{\tV_1}}{\uniqueNow}} \ensuremath{\mathrel{\prec}} \env,\envmap{u}{\chantyp{\vec{\tV_2}}{\uniqueNow}} \end{prooftree} \end{mathpar}\\[1em] $\begin{array}{cc} \textbf{Equi-Recursion} & \textbf{Counting channel usage}\\[0.5em] \qquad\begin{prooftree} \phantom{\aV_1 \subtype \aV_2} \justifiedBy{eRec} \chanrecX{\tVV} \sim \tVV\subC{\chanrecX{\tVV}}{X} \end{prooftree} \qquad & \qquad\envmap{c}{\chantyp{\vec{\tV}}{\aV-1}} \deftxt \begin{cases} \varepsilon \qquad \textit{ (empty list)} & \text{if }\,\aV=\affine\\ \envmap{c}{\chantyp{\vec{\tV}}{\unrestricted}} & \text{if }\,\aV=\unrestricted\\ \envmap{c}{\chantyp{\vec{\tV}}{\unique{i}}} & \text{if }\,\aV=\unique{i + 1} \end{cases} \end{array}$ \\[1em] \textbf{Type splitting} \begin{equation*} \begin{prooftree} \justifiedBy{pUnr} \chantyp{\tVlst}{\unrestricted} = \chantyp{\tVlst}{\unrestricted} \circ \chantyp{\tVlst}{\unrestricted} \end{prooftree} \qquad \begin{prooftree} \justifiedBy{pProc} \proctyp = \proctyp \circ \proctyp \end{prooftree} \qquad \begin{prooftree} \justifiedBy{pUnq} \chantyp{\tVlst}{\unique{i}} = \chantyp{\tVlst}{\affine} \circ \chantyp{\tVlst}{\unique{i+1}} \end{prooftree} \end{equation*}\\[1em] \textbf{Subtyping} \begin{equation*} \begin{prooftree} \phantom{\aV_1 \subtype \aV_2} \justifiedBy{sIndx} \unique{i} \subtype \unique{i+1} \end{prooftree} \qquad \begin{prooftree} \phantom{\aV_1 \subtype \aV_2} \justifiedBy{sUnq} \unique{i} \subtype \unrestricted \end{prooftree} \qquad \begin{prooftree} \phantom{\aV_1 \subtype \aV_2} \justifiedBy{sAff} \unrestricted \subtype \affine \end{prooftree} \qquad \begin{prooftree} \aV_1 \subtype \aV_2 \justifiedBy{sTyp} \chantyp{\tVlst}{\aV_1} \subtype \chantyp{\tVlst}{\aV_2} \end{prooftree} \end{equation*} \end{display} \begin{gather*} \begin{prooftree} \tprocEP \qquad \dom(\env) \subseteq \sV \qquad \Gamma \text{ is consistent} \justifiedBy{tSys} \tproc{\env}{\sysSP} \end{prooftree} \end{gather*} The rules for typing processes are given in \figref{fig:typingrules} and take the usual shape $\tproc{\env}{P}$ stating that process $P$ is well-typed with respect to the environment $\env$, a list of pairs of identifiers and types. Systems are typed according to (\rtit{tSys}) above: a system $\sys{M}{P}$ is well-typed under $\env$ if $P$ is well-typed \wrt $\env$, $\env \vdash P$, and $\env$ only contains assumptions for channels that have been allocated, $\dom(\env) \subseteq \sV$. This restricts channel usage in $P$ to allocated channels and is key for ensuring safety. In \cite{EFH:uniqueness:journal:12}, typing environments are multisets of pairs of identifiers and types; we do not require them to be partial functions. However, the (top-level) typing rule for systems (\rtit{tSys}) requires that the typing environment is \emph{consistent}. A typing environment is consistent if whenever it contains multiple assumptions about a channel, then these assumptions can be derived from a \emph{single assumption} using the structural rules of the type system (see the structural rule \rtit{tCon} and the splitting rule \rtit{pUnq} in \figref{fig:typingrules}). 
\begin{defi}[Consistency]\label{def:consistency} A typing environment $\env$ is \emph{consistent} if there is a partial map $\env'$ such that $\env' \ensuremath{\mathrel{\prec}} \env$. \end{defi} The environment structural rules, $\env_1\ensuremath{\mathrel{\prec}}\env_2$, defined in \figref{fig:typingrules}, govern the way type environments are syntactically manipulated. For instance, rules \rtit{tCon} and \rtit{tJoin} state that type assumptions for the same identifier can be split or joined according to the type splitting relation $\tV = \tV_1 \circ \tV_2$, also defined in \figref{fig:typingrules}: apart from standard splitting of unrestricted channels, \rtit{pUnr}, and process types, \rtit{pProc}, we note that a unique-after-$i$ channel may be split into a unique-after-$(i+1)$ channel and an affine channel; we also note that affine channels are \emph{never} split. The environment structural rules also allow for weakening, \rtit{tWeak}, equi-recursive manipulation of types, \rtit{tEq} and \rtit{eRec}, and subtyping, \rtit{tSub}; the latter rule is defined in terms of the subtyping relation also stated in \figref{fig:typingrules} (bottom) where, for instance, an unrestricted channel can be used instead of an affine channel (that can be used at most once). The key novel structural rule is however \rtit{tRev}, which allows us to change (revise) the object type of a channel whenever we are guaranteed that the type assumption for that identifier is unique. These rules are recalled from \cite{EFH:uniqueness:journal:12} and the reader is encouraged to consult that document for more details. The consistency condition of \defref{def:consistency} ensures that there is no mismatch in the duality between the guarantees of unique types and the restrictions of affine types, which allows sound compositional type-checking by our type system. For instance, consistency rules out environments such as \begin{equation} \envmap{c}{\chantyp{\tVV}{\uniqueNow}}, \envmap{c}{\chantypO{\tVV}}\label{eq:1:lang} \end{equation} where a process typed under the guarantee that a channel $c$ is unique now, \envmap{c}{\chantyp{\tVV}{\uniqueNow}}, contradicts the fact that some other process may be typed under the affine usage allowed by the assumption $\envmap{c}{\chantypO{\tVV}}$. For similar reasons, consistency also rules out environments such as \begin{equation} \envmap{c}{\chantyp{\tVV}{\uniqueNow}}, \envmap{c}{\chantypW{\tVV}}\label{eq:2:lang} \end{equation} However, it does not rule out environments such as \eqref{eq:3:lang} even though the guarantee provided by \envmap{c}{\chantypUU{\tVV}{2}} is too conservative: it states that channel $c$ will become unique after \emph{two} uses but, in actual fact, it becomes unique after one use since the (top-level) environment contains only \emph{one} other affine type assumption, \envmap{c}{\chantypO{\tVV}}, that other processes can be typed at. \begin{equation} \envmap{c}{\chantypUU{\tVV}{2}}, \envmap{c}{\chantypO{\tVV}}\label{eq:3:lang} \end{equation} A less conservative uniqueness typing guarantee would therefore be \envmap{c}{\chantypUU{\tVV}{1}} as shown in \eqref{eq:3a:lang} below; this environment constitutes another case of a consistent environment allowed by Definition~\ref{def:consistency}. \begin{equation} \envmap{c}{\chantypUU{\tVV}{1}}, \envmap{c}{\chantypO{\tVV}}\label{eq:3a:lang} \end{equation} The type system is \emph{substructural}, implying that typing assumptions can be used \emph{only once} during typechecking \cite{Pierce:2004:ATT}. 
This is clearly manifested in the output and input rules, \rtit{tOut} and \rtit{tIn} in \figref{fig:typingrules}. In fact, using the operation $\envmap{c}{\chantyp{\vec{\tV}}{\aV-1}}$ (see\footnote{This operation on type assumptions, $\envmap{c}{\chantyp{\vec{\tV}}{\aV-1}}$, defined in \figref{fig:typingrules}, describes the cases where, when using an affine type assumption to typecheck a process, the continuation of the process in the rule premise is typed without that assumption (the operation returns no type assumption), whereas when using an unrestricted or a unique-after-$i$ assumption, the premise judgement is typed \wrt a (new) unrestricted or unique-after-$(i-1)$ assumption, respectively. Note that the operation $\envmap{c}{\chantyp{\vec{\tV}}{\aV-1}}$ is not defined for $\aV=\uniqueNow$. See \cite{EFH:uniqueness:journal:12} for more detail.} \figref{fig:typingrules}), rule \rtit{tOut} collapses three different possibilities for typing output processes, which could alternatively have been expressed as the three separate typing rules in \eqref{eq:7:lang}. \begin{equation}\label{eq:7:lang} \begin{split} &\begin{prooftree} \tprocEP \justifiedBy{tOutA} \tproc{\env,\, \envmap{u}{\chantypO{\tVlst}},\, \overrightarrow{\envmap{v}{\tV}}\;}{\;\piOut{u}{\vec{v}}{P}} \end{prooftree} \qquad\qquad \begin{prooftree} \tprocP{\env,\, \envmap{u}{\chantypW{\tVlst}}} \justifiedBy{tOutW} \tproc{\env,\, \envmap{u}{\chantypW{\tVlst}},\, \overrightarrow{\envmap{v}{\tV}}\;}{\;\piOut{u}{\vec{v}}{P}} \end{prooftree} \\[0.5em] &\hspace{3cm}\begin{prooftree} \tprocP{\env,\, \envmap{u}{\chantyp{\tVlst}{\unique{i}}}} \justifiedBy{tOutU} \tproc{\env,\, \envmap{u}{\chantyp{\tVlst}{\unique{i+1}}},\, \overrightarrow{\envmap{v}{\tV}}\;}{\;\piOut{u}{\vec{v}}{P}} \end{prooftree} \end{split} \end{equation} Rule \rtit{tOutA} states that an output of values $\vec{v}$ on channel $u$ is allowed if the type environment has an \emph{affine} channel-type assumption for that channel, \envmap{u}{\chantypO{\tVlst}}, and the corresponding type assumptions for the values communicated, $\overrightarrow{\envmap{v}{\tV}}$, match the object type of the affine channel-type assumption, \tVlst; in the rule premise, the continuation $P$ must also be typed \wrt the \emph{remaining} assumptions in the environment, \emph{without} the assumptions consumed by the conclusion. Rule \rtit{tOutW} is similar, but permits outputs on $u$ for environments with an \emph{unrestricted} channel-type assumption for that channel, \envmap{u}{\chantypW{\tVlst}}. The continuation $P$ is typechecked \wrt the remaining assumptions and a \emph{new} assumption, $\envmap{u}{\chantypW{\tVlst}}$; this assumption is identical to the one consumed in the conclusion, so as to model the fact that uses of channel $u$ are unrestricted. Rule \rtit{tOutU} is again similar, but it allows outputs on channel $u$ for a \emph{``unique after $i\!+\!1$''} channel-type assumption; in the premise of the rule, $P$ is typechecked \wrt the remaining assumptions and a \emph{new} assumption $\envmap{u}{\chantyp{\tVlst}{\unique{i}}}$, where $u$ is now \emph{unique after $i$} uses. 
Analogously, the input rule, \rtit{tIn}, also encodes three input cases (listed below): \begin{equation} \label{eq:8:lang} \begin{split} &\begin{prooftree} \tprocP{\env, \overrightarrow{\envmap{x}{\tV}}} \justifiedBy{tInO} \tproc{\env, \envmap{u}{\chantypO{\tVlst}}}{\piIn{u}{\vec{x}}{P}} \end{prooftree} \qquad\; \begin{prooftree} \tprocP{\env, \envmap{u}{\chantypW{\tVlst}}, \overrightarrow{\envmap{x}{\tV}}} \justifiedBy{tInW} \tproc{\env, \envmap{u}{\chantypW{\tVlst}}}{\piIn{u}{\vec{x}}{P}} \end{prooftree} \qquad \begin{prooftree} \tprocP{\env, \envmap{u}{\chantyp{\tVlst}{\unique{i}}}, \overrightarrow{\envmap{x}{\tV}}} \justifiedBy{tInU} \tproc{\env, \envmap{u}{\chantyp{\tVlst}{\unique{i+1}}}}{\piIn{u}{\vec{x}}{P}} \end{prooftree}\quad \end{split} \end{equation} Parallel composition (\rtit{tPar}) enforces the substructural treatment of type assumptions, by ensuring that type assumptions are used by either the left process or the right, but not by both. However, some type assumption can be \emph{split} using contraction, \ie rules (\rtit{tStr}) and (\rtit{tCon}). For example, an assumption $c : \chantyp{\tVlst}{\unique{i}}$ can be split as $c : \chantyp{\tVlst}{\affine}$ and $c : \chantyp{\tVlst}{\unique{i+1}}$---see (\rtit{pUnq}). The rest of the rules in Figure~\ref{fig:typingrules} are fairly straightforward. Even though these typing rules do not require \env to be consistent, the consistency requirement at the top level typing judgement (\rtit{tSys}) ensures that whenever a process is typed \wrt a unique assumption for a channel, $\chantypU{\tVlst}$, no other process has access to that channel. It can therefore safely deallocate it (\rtit{tFree}), or change the object type of the channel (\rtit{tRev}). Dually, when a channel is newly allocated it is assumed unique (\rtit{tAll}). Note also that name matching is only permitted when channel permissions are owned, $u, v \in \Gamma $ in (\rtit{tIf}). Uniqueness can therefore also be thought of as ``freshness'', a claim we substantiate further in Section~\ref{sec:Bisimulation}. In \cite{EFH:uniqueness:journal:12} we prove the usual subject reduction and progress lemmas for this type system, given an (obvious) error relation. 
\begin{exa} All client implementations discussed in Section~\ref{sec:introduction} typecheck \wrt the type environment $$\env=\emap{\ctit{srv}_1}{\chantypW{\chantypO{\tV_1}}}, \emap{\ctit{srv}_2}{\chantypW{\chantypO{\tV_2}}}, \emap{\ctit{ret}}{\chantypW{\tV_1,\tV_2}}.$$ For instance, to typecheck $\ptit{C}_2$ from \eqref{eq:clients}, we can apply the typing rules \rtit{tRec} and \rtit{tAll} from Figure~\ref{fig:typingrules} to obtain the typing sequent: \begin{equation}\label{eq:7:ts} \tproc{\env,\,\emap{w}{\proctyp},\, \emap{x}{\chantypU{\tV_1}}\;}{\;\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\;\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\;\piFree{x}{\;\piOut{\ctit{ret}}{(y,z)}{\;w}}}}} \end{equation} Using the environment structural rules (\ie \rtit{tCon}) we can split the type assumption for $x$: \begin{equation*} \env,\, \emap{w}{\proctyp},\, \emap{x}{\chantypU{\tV_1}} \quad\ensuremath{\mathrel{\prec}}\quad \env,\, \emap{w}{\proctyp},\, \emap{x}{\chantypO{\tV_1}},\, \emap{x}{\chantypUU{\tV_1}{1}} \end{equation*} Using \rtit{tStr} and \rtit{tOut} we can type \eqref{eq:7:ts} to obtain \begin{equation*} \tproc{\env,\,\emap{w}{\proctyp},\, \emap{x}{\chantypUU{\tV_1}{1}}\;}{\;\piIn{x}{y}{\;\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\;\piFree{x}{\;\piOut{\ctit{ret}}{(y,z)}{\;w}}}}} \end{equation*}\, After applying \rtit{tIn} to typecheck the input, we are left with the sequent \begin{equation*} \tproc{\env,\, \emap{w}{\proctyp},\, \emap{x}{\chantypU{\tV_1}},\, \emap{y}{\tV_1} \;}{\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\;\piFree{x}{\;\piOut{\ctit{ret}}{(y,z)}{\;w}}}} \end{equation*} In particular, we note that the input typing rule stipulates that the input continuation process needs to type \wrt the following type assumption for $\emap{x}{\chantyp{\tV_1}{{\unique{1} - 1}}}$ which is equal to $\emap{x}{\chantypU{\tV_1}}$. Since $x$ is unique now, we can change the object type from $\tV_1$ to $\tV_2$ using \rtit{tRev}, which allows us to type the interactions with $\ctit{srv}_2$ in analogous fashion. This leaves us with \begin{equation*} \tproc{\env,\, \emap{w}{\proctyp},\, \emap{x}{\chantypU{\tV_2}},\, \emap{y}{\tV_1},\, \emap{z}{\tV_2} \;}{\piFree{x}{\;\piOut{\ctit{ret}}{(y,z)}{\;w}}} \end{equation*} which we can discharge using rules \rtit{tFree}, \rtit{tOut} and \rtit{tVar}. 
\end{exa} \newcommand{\ensuremath{\tV_\text{\rm rec}}\xspace}{\ensuremath{\tV_\text{\rm rec}}\xspace} \newcommand{\ensuremath{\text{\rm Buff}}\xspace}{\ensuremath{\text{\rm Buff}}\xspace} \newcommand{\ensuremath{\text{\rm eBuff}}\xspace}{\ensuremath{\text{\rm eBuff}}\xspace} \newcommand{\ensuremath{\text{\rm Frn}}\xspace}{\ensuremath{\text{\rm Frn}}\xspace} \newcommand{\ensuremath{\text{\rm Frn'}}\xspace}{\ensuremath{\text{\rm Frn'}}\xspace} \newcommand{\pFff}[1]{\ensuremath{\text{\rm Frn''}(#1)}\xspace} \newcommand{\pFfff}[2]{\ensuremath{\text{\rm Frn'''}(#1,#2)}\xspace} \newcommand{\ensuremath{\text{\rm Bck}}\xspace}{\ensuremath{\text{\rm Bck}}\xspace} \newcommand{\ensuremath{\text{\rm Bck'}}\xspace}{\ensuremath{\text{\rm Bck'}}\xspace} \newcommand{\pBbb}[1]{\ensuremath{\text{\rm Bck''}(#1)}\xspace} \newcommand{\pBbbb}[2]{\ensuremath{\text{\rm Bck'''}(#1,#2)}\xspace} \newcommand{\ensuremath{\text{\rm eBk}}\xspace}{\ensuremath{\text{\rm eBk}}\xspace} \newcommand{\ensuremath{\text{\rm eBk'}}\xspace}{\ensuremath{\text{\rm eBk'}}\xspace} \newcommand{\pBEee}[1]{\ensuremath{\text{\rm eBk''}(#1)}\xspace} \newcommand{\pBEeee}[3]{\ensuremath{\text{\rm eBk'''}(#1,#2,#3)}\xspace} \newcommand{\pBEeeee}[2]{\ensuremath{\text{\rm eBk''''}(#1,#2)}\xspace} \newcommand{\ensuremath{\env_\text{ext}}\xspace}{\ensuremath{\env_\text{ext}}\xspace} \section{A Case Study} \label{sec:case-study} Resource management is particularly relevant to programs manipulating (unbounded) regular structures. We consider the concurrent implementation of an unbounded buffer, \ensuremath{\text{\rm Buff}}\xspace, receiving values to queue on channel \ctit{in} and dequeuing values by outputting on channel \ctit{out}. \begin{align*} \ensuremath{\text{\rm Buff}}\xspace & \;\deftxt\; \piIn{\ctit{in}}{y}{\;\piAll{z}{\;\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_1}{(y,z)}\bigr)}} \;\;\piParalS\;\; \piIn{c_1}{(y,z)}{\;{\piOut{\ctit{out}}{y}{\;\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}}\\ \ensuremath{\text{\rm Frn}}\xspace & \;\deftxt\; \piRec{w}{\;\piIn{b}{x}{\;\piIn{\ctit{in}}{y}{\piAll{z}{\;\bigl(w\piParal \piOutA{b}{z} \piParal \piOutA{x}{(y,z)}\bigr)}}}}\\ \ensuremath{\text{\rm Bck}}\xspace & \;\deftxt\; \piRec{w}{\;\piIn{d}{x}{\;\piIn{x}{(y,z)}{\;\piOut{\ctit{out}}{y}{\;\bigl(w \piParal \piOutA{d}{z}\bigr)}}}} \end{align*} In order to decouple input requests from output requests while still preserving the order of inputted values, the process handling inputs in \ensuremath{\text{\rm Buff}}\xspace, $\piIn{\ctit{in}}{y}{\piAll{z}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_1}{(y,z)}\bigr)}}$, stores inputted values $v_1,\ldots,v_n$ as a queue of interconnected outputs \begin{equation} \piOutA{c_1}{(v_1,c_2)} \piParalS \ldots \piParalS \piOutA{c_n}{(v_n,c_{n+1})}\label{eq:4cs} \end{equation} on the internal\footnote{Subsequent allocated channels are referred to as $c_2,c_3,$ \etc.} channels $c_1,\ldots,c_{n+1}$. The process handling the outputs, $\piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}}$, then reads from the head of this queue, \ie the output on channel $c_1$, so as to obtain the first value inputted, $v_1$, and the next head of the queue, $c_2$. 
The input and output processes are defined in terms of the recursive processes, \ensuremath{\text{\rm Frn}}\xspace and \ensuremath{\text{\rm Bck}}\xspace \resp, which are parameterised by the channel to output (\resp input) on next through the channels $b$ and $d$.\footnote{This models parametrisable process definitions \ensuremath{\text{\rm Frn}}\xspace(x) and \ensuremath{\text{\rm Bck}}\xspace(x) within our language.} Since the buffer is \emph{unbounded}, the number of internal channels used for the queue of interconnected outputs, \eqref{eq:4cs}, is not fixed and these channels cannot therefore be created up front. Instead, they are created on demand by the input process for every value inputted, using the \picr construct \piAll{z}{P}. The newly allocated channel $z$ is then passed on the next iteration of \ensuremath{\text{\rm Frn}}\xspace through channel $b$, \piOutA{b}{z}, and communicated as the next head of the queue when adding the subsequent queue item; this is received by the output process when it inputs the value at the head of the chain and passed on the next iteration of \ensuremath{\text{\rm Bck}}\xspace through channel $d$, \piOutA{d}{z}. \subsection{Typeability and behaviour of the Buffer} \label{sec:typab-behav-pbuf} Our unbounded buffer implementation, \ensuremath{\text{\rm Buff}}\xspace, can be typed \wrt the type environment \begin{equation} \label{eq:8cs} \env_\text{int} \;\deftxt\; \emap{\ctit{in}}{\chantypW{\tV}},\, \emap{\ctit{out}}{\chantypW{\tV}},\, \emap{b}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{d}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{c_1}{\chantypU{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}} \end{equation} where \tV\ is the type of the values stored in the buffer and \ensuremath{\tV_\text{\rm rec}}\xspace\ is a recursive type defined as $$\ensuremath{\tV_\text{\rm rec}}\xspace \;\deftxt\; \chanrecX{\chantypUU{\tV, X}{1}}.$$ This recursive type is used to type the internal channels $c_1,\ldots,c_{n+1}$ --- recall that in \eqref{eq:4cs} these channels carry channels of the same kind in order to link to one another as a chain of outputs. In particular, using the typing rules of \secref{sec:language} we can prove the following typing judgements: \begin{align} \label{eq:9cs} \tproc{\emap{\ctit{in}}{\chantypW{\tV}},\, \emap{b}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{c_1}{\chantypO{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}}\,}{}&\;\piIn{\ctit{in}}{y}{\;\piAll{z}{\;\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_1}{(y,z)}\bigr)}}\\ \label{eq:10cs} \tproc{\emap{\ctit{out}}{\chantypW{\tV}},\, \emap{d}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{c_1}{\chantypUU{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}{1}}\,}{}&\;\piIn{c_1}{(y,z)}{\;{\piOut{\ctit{out}}{y}{\;\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{align} From the perspective of a user of the unbounded buffer, \ensuremath{\text{\rm Buff}}\xspace implements the interface defined by the environment $$\ensuremath{\env_\text{ext}}\xspace\;\deftxt\; \emap{\ctit{in}}{\chantypW{\tV}}, \emap{\ctit{out}}{\chantypW{\tV}}$$ abstracting away from the implementation channels $b, d$ and $c_1$. 
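For completeness, the typeability of \ensuremath{\text{\rm Buff}}\xspace itself under $\env_\text{int}$ then follows by composing \eqref{eq:9cs} and \eqref{eq:10cs} through rule \rtit{tPar} of \figref{fig:typingrules}, after splitting the unique assumption for $c_1$ into an affine and a unique-after-$1$ assumption via the structural rules, in the same way as for the client channel $x$ earlier (a sketch, glossing over routine bookkeeping):
\begin{equation*}
\env_\text{int} \;\ensuremath{\mathrel{\prec}}\; \emap{\ctit{in}}{\chantypW{\tV}},\, \emap{b}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}},\, \emap{c_1}{\chantypO{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}},\;\; \emap{\ctit{out}}{\chantypW{\tV}},\, \emap{d}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}},\, \emap{c_1}{\chantypUU{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}{1}}
\end{equation*}
the two halves of this split are exactly the type environments used in \eqref{eq:9cs} and \eqref{eq:10cs}.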
\subsection{A resource-conscious Implementation of the Buffer}
\label{sec:more-effic-impl}
When the buffer implementation of \ensuremath{\text{\rm Buff}}\xspace\ retrieves values from the head of the internal queue, \eg (\ref{eq:4cs}), the channel holding the initial value, \ie $c_1$ in (\ref{eq:4cs}), is never used again, even though it is left allocated in memory. This is repeated for every value that is stored in and retrieved from the buffer, and amounts to the equivalent of a \emph{``memory leak''}. A more resource-conscious implementation of the unbounded buffer is \ensuremath{\text{\rm eBuff}}\xspace, defined in terms of the previous input process used for \ensuremath{\text{\rm Buff}}\xspace, and a modified output process, $\piIn{c_1}{(y,z)}{\piFree{c_1}{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}}$, which uses the tweaked recursive process, \ensuremath{\text{\rm eBk}}\xspace.
\begin{align*} \ensuremath{\text{\rm eBuff}}\xspace & \deftxt \piIn{\ctit{in}}{y}{\piAll{z}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_1}{(y,z)}\bigr)}} \piParal \piIn{c_1}{(y,z)}{\piFree{c_1}{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}}\\ \ensuremath{\text{\rm eBk}}\xspace & \deftxt \piRec{w}{\;\piIn{d}{x}{\;\piIn{x}{(y,z)}{\;\piFree{x}{\;\piOut{\ctit{out}}{y}{\bigl(w \piParal \piOutA{d}{z}\bigr)}}}}} \end{align*}
The main difference between \ensuremath{\text{\rm Buff}}\xspace and \ensuremath{\text{\rm eBuff}}\xspace is that the latter deallocates the channel at the head of the internal chain once it is consumed. We can typecheck \ensuremath{\text{\rm eBuff}}\xspace as safe since no other process uses the internal channels making up the chain after deallocation. More specifically, the typeability of \ensuremath{\text{\rm eBuff}}\xspace \wrt $\env_\text{int}$ of \eqref{eq:8cs} follows from \eqref{eq:9cs} and the type judgement below:
\begin{equation*} \tproc{\emap{\ctit{out}}{\chantypW{\tV}},\, \emap{d}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{c_1}{\chantypUU{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}{1}}\,}{\;\piIn{c_1}{(y,z)}{\;\piFree{c_1}{\;{\piOut{\ctit{out}}{y}{\;\bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}}}} \end{equation*}
Note that by the typing rule \rtit{tIn} of \figref{fig:typingrules}, we need to typecheck the continuation of the input process, \piFree{c_1}{\;{\piOut{\ctit{out}}{y}{\;\bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \wrt the type environment
\begin{equation*} \emap{\ctit{out}}{\chantypW{\tV}},\, \emap{d}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{c_1}{\chantypU{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}},\, \emap{y}{\tV},\, \emap{z}{\ensuremath{\tV_\text{\rm rec}}\xspace} \end{equation*}
where, in particular, $c_1$ is now assigned a \emph{unique} channel type. According to the typing rule \rtit{tFree}, this suffices to safely type the respective deallocation of $c_1$.
\section{A Cost-Based Preorder}
\label{sec:cost-bisim}
We define our cost-based preorder as a \emph{bisimulation relation} that relates two systems $\sys{\sV}{P}$ and $\sys{\sVV}{Q}$ whenever they have equivalent behaviour and when, in addition, $\sys{\sV}{P}$ is more efficient than $\sys{\sVV}{Q}$. We are interested in reasoning about \emph{safe} computations, aided by the type system described in Section~\ref{sec:language}.
For this reason, we limit our analysis to instances of $\sys{\sV}{P}$ and $\sys{\sVV}{Q}$ that are \emph{well-typed}, \ie that there exist (consistent) environments $\envv,\envv'$ such that $\tproc{\envv}{\sys{\sV}{P}}$ and $\tproc{\envv'}{\sys{\sVV}{Q}}$. In order to preserve safety, we also need to reason under the assumption of safe contexts. Again, we employ the type system described in Section~\ref{sec:language} and characterise the (safe) context through a type environment that typechecks it, $\env_\mathit{obs}$. Thus our bisimulation relations take the form of a typed relation, indexed by type environments \cite{hennessy04behavioural}:
\begin{align}\label{eq:typed-relation} \env_\mathit{obs} & \vDash (\sys{\sV}{P}) \;\relR\; (\sys{\sVV}{Q}) \end{align}
Behavioural reasoning for safe systems is achieved by ensuring that the overall type environment $(\env_\mathit{sys}, \env_\mathit{obs})$, consisting of the environment typing $\sys{\sV}{P}$ and $\sys{\sVV}{Q}$, say $\env_\mathit{sys}$, and the observer environment $\env_\mathit{obs}$, is \emph{consistent} according to Definition~\ref{def:consistency}. This means that there exists a global environment, $\env_\mathit{global}$, which can be decomposed into $\env_\mathit{obs}$ and $\env_\mathit{sys}$; it also means that the observer process, which is universally quantified by our semantic interpretation \eqref{eq:typed-relation}, typechecks when composed in parallel with $P$, \resp $Q$ (see \rtit{tPar} of \figref{fig:typingrules}). There is one other complication worth highlighting regarding \eqref{eq:typed-relation}: although both systems $\sys{\sV}{P}$ and $\sys{\sVV}{Q}$ are related \wrt the same \emph{observer}, $\env_\mathit{obs}$, they can each be typed under \emph{different} typing environments. For instance, consider the two clients $\ptit{C}_0$ and $\ptit{C}_1$ from the introduction that we would like to relate:
\begin{equation} \label{eq:1:bisim} \begin{split} \ptit{C}_0 &\deftri \piRecX{\;\piAll{x_1}{\piAll{x_2}\;\piOut{\ctit{srv}_1}{x_1}\,\piIn{x_1}{y}{\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\piOut{\ctit{c}}{(y,z)}{w}}}}} \\ \ptit{C}_1 &\deftri \piRecX{\;\piAll{x}{\;\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\piOut{\ctit{c}}{(y,z)}{w}}}}} \end{split} \end{equation}
Even though, initially, they may be typed by the same type environment, after a few steps, the derivatives of $\ptit{C}_0$ and $\ptit{C}_1$ must be typed under different typing environments, because $\ptit{C}_0$ allocates two channels, while $\ptit{C}_1$ only allocates a single channel. Our typed relations allow for this by \emph{existentially quantifying} over the type environments typing the respective systems. All this is achieved indirectly through the use of \emph{configurations}.
\begin{defi}[Configuration]\label{def:configuration} The triple \confESP\ is a configuration if and only if $\dom(\env) \subseteq \sV$ and there exists some \envv such that $(\env, \envv)$ is consistent and $\tproc{\envv}{\sysSP}$. \end{defi}
\noindent Note that, in a configuration \confESP\ (where \env types some implicit observer):
\begin{itemize} \item $c\in(\dom(\env)\cup \names(P))$ implies $c\in\sV$, \ie \sV\ is a global resource environment accounting for both $P$ and $\env$. \item $c \in \sV$ and $c\not\in(\dom(\env)\cup \names(P))$ denotes a resource leak for channel $c$. \item $c\not\in\dom(\env)$ implies that channel $c$ is not known to the observer; in some sense, this mimics name scoping in more standard \pic settings.
\end{itemize}
\begin{defi}[Typed Relation]\label{def:typed-relation} A type-indexed relation $\relR$ relates systems under an observer characterized by a context $\env$; we write \begin{equation*} \env \vDash \sys{M}{P} \; \relR \; \sys{N}{Q} \end{equation*} if $\relR$ relates $\confE{\sys{M}{P}}$ and $\confE{\sys{N}{Q}}$, and both $\confE{\sys{M}{P}}$ and $\confE{\sys{N}{Q}}$ are configurations. \end{defi}
\subsection{Labelled Transition System}
\label{sec:LTS}
In order to be able to reason coinductively over our typed relations, we define a labelled transition system (LTS) over configurations. Apart from describing the behaviour of the system \sysSP in a configuration \confESP, the LTS also models interactions between the system and an observer typed under \env. Our LTS is also \emph{costed}, assigning a cost to each form of transition. The costed LTS, whose actions take the form $\piRedDecCost{\;\mu\;}{k}$, is defined in \figref{fig:LTS}, in terms of a top-level rule, \rtit{lRen}, and a pre-LTS, denoted as $\piRedDecCostPre{\;\mu\;}{k}$. The rule \rtit{lRen} allows us to rename channels for transitions derived in the pre-LTS, as long as this renaming is invisible to the observer, and is comparable to alpha-renaming of scoped bound names in the standard \pic. It relies on the renaming-modulo (observer) type environments given in Definition~\ref{def:renaming}.
\begin{defi}[Renaming Modulo \env]\label{def:renaming} Let $\sigma_\env :\Names \mapsto \Names$ range over bijective name substitutions satisfying the constraint that \begin{math} c\in\dom(\env) \text{ implies } c\sigma_\env = c\sigma_\env^{-1} = c \end{math}. \end{defi}
The renaming introduced by \rtit{lRen} allows us to relate the clients $\ptit{C}_0$ and $\ptit{C}_1$ of \eqref{eq:1:bisim} \wrt an observer environment such as $\ctit{srv}_1 : \chantypW{\chantypO{\tV_1}}, \ctit{srv}_2 : \chantypW{\chantypO{\tV_2}}$ of \eqref{eq:2a} and some appropriate common set of resources \sV\ even when, after the initial channel allocations, the two clients communicate potentially different (newly allocated) channels on $\ctit{srv}_1$. The rule is particularly useful when, later on, we also need to match the output of a newly allocated channel on $\ctit{srv}_2$ from $\ptit{C}_0$ with the output on the previously allocated channel from $\ptit{C}_1$ on $\ctit{srv}_2$. The renaming-modulo observer environments function can be used for $\ptit{C}_1$ at that stage --- even though the client reuses a channel previously communicated to the observer --- because the respective observer information relating to that channel is lost, \ie it is not in the domain of the observer environment; see the discussion for \rtit{lOut} and \rtit{lIn} below for an explanation of how observers lose information. This mechanism differs from standard scope-extrusion techniques for \pic which assume that, once a name has been extruded, it remains forever known to the observer. As a result, there are more opportunities for renaming in our calculus than there are in the standard \pic.
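To illustrate Definition~\ref{def:renaming} with a small (hypothetical) instance: for an observer environment whose domain contains only $\ctit{srv}_1$ and $\ctit{srv}_2$, any bijection fixing these two names qualifies; in particular, the swap of two allocated channels $d$ and $d'$ lying outside $\dom(\env)$,
\begin{equation*}
\sigma_\env(d) \;=\; d', \qquad \sigma_\env(d') \;=\; d, \qquad \sigma_\env(c) \;=\; c \quad \text{for all other } c \in \Names,
\end{equation*}
is a valid $\sigma_\env$, and it is exactly this kind of renaming that \rtit{lRen} exploits when matching the allocations of $\ptit{C}_0$ against those of $\ptit{C}_1$.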
\begin{display}{LTS Process Moves}{fig:LTS} \small \textbf{Costed Transitions and pre-Transitions}\\ \begin{mathpar} \begin{prooftree} \confE{\bigl(\sysSP\bigr)\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \justifiedBy{lRen} \confESP \;\piRedDecCost{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \end{prooftree} \\ \begin{prooftree} \justifiedBy{lOut} \conf{\env,\emap{c}{\chantypA{\tVlst}}\;}{\;\sys{\sV \;}{\;\piOut{c}{\vec{d}}{P}}} \;\piRedDecCostPre{\actout{c}{\vec{d}}}{0}\; \conf{\env,\emap{c}{\chantyp{\tVlst}{\aV-1}}, \envmap{\vec{d}}{\vec{\tV}}\;}{\;\sys{\sV \;}{\;P}} \end{prooftree} \\% [1.5em] \begin{prooftree} \justifiedBy{lIn} \conf{\env,\emap{c}{\chantypA{\tVlst}}, \emap{\vec{d}}{\vec{\tV}}\;}{\;\sys{\sV \;}{\;\piIn{c}{\vec{x}} {P}}} \;\piRedDecCostPre{\actin{c}{\vec{d}}}{0}\; \conf{\env,\emap{c}{\chantyp{\tVlst}{\aV-1}}\;}{\;\sys{\sV \;}{\;P \subC{\vec{d}}{\vec{x}}}} \end{prooftree} \\% [1.5em] \begin{prooftree} \conf{\env_1}{\sysS{P}} \;\piRedDecCostPre{\actout{c}{\vec{d}}}{0}\; \conf{\env'_1}{\sysS{P'}} \qquad \conf{\env_2}{\sysS{Q}} \;\piRedDecCostPre{\actin{c}{\vec{d}}}{0}\; \conf{\env'_2}{\sysS{Q'}} \justifiedBy{lCom-L} \conf{\env}{\sysS{P \piParal Q}} \;\piTauCostPre{0}\; \conf{\env} {\sysS{P' \piParal Q'}} \end{prooftree} \\ \begin{prooftree} \confESP \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \justifiedBy{lPar-L} \confE{\sysS{P \piParal Q}} \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P' \piParal Q}} \end{prooftree} \\ \begin{prooftree} \env \ensuremath{\mathrel{\prec}} \env' \justifiedBy{lStr} \conf{\env}{\sysSP} \;\piRedDecCostPre{\;\actenv\;}{0}\; \conf{\env'}{\sysSP} \end{prooftree} \qquad\qquad \begin{prooftree} \strut \justifiedBy{lRec} \confE{\sysS{\piRecX{P}}} \;\piTauCostPre{0}\; \confE{\sysS{P\subC{\piRecX{P}}{w}}} \end{prooftree} \\ \begin{prooftree} \strut \justifiedBy{lThen} \confE{\sys{\sV,c}{\piIf{c=c}{P}{Q}}} \;\piTauCostPre{0}\; \confEP{\sV,c } \end{prooftree} \\ \begin{prooftree} \strut \justifiedBy{lElse} \confE{\sys{\sV,c,d}{\piIf{c=d}{P}{Q}}} \;\piTauCostPre{0}\; \confE{\sys{\sV,c,d }{Q}} \end{prooftree}\\% [1.5em] \begin{prooftree} \strut \justifiedBy{lAll} \confE{\sys{\sV}{\piAll{x}{P}}} \;\piTauCostPre{+1}\; \confE{\sys{\sV,c}{P\subC{c}{x}}} \end{prooftree} \qquad\qquad \begin{prooftree} \strut \justifiedBy{lAllE} \conf{\env}{\sys{\sV}{P}} \;\piRedDecCostPre{\actall}{+1}\; \conf{\env, \envmap{c}{\chantypU{\tVlst}}}{\sys{\sV,c}{P}} \end{prooftree} \\ \begin{prooftree} \strut \justifiedBy{lFree} \confE{\sys{\sV,c}{\piFree{c}{P}}} \;\piTauCostPre{-1}\; \confE{\sys{\sV}{P}} \end{prooftree} \qquad\qquad \begin{prooftree} \strut \justifiedBy{lFreeE} \conf{\env, \envmap{c}{\chantypU{\tV}}}{\sys{\sV,c}{P}} \;\piRedDecCostPre{\actfree{c}}{-1}\; \conf{\env}{\sys{\sV}{P}} \end{prooftree} \\ \end{mathpar} \textbf{Weak (Cost-Accumulating) Transitions}\\ \begin{mathpar} \begin{prooftree} \confESP \piRedDecCostPad{\mu}{k} \conf{\envv}{\sys{\sVV}{Q}} \justifiedBy{wTra} \confESP \piRedWDecCostPad{\mu}{k} \conf{\envv}{\sys{\sVV}{Q}} \end{prooftree} \qquad\qquad \begin{prooftree} \confESP \piRedDecCostPad{\tau}{l} \conf{\env'}{\sys{\sV'}{P}} \piRedWDecCostPad{\mu}{k} \conf{\env''}{\sys{\sVV}{Q}} \justifiedBy{wLeft} \confESP \piRedWDecCostPad{\mu}{(l+k)} \conf{\env''}{\sys{\sVV}{Q}} \end{prooftree} \\ \begin{prooftree} \confESP \piRedWDecCostPad{\mu}{l} \conf{\env'}{\sys{\sV'}{P}} \piRedDecCostPad{\tau}{k} \conf{\env''}{\sys{\sVV}{Q}} \justifiedBy{wRight} \confESP \piRedWDecCostPad{\mu}{(l+k)} 
\conf{\env''}{\sys{\sVV}{Q}} \end{prooftree} \end{mathpar} \end{display} To ensure that only safe interactions are specified, the (pre-)LTS must be able to reason compositionally about resource usage between the process, $P$, and the observer, \env. We therefore imbue our type assumptions from \secref{sec:language} with a \emph{permission semantics}, in the style of \cite{TerauchiAiken08,FraRatSas11}. Under this interpretation, type assumptions constitute \emph{permissions} describing the respective usage of resources. Permissions are woven into the behaviour of configurations giving them an \emph{operational} role: they may either restrict usage or privilege processes to use resources in special ways. In a configuration, the observer and the process each \emph{own} a set of permissions and may \emph{transfer} them to one another during communication. The consistency requirement of a configuration ensures that the guarantees given by permissions owned by the observer are not in conflict with those given by permissions owned by the configuration process, and viceversa. To understand how the pre-LTS deals with permission transfer and compositional resource usage, consider the rule for output, (\rtit{lOut}). Since we employ the type system of \secref{sec:language} to ensure safety, this rule models the typing rule for output (\rtit{tOut}) on the part of the process, and the typing rule for input (\rtit{tIn}) on the part of the observer. Thus, apart from describing the communication of values $\vec{d}$ from the configuration process to the observer on channel $c$, it also captures permission transfer between the two parties, mirroring the type assumption usage in \rtit{tOut} and \rtit{tIn}. More specifically, rule (\rtit{lOut}) employs the operation $\envmap{c}{\chantyp{\vec{\tV}}{\aV-1}}$ of \figref{fig:typingrules} so as to concisely describe the three variants of the output rule: \begin{equation}\label{eq:7:bisim} \begin{split} &\begin{prooftree} \justifiedBy{lOutU} \conf{\env,\emap{c}{\chantypUU{\tVlst}{i+1}}\;}{\;\sys{\sV \;}{\;\piOut{c}{\vec{d}}{P}}} \quad\piRedDecCostPre{\;\actout{c}{\vec{d}}\;}{0}\quad \conf{\env,\emap{c}{\chantypUU{\tVlst}{i}}, \envmap{\vec{d}}{\vec{\tV}}\;}{\;\sys{\sV \;}{\;P}} \end{prooftree}\\[0.5em] &\begin{prooftree} \justifiedBy{lOutA} \conf{\env,\emap{c}{\chantypO{\tVlst}}\qquad}{\;\sys{\sV \;}{\;\piOut{c}{\vec{d}}{P}}} \quad\piRedDecCostPre{\;\actout{c}{\vec{d}}\;}{0}\quad \conf{\env,\phantom{\emap{c}{\chantypUU{\tVlst}{i}},} \envmap{\vec{d}}{\vec{\tV}}\;}{\;\sys{\sV \;}{\;P}} \end{prooftree}\\[0.5em] &\begin{prooftree} \justifiedBy{lOutW} \conf{\env,\emap{c}{\chantypW{\tVlst}}\quad\;\;}{\;\sys{\sV \;}{\;\piOut{c}{\vec{d}}{P}}} \quad\piRedDecCostPre{\;\actout{c}{\vec{d}}\;}{0}\quad \conf{\env,\emap{c}{\chantypW{\tVlst}},\;\; \envmap{\vec{d}}{\vec{\tV}}\;}{\;\sys{\sV \;}{\;P}} \end{prooftree}\\[0.5em] \end{split} \end{equation} The first output rule variant, \rtit{lOutU}, deals with the case where the observer owns a unique-after-$(i\!+\!1)$ permission for channel $c$. \defref{def:configuration} implies that the process in the configuration is well-typed (\wrt some environment) and, since the process is in a position to output on channel $c$, rule \rtit{tOut} must have been used to type it. This typing rule, in turn, states that the type assumptions relating to the values communicated, $\envmap{\vec{d}}{\vec{\tV}}$, must have been owned by the process and consumed by the output operation. 
Dually, since the observer is capable of inputting on $c$, rule \rtit{tIn} must have been used to type it,\footnote{More specifically, \rtit{tInU} of \eqref{eq:8:lang}.} which states that the continuation (after the input) assumes the use of the assumptions $\envmap{\vec{d}}{\vec{\tV}}$. Rule \rtit{lOutU} models these two usages operationally as the \emph{explicit transfer} of the permissions $\envmap{\vec{d}}{\vec{\tV}}$ from the process to the observer. The rule also models the \emph{implicit transfer} of permissions between the observer and the output process. More precisely, \defref{def:configuration} requires that the process is typed \wrt an environment that \emph{does not conflict with} the observer environment, which implies that the process environment must have (necessarily) used an affine permission, $\emap{c}{\chantypO{\tVlst}}$, for outputting on channel $c$.\footnote{This implies that \rtit{tOutA} of \eqref{eq:7:lang} was used when typing the process.} In fact, any other type of permission would conflict with the unique-after-$(i\!+\!1)$ permission for channel $c$ owned by the observer. Moreover, through the guarantee given by the permission used, \emap{c}{\chantypUU{\tVlst}{i+1}}, the observer knows that, after the communication, it is one step closer towards gaining exclusive permission for channel $c$. Rule \rtit{lOutU} models all this as the (implicit) transfer of the affine permission $\emap{c}{\chantypO{\tVlst}}$ from the process to the observer, updating the observer's permission for $c$ to $\chantypUU{\tVlst}{i}$ --- note that two permissions $\emap{c}{\chantypUU{\tVlst}{i+1}},\emap{c}{\chantypO{\tVlst}}$ can be consolidated as $\emap{c}{\chantypUU{\tVlst}{i}}$ using the structural rules \rtit{tJoin} and \rtit{pUnq} of \figref{fig:typingrules}.
The second output rule variant of \eqref{eq:7:bisim}, \rtit{lOutA}, is similar to the first when modelling the explicit transfer of permissions $\envmap{\vec{d}}{\vec{\tV}}$ from the process to the observer. However, it describes a different implicit transfer of permissions, since the observer uses an affine permission to input from the configuration process on channel $c$. The rule caters for two possible subcases. In the first case, the process could have used a unique-after-$(i\!+\!1)$ permission when typed using \rtit{tOut}: this constitutes a dual case to that of rule \rtit{lOutU}, and the rule models the implicit transfer of the affine permission $\emap{c}{\chantypO{\tVlst}}$ in the \emph{opposite} direction, \ie from the observer to the process. In the second case, the process could have used an affine or an unrestricted permission instead, which does not result in any implicit permission transfer, but merely the consumption of affine permissions. Since the environment on the process side is existentially quantified in a configuration, this difference is abstracted away and the two subcases are handled by the same rule variant. Note that, in the extreme case where the observer's affine permission is the only one relating to channel $c$, the observer loses all knowledge of channel $c$.
The explicit permission transfer for \rtit{lOutW} of \eqref{eq:7:bisim} is identical to the other two rule variants. The use of an unrestricted permission for $c$ on the part of the observer, $\emap{c}{\chantypW{\tVlst}}$, implies that the output process could have used either an affine or an unrestricted permission---see \eqref{eq:2:lang}. In either case, there is no implicit permission transfer involved.
Moreover, the observer permission is not consumed since it is unrestricted.
The pre-LTS rule \rtit{lIn} can also be expanded into three rule variants, and models analogous permission transfer between the observer and the input process. Importantly, however, the \emph{explicit} permission transfer described is \emph{in the opposite direction} to that of \rtit{lOut}, namely from the observer to the input process. As in the case of \rtit{lOutA} of \eqref{eq:7:bisim}, the permission transfer from the observer to the input process may result in the observer losing all knowledge relating to the channels communicated, $\vec{d}$.
In order to allow an internal communication step through either \rtit{lCom-L}, or its dual \rtit{lCom-R} (elided), the left process should be considered to be part of the ``observer'' of the right process, and vice versa. However, it is not necessary to be quite so precise; we can follow \cite{Hennessy07} and consider an arbitrary observer instead. More explicitly, the rule states that if we can find observer environments ($\Gamma_1$ and $\Gamma_2$) to induce the respective input and output actions from separate constituent processes making up the system, we can then express these separate interactions as a single synchronous interaction; since this interaction is internal, it is independent of the environment representing the observer in the conclusion, $\Gamma$. See \cite{Hennessy07} for more justification.
In our LTS, both the process (\rtit{lAll}, \rtit{lFree}) and the observer (\rtit{lAllE}, \rtit{lFreeE}) can allocate and deallocate memory. Finally, since the observer is modelled exclusively by the permissions it owns, we must allow the observer to split these permissions when necessary (\rtit{lStr}). The only rules that may alter the observer environment are those corresponding to external actions, \ie \rtit{lIn}, \rtit{lOut}, \rtit{lAllE}, \rtit{lFreeE} and \rtit{lStr}. The remaining axioms in the pre-LTS model the reduction rules from Figure~\ref{fig:reduction-semantics} and should be self-explanatory; note that, as in the reduction semantics, the only actions carrying a cost are those describing allocation and deallocation, where the associated costs are inherited directly from the reduction semantics of \secref{sec:language}.
In \figref{fig:LTS} we also specify weak costed transitions for configurations, based on the transitions of our LTS (rule \rtit{wTra}). As is standard, the relation denotes actions padded by $\tau$-transitions to the left and right. However, it also \emph{accumulates} the costs of the respective transitions into one aggregate cost for the entire weak action (rules \rtit{wLeft} and \rtit{wRight}).
Technically, the pre-LTS is defined over triples $\env, M, P$ rather than configurations $\confESP$, but we can prove that the pre-LTS rules preserve the requirements for such triples to be configurations; see Lemma~\ref{lem:subject-reduction}.
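As a small illustration of how these weak transitions accumulate cost (our own sketch, written directly at the level of LTS transitions and assuming the observer owns some permission $\emap{c}{\chantypA{\tV}}$ for the channel used), consider a process that allocates a channel and immediately outputs it, for some fresh $d \not\in \sV$:
\begin{equation*}
\conf{\env,\emap{c}{\chantypA{\tV}}}{\sys{\sV}{\piAll{x}{\,\piOut{c}{x}{P}}}} \;\piRedDecCost{\;\tau\;}{+1}\; \conf{\env,\emap{c}{\chantypA{\tV}}}{\sys{\sV,d}{\piOut{c}{d}{P\subC{d}{x}}}} \;\piRedDecCost{\;\actout{c}{d}\;}{0}\; \conf{\env,\emap{c}{\chantyp{\tV}{\aV-1}},\emap{d}{\tV}}{\sys{\sV,d}{P\subC{d}{x}}}
\end{equation*}
Rules \rtit{wTra} and \rtit{wLeft} combine these two steps into a single weak action $\piRedWDecCost{\;\actout{c}{d}\;}{+1}$, whose aggregate cost $(+1)+0$ records the allocation performed along the way.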
\begin{lem}[Transition and Structure]\label{lem:transition-structure} \begin{math} \confESP \piRedDecCostPre{\;\;\mu\;\;}{k} \conf{\env'}{\sys{\sV'}{P'}} \text{ and } \\ \tproc{\envv}{\sysSP} \text{ for some consistent } \envv \;\text{ imply that one of the following cases holds:} \end{math}
\begin{description}
\item[If $\mu = \actout{c}{\vec{d}}$] $\sV=\sV'$, $k=0$, $P\piStruct \piOut{c}{\vec{d}}P_1 \piParal P_2$, $P'\piStruct P_1 \piParal P_2$\; and\; \begin{math} \env= (\env'', \emap{c}{\chantypA{\tVlst}}), \\ \env' = (\env'', \emap{c}{\chantyp{\tVlst}{\aV-1}}, \lst{\envmap{d}{\tV}}) \end{math} \;and\; \begin{math} \envv \ensuremath{\mathrel{\prec}} (\envv', \envmap{c}{\chantyp{\tVlst}{\aVV}}, \lst{\envmap{d}{\tV}}),\; \end{math} \begin{math} {\tproc{(\envv', \envmap{c}{\chantyp{\tVlst}{\aVV-1}})}{P' }} \end{math}\\ for some $P_1, P_2, \env'', b, \tVlst$ and $\envv'$.
\item[If $\mu = \actin{c}{\vec{d}}$] $\sV=\sV'$, $k=0$, $P\piStruct \piIn{c}{\vec{x}}P_1 \piParal P_2$, $P'\piStruct P_1\subC{\vec{d}}{\vec{x}} \piParal P_2$ \; and\\ \begin{math} \env = (\env'', \emap{c}{\chantypA{\tVlst}}, \lst{\emap{d}{\tV}}), \env' = (\env'',\emap{c}{\chantyp{\tVlst}{\aV-1}}) \end{math} \; and\; \begin{math} \envv \ensuremath{\mathrel{\prec}} (\envv', \envmap{c}{\chantyp{\tVlst}{\aVV}}), \end{math}\\ \begin{math} {\tproc{(\envv', \envmap{c}{\chantyp{\tVlst}{\aVV-1}}, \lst{\envmap{d}{\tV}})}{P'}} \end{math} \quad for some $P_1, P_2, \env'', b, \tVlst$ and $\envv'$.
\item[If $\mu = \tau$] One of the following three cases holds: \begin{itemize} \item $\sV=\sV'$, $k=0$ \;and\; $\env=\env'$ \;and\; $\tproc{\envv}{P'}$ or; \item $\sV=(\sV',c)$,\; $k=-1$ and $P \piStruct \piFree{c}{P_1}\piParal P_2$, $P'\piStruct P_1\piParal P_2$, $\env=\env'$ and ${\envv\ensuremath{\mathrel{\prec}} \envv',\emap{c}{\chantypU{\tVlst}}}$ \; where\;$\tproc{\envv'}{P'}$ (for some $P_1, P_2, \tVlst$ and $\envv'$) or; \item$\sV'=(\sV,c)$,\; $k=+1$\; and\;$P \piStruct \piAll{x}{P_1}\piParal P_2$, $P'\piStruct P_1\subC{c}{x}\piParal P_2$ \; and\;$\env=\env'$ and $\envv\ensuremath{\mathrel{\prec}} \envv'$ and $\tproc{\envv',\emap{c}{\chantypU{\tVlst}}}{P'}$ (for some $P_1, P_2, \tVlst$ and $\envv'$) \end{itemize}
\item[If $\mu = \actfree{c}$] $\sV=(\sV',c)$, $k=-1$ \; and\;$\env = \env',\emap{c}{\chantypU{\tVlst}}$ \;and\; $P=P'$ for some \tVlst.
\item[If $\mu = \actall$] $\sV'=(\sV,c)$, $k=+1$ \; and\; $\env,\emap{c}{\chantypU{\tVlst}} = \env'$ \;and\; $P=P'$ for some \tVlst.
\item[If $\mu=\actenv$] $\env \ensuremath{\mathrel{\prec}} \env'$, $\sV =\sV'$, $k=0$ and $P=P'$
\end{description} \end{lem}
\begin{proof} By rule induction on $\confESP \piRedDecCostPre{\;\mu\;}{k} \conf{\env'}{\sys{\sV'}{P'}}$. \end{proof}
\begin{lem}[Subject reduction] \label{lem:subject-reduction} If $\confESP$ is a configuration and $\confESP \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\envv}{\sys{\sVV}{Q}}$ then $\conf{\envv}{\sys{\sVV}{Q}}$ is also a configuration. \end{lem}
\begin{proof} We assume that $\dom(\env)\subseteq \sV$ and that there exists $\envv$ such that $\env,\envv$ is consistent and that $\tproc{\envv}{\sysSP}$. The rest of the proof follows from Lemma~\ref{lem:transition-structure} (Transition and Structure), by case analysis of $\mu$. \end{proof}
As a consistency check, we can also show that our LTS semantics is in accordance with the reduction semantics presented in \secref{sec:language}. In particular, $\tau$-transitions correspond to reductions modulo renaming and process structural equivalence.
\begin{lem}[Reduction and Silent Transitions] \label{lem:reduc-and-transitions}\quad \begin{enumerate} \item $\sys{\sV}{P}\piRed_k \sys{\sV'}{P'}$ implies $\confE{\sys{\sV}{P}} \piRedDecCost{\tau}{k} \confE{\sys{\sV'}{P''}}$ for arbitrary $\env$ where $P''\piStruct P'$. \item $\confE{\sys{\sV}{P}} \piRedDecCost{\tau}{k} \conf{\envv}{\sys{\sV'}{P'}}$ implies $(\sys{\sV}{P})\sigma_\env \piRed_k \sys{\sV'}{P'}$ for some $\sigma_\env$. \end{enumerate} \end{lem} \begin{proof} By rule induction on $\sys{\sV}{P}\piRed_k \sys{\sV'}{P'}$ and $\confE{\sys{\sV}{P}} \piRedDecCostPad{\tau}{k} \conf{\envv}{\sys{\sV'}{P'}}$. \end{proof} \begin{exa}\label{ex:buff-trans} Recall the buffer implementation \ensuremath{\text{\rm Buff}}\xspace from \secref{sec:case-study} and the respective external environment \ensuremath{\env_\text{ext}}\xspace defined in \secref{sec:typab-behav-pbuf}. The transition rules of \figref{fig:LTS} allow us to derive the following behaviour for the configuration \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1}{\ensuremath{\text{\rm Buff}}\xspace}} (where $\ctit{in}, \ctit{out}, b, d \in \sV$): \begin{align} \label{eq:1cs} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1}{\ensuremath{\text{\rm Buff}}\xspace}} & \;\piDRedDecCost{\actin{\ctit{in}}{v_1}}{0}\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1}{ \left(\!\begin{array}{l} \piAll{z}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_1}{(v_1,z)}\bigr)} \\ \piParalS\; \piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }}\\ \label{eq:2cs} &\;\piTauTauCost{+1}\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1,c_2}{ \left(\!\begin{array}{l} \bigl(\ensuremath{\text{\rm Frn}}\xspace \piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v_1,c_2)}\bigr) \\ \piParalS\; \piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }}\\ \nonumber &\;\;=\;\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1,c_2}{ \left(\!\begin{array}{l} \piRec{w}{\;\piIn{b}{x}{\;\piIn{\ctit{in}}{y}{\piAll{z}{\;\bigl(w\piParal \piOutA{b}{z} \piParal \piOutA{x}{(y,z)}\bigr)}}}} \\ \piParalS \piOutA{b}{c_2} \piParalS \piOutA{c_1}{(v_1,c_2)} \\ \piParalS\; \piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }}\\ \label{eq:3cs} &\;\piTauTauCost{0}\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1,c_2}{ \left(\!\begin{array}{l} \piIn{b}{x}{\;\piIn{\ctit{in}}{y}{\piAll{z}{\;\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{x}{(y,z)}\bigr)}}} \\ \piParalS \piOutA{b}{c_2} \piParalS \piOutA{c_1}{(v_1,c_2)} \\ \piParalS\; \piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }}\\ \label{eq:5cs} &\;\piTauTauCost{0}\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1,c_2}{ \left(\!\begin{array}{l} \piIn{\ctit{in}}{y}{\piAll{z}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_2}{(y,z)}\bigr)}} \\ \piParalS\; \piOutA{c_1}{(v_1,c_2)} \\ \piParalS\; \piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }}\\ \label{eq:6cs} &\;\piRedWDecCost{\actin{\ctit{in}}{v_2}}{+1}\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1,c_2,c_3}{ 
\left(\!\begin{array}{l} \piIn{\ctit{in}}{y}{\piAll{z}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_3}{(y,z)}\bigr)}} \\ \piParalS\; \piOutA{c_1}{(v_1,c_2)} \piParalS \piOutA{c_2}{(v_2,c_3)} \\ \piParalS\; \piIn{c_1}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }}\\ \label{eq:7cs} & \;\piRedWDecCost{\actout{\ctit{out}}{v_1}}{0}\; \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_1,c_2,c_3}{ \left(\!\begin{array}{l} \piIn{\ctit{in}}{y}{\piAll{z}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{c_3}{(y,z)}\bigr)}} \\ \piParalS\; \piOutA{c_2}{(v_2,c_3)} \\ \piParalS\; \piIn{c_2}{(y,z)}{{\piOut{\ctit{out}}{y}{\bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}} \end{array}\!\right) }} \end{align} Transition \eqref{eq:1cs} describes an input from the user whereas \eqref{eq:2cs} allocates a new internal channel, $c_2$, followed by a recursive process unfolding,~\eqref{eq:3cs}, and the instantiation of the unfolded process with the newly allocated channel $c_2$, \eqref{eq:5cs}, through a communication on channel $b$. The weak transition \eqref{eq:6cs} is an aggregation of 4 analogous transitions to the ones just presented, this time relating to a second input of value $v_2$. This yields an internal output chain of length 2, \ie $ \piOutA{c_1}{(v_1,c_2)} \piParalS \piOutA{c_2}{(v_2,c_3)} $. Finally, \eqref{eq:7cs} is an aggregation of 4 transitions relating to the consumption of the first item in the chain, $\piOutA{c_1}{(v_1,c_2)}$, the subsequent output of $v_1$ on channel \ctit{out}, and the unfolding and instantiation of the recursive process \ensuremath{\text{\rm Bck}}\xspace with $c_2$ --- see definition for \ensuremath{\text{\rm Bck}}\xspace. \end{exa} \subsection{Costed Bisimulation} \label{sec:Bisimulation} We define a cost-based preorder over systems as a \emph{typed relation}, \cf Definition~\ref{def:typed-relation}, ordering systems that exhibit the same external behaviour at a less-than-or-equal-to cost. We require the preorder to consider client $\ptit{C}_1$ as more efficient than $\ptit{C}_0$ \wrt an appropriate resource environment \sV\ and observers characterised by the type environment stated in \eqref{eq:2a} but also that, \wrt the same resource and observer environments, client $\ptit{C}_3$ of \eqref{eq:clients-complicated} is more efficient than $\ptit{C}_1$. This latter ordering is harder to establish since client $\ptit{C}_1$ is at times \emph{temporarily} more efficient than $\ptit{C}_3$. In order to handle this aspect we define our preorder as an \emph{amortized} bisimulation \cite{Kiehn05}. Amortized bisimulation uses a \emph{credit} $n$ to compare a system $\sys{M}{P}$ with a less efficient system $\sys{N}{Q}$ while allowing $\sys{M}{P}$ to do a more expensive action than $\sys{N}{Q}$, as long as the credit can make up for the difference. Conversely, whenever $\sys{M}{P}$ does a cheaper action than $\sys{N}{Q}$, then the difference gets \emph{added} to the credit.\footnote{Stated otherwise, $\sys{M}{P}$ can do a more expensive action than $\sys{N}{Q}$ now, as long as it makes up for it later.} Crucially, however, the amortisation credit is \emph{never allowed} to become \emph{negative} \ie $n \in \Nats$. 
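To fix intuitions about this bookkeeping, the sketch below (ours, purely illustrative and entirely outside the formal development) replays the credit arithmetic: every matched exchange is abstracted to a pair of aggregate costs $(k,l)$, namely the cost of the action performed by the candidate efficient system and that of the matching weak response, and the credit is updated to $n+l-k$.
\begin{verbatim}
package main

import "fmt"

// amortise replays the credit update n + l - k for a sequence of matched
// exchanges, failing as soon as the credit would become negative.
func amortise(n int, moves [][2]int) (int, bool) {
    for _, m := range moves {
        k, l := m[0], m[1]
        n = n + l - k
        if n < 0 {
            return n, false // the credit is never allowed to go negative
        }
    }
    return n, true
}

func main() {
    // One allocation (k=1) matched by two allocations (l=2): credit grows.
    fmt.Println(amortise(0, [][2]int{{1, 2}}))
    // The converse matching (k=2 against l=1) exhausts a zero credit at once.
    fmt.Println(amortise(0, [][2]int{{2, 1}}))
}
\end{verbatim}
The two calls mirror the comparison carried out below: matching a single channel allocation against two grows the credit, whereas the converse direction eventually exhausts any finite credit.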
In general, we refine Definition~\ref{def:typed-relation} to amortized typed relations with the following structure: \begin{defi}[Amortised Typed Relation]\label{def:typed-relation-amm} An amortized type-indexed relation $\relR$ relates systems under an observer characterized by a context $\env$, with credit $n$ ($n \in \Nats$); we write \begin{equation*} \env \vDash \sys{M}{P} \; \relR^n \; \sys{N}{Q} \end{equation*} if $\relR^n$ relates $\confE{\sys{M}{P}}$ and $\confE{\sys{N}{Q}}$, and both $\confE{\sys{M}{P}}$ and $\confE{\sys{N}{Q}}$ are configurations. \end{defi} \begin{defi}[Amortised Typed Bisimulation] \label{def:amortized-typed-bisim} An amortized type-indexed relation over processes \relR\ is a bisimulation at $\env$ with credit $n$ if, whenever ${\env \vDash (\sys{M}{P}) \,\relR^n\, (\sys{N}{Q})}$, \begin{itemize} \item If $\confE{\sysS{P}} \piRedDecCostPad{\mu}{k} \conf{\env'}{\sys{\sV'}{P'}}$ then there exist $\sVV'$ and $Q'$ such that \\ $\confE{\sys{\sVV}{Q}} \piRedWDecCostPad{\hat{\mu}}{l} \conf{\env'}{\sys{N'}{Q'}}$ where $\env' \vDash (\sys{\sV'}{P'}) \,\relR^{n+l-k}\, (\sys{\sVV'}{Q'})$ \item If $\confE{\sys{\sVV}{Q}} \piRedDecCostPad{\mu}{l} \conf{\env'}{\sys{\sVV'}{Q'}}$ then there exist $\sV'$ and $P'$ such that \\ $\confE{\sys{\sV}{P}} \piRedWDecCostPad{\hat{\mu}}{k} \conf{\env'}{\sys{\sV'}{P'}}$ where $\env' \vDash (\sys{\sV'}{P'}) \,\relR^{n+l-k}\, (\sys{\sVV'}{Q'})$ \end{itemize} where $\hat{\mu}$ is the empty string if $\mu = \tau$ and $\mu$ otherwise. Bisimilarity at $\env$ with credit $n$, denoted $\env \vDash \sys{M}{P} \,\piCostBisAmm{n} \sys{N}{Q}$, is the largest amortized typed bisimulation at $\env$ with credit $n$. We sometimes existentially quantify over the credit and write ${\env \vDash \sys{M}{P} \,\piCostBis\, \sys{N}{Q}}$. We write $\env \vDash \sys{M}{P} \piCostBisEq \sys{N}{Q}$ to denote the kernel of the preorder (\ie whenever we have both $\env \vDash \sys{M}{P} \,\piCostBis\, \sys{N}{Q}$ and $\env \vDash \sys{N}{Q} \,\piCostBis\, \sys{M}{P}$), and write $\env \vDash \sys{M}{P} \piCostBiss \sys{N}{Q}$ whenever $\env \vDash \sys{M}{P} \,\piCostBis\, \sys{N}{Q}$ but $\env \vDash \sys{N}{Q} \,\not\!\piCostBis\, \sys{M}{P}$. \end{defi} \begin{exa}[Assessing Client Efficiency] \label{eg:Clients-Bisim} For the (observer) type environment \begin{equation}\label{eq:env-sys-example:bis} \env_1 \deftxt \envmap{\ctit{srv}_1}{\chantypW{\chantypO{\tV_1}}}, \; \envmap{\ctit{srv}_2}{\chantypW{\chantypO{\tV_2}}}, \;\envmap{c}{\chantypW{\tV_1,\tV_2}} \end{equation} and clients $\ptit{C}_0$ and $\ptit{C}_1$ defined earlier in \eqref{eq:clients}, we can show that $\env_1 \vDash (\sysS{\ptit{C}_1}) \; \piCostBisAmm \; (\sysS{\ptit{C}_0})$ by constructing the witness bisimulation (family of) relation(s) \relR\ for $\env_1 \vDash (\sysS{\ptit{C}_1}) \; \piCostBisAmm{0} \; (\sysS{\ptit{C}_0})$ stated below:\footnote{In families of relations ranging over systems indexed by type environments and amortisation credits, such as \relR, we represent $\env \vDash (\sysS{P}) \; \piCostBisAmm{n} \; (\sys{\envv}{Q})$ as the quadruple $\langle \env, n, (\sysS{P}), (\sys{\envv}{Q})\rangle$.} \begin{equation*} \relR \!\deftxt\!\left\{ \begin{array}{l|@{\;}l} \langle\env,\; n,\; \sys{\sV'}{\ptit{C}_1}, \;\sys{\sVV'}{\ptit{C}_0}\rangle & n \geq 0 \\[0.1em] \left\langle\!\! 
\begin{array}{l} \env, n,\sys{\sV'}{\piAll{x}{\;\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_1}}}}}\\ \qquad\quad , \sys{\sVV'}{\piAll{x_1}{\piAll{x_2}\;\piOut{\ctit{srv}_1}{x_1}\,\piIn{x_1}{y}{\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_0}}}}} \end{array} \!\!\right\rangle & d \!\not\in\! \dom(\env) \\[0.8em] \left\langle\!\! \begin{array}{l} \env,\; n, \;\sys{(\sV',d)}{\piOut{\ctit{srv}_1}{d}\,\piIn{d}{y}{\piOut{\ctit{srv}_2}{d}\,\piIn{d}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_1}}}}\\ \qquad\quad , \sys{(\sVV',d')}{\piAll{x_2}\;\piOut{\ctit{srv}_1}{d'}\,\piIn{d'}{y}{\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_0}}}} \end{array} \!\!\right\rangle &d' \!\not\in\! \dom(\env) \\[0.8em] \left\langle\!\! \begin{array}{l} \env,\; n+1, \;\sys{(\sV',d)}{\piOut{\ctit{srv}_1}{d}\,\piIn{d}{y}{\piOut{\ctit{srv}_2}{d}\,\piIn{d}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_1}}}}\\ \qquad\quad , \sys{(\sVV',d',d'')}{\piOut{\ctit{srv}_1}{d'}\,\piIn{d'}{y}{\piOut{\ctit{srv}_2}{d''}\,\piIn{d''}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_0}}}} \end{array} \right\rangle &d'' \!\not\in\! \dom(\env) \\[0.8em] \left\langle\!\! \begin{array}{l} (\env,\emap{d}{\chantypO{\tV_1}}),\; n+1, \;\sys{(\sV',d)}{\piIn{d}{y}{\piOut{\ctit{srv}_2}{d}\,\piIn{d}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_1}}}}\\ \qquad\qquad\qquad , \sys{(\sVV',d,d'')}{\piIn{d}{y}{\piOut{\ctit{srv}_2}{d''}\,\piIn{d''}{z}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_0}}}} \end{array} \right\rangle & \sV' \subseteq \sVV' \\[0.8em] \left\langle\!\! \begin{array}{l} \env,\; n+1, \;\sys{(\sV',d)}{\piOut{\ctit{srv}_2}{d}\,\piIn{d}{z}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_1}}}\\ \qquad\quad , \sys{(\sVV',d,d'')}{\piOut{\ctit{srv}_2}{d''}\,\piIn{d''}{z}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_0}}} \end{array} \right\rangle & \dom(\env) \subseteq \sV'\\[0.4em] \langle (\env,\emap{d}{\chantypO{\tV_2}}),\; n+1, \;\sys{(\sV',d)}{\piIn{d}{z}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_1}}},\; \sys{(\sVV',d',d)}{\piIn{d}{z}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_0}}} \rangle & \\ \langle \env,\; n+1, \;\sys{(\sV',d)}{\piOut{\ctit{c}}{(v,v')}{\ptit{C}_1}},\; \sys{(\sVV',d',d)}{\piOut{\ctit{c}}{(v,v')}{\ptit{C}_0}} \rangle\\ \end{array} \right\} \end{equation*} It is not hard to see that \relR\ contains the quadruple $\langle \env_1, 0, \sysS{\ptit{C}_1}, \sysS{\ptit{C}_0}\rangle$. One can also show that it is closed \wrt the transfer property of Definition~\ref{def:amortized-typed-bisim}. The key moves are: \begin{itemize} \item a single channel allocation by $\ptit{C}_1$ is matched by two channel allocations by $\ptit{C}_0$ --- from the second up to the fourth quadruple in the definition of \relR. Since channel allocations carry a positive cost, the amortisation credit increases from $n$ to $n+2-1$, \ie $n+1$, but this still yields a quadruple that is in the relation. One thing to note is that the first channel allocated by both systems is allowed to be different, \eg $d$ and $d'$, as long as it is not allocated already. \item Even though the internal channels allocated may be different, rule \rtit{lRen} allows us to rename the \resp names of the allocated channels (not known to the observer) so as match the channels communicated on $\ctit{srv}_1$ by the other system (fourth and fifth quadruples). Since these channels are not known to the observer, \ie they are not in $\dom(\env)$, they all amount to \emph{fresh} names, akin to scope extrusion \cite{Milner99,Hennessy07}. 
\item Communicating on the previously communicated channel on $\ctit{srv}_1$ consumes all of the observer's permissions for that channel (fifth quadruple), which allows rule \rtit{lRen} to be applied again so as to match the channels communicated on $\ctit{srv}_2$ (sixth quadruple). \end{itemize} We cannot however prove that $\env_1 \vDash (\sysS{\ptit{C}_0}) \; \piCostBisAmm{n} \; (\sysS{\ptit{C}_1})$ for any $n$ because we would need an \emph{infinite} amortisation credit to account for additional cost incurred by $\ptit{C}_0$ when it performs the channel extra allocation at every iteration; recall that this credit cannot become negative, and thus no finite credit is large enough to cater for all the additional cost incurred by $\ptit{C}_0$ over sufficiently large transition sequences. Similarly, from \eqref{eq:clients}, we can show that $\env_1 \vDash (\sysS{\ptit{C}_2}) \; \piCostBiss \; (\sysS{\ptit{C}_1})$ but also, from \eqref{eq:clients-complicated}, that ${\env_1 \vDash (\sysS{\ptit{C}_3}) \; \piCostBiss \; (\sysS{\ptit{C}_1})}$. In particular, we can show $\env_1 \vDash (\sysS{\ptit{C}_3}) \; \piCostBis \; (\sysS{\ptit{C}_1})$ even though $\sysS{\ptit{C}_1}$ is temporarily more efficient than $\sysS{\ptit{C}_3}$, \ie during the course of the first iteration. Our framework handles this through the use of the amortisation credit whereby, in this case, it suffices to use a credit of value $1$ and show $\env_1 \vDash (\sysS{\ptit{C}_3}) \; \piCostBisAmm{1} \; (\sysS{\ptit{C}_1})$; we leave the details to the interested reader. Using an amortisation credit of $1$ we can also show $\env_1 \vDash (\sysS{\ptit{C}_3}) \; \piCostBisAmm{1} \; (\sysS{\ptit{C}_2})$ through the bisimulation family-of-relations $\relR'$ below --- it is easy to check that it observes the transfer property of Definition~\ref{def:amortized-typed-bisim}; by constructing a similar relation, one can also show that $\env_1 \vDash (\sysS{\ptit{C}_2}) \; \piCostBisAmm{0} \; (\sysS{\ptit{C}_3})$ which implies that $\env_1 \vDash (\sysS{\ptit{C}_2}) \; \piCostBisEq \; (\sysS{\ptit{C}_3})$. We just note that in $\relR'$, the amortisation credit $n$ can be capped $0 \leq n \leq 1$ and revisit this point again in Section~\ref{sec:bisim-results}. \begin{equation*} \relR' \deftxt\left\{ \begin{array}{l|@{\;}l} \langle\env,\; 1,\; \sysS{\ptit{C}_3}, \;\sys{\sV}{\ptit{C}_2}\rangle \\[0.1em] \left\langle\!\! \begin{array}{l} \env,\; 1, \; \sysS{ \left(\begin{array}{l} \piAll{x_1}{\piAll{x_2}\;\piOut{\ctit{srv}_1}{x_1}\,\piIn{x_1}{y}{}}\\ \quad\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\piFree{x_1}{\piFree{x_2}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_3}}}} \end{array}\right) },\\ \qquad\quad \sys{\sV}{\piAll{x}{\;\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\piFree{x}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_2}}}}}} \end{array} \right\rangle & \\[0.8em] \left\langle\!\! \begin{array}{l} \env,\; 1, \; \sys{(\sV,d)}{ \left(\begin{array}{l} \piAll{x_2}\;\piOut{\ctit{srv}_1}{d}\,\piIn{d}{y}{}\\ \quad\piOut{\ctit{srv}_2}{x_2}\,\piIn{x_2}{z}{\piFree{d}{\piFree{x_2}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_3}}}} \end{array}\right) },\\ \qquad\quad \sys{(\sV,d')}{{\;\piOut{\ctit{srv}_1}{d'}\,\piIn{d'}{y}{\piOut{\ctit{srv}_2}{d'}\,\piIn{d'}{z}{\piFree{d'}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_2}}}}}} \end{array} \right\rangle &d \!\not\in\! \dom(\env) \\[0.8em] \left\langle\!\! 
\begin{array}{l} \env,\; 0, \; \sys{(\sV,d,d'')}{ \left(\begin{array}{l} \piOut{\ctit{srv}_1}{d}\,\piIn{d}{y}{\piOut{\ctit{srv}_2}{d''}\,\piIn{d''}{z}{}}\\ \quad\piFree{d}{\piFree{d''}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_3}}} \end{array}\right) },\\ \qquad\quad \sys{(\sV,d')}{{\piOut{\ctit{srv}_1}{d'}\,\piIn{d'}{y}{\piOut{\ctit{srv}_2}{d'}\,\piIn{d'}{z}{\piFree{d'}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_2}}}}}} \end{array} \right\rangle &d' \!\not\in\! \dom(\env)\\[0.8em] \left\langle\!\! \begin{array}{l} (\env,\emap{d}{\chantypO{\tV_1}}),\; 0, \; \sys{(\sV,d,d'')}{ \left(\begin{array}{l} \piIn{d}{y}{\piOut{\ctit{srv}_2}{d''}\,\piIn{d''}{z}{}}\\ \quad\piFree{d}{\piFree{d''}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_3}}} \end{array}\right) },\\ \qquad\quad \sys{(\sV,d)}{{\piIn{d}{y}{\piOut{\ctit{srv}_2}{d}\,\piIn{d}{z}{\piFree{d}{\piOut{\ctit{c}}{(y,z)}{\ptit{C}_2}}}}}} \end{array} \right\rangle &d'' \!\not\in\! \dom(\env) \\[0.8em] \left\langle\!\! \begin{array}{l} \env,\; 0, \; \sys{(\sV,d,d'')}{ \piOut{\ctit{srv}_2}{d''}\,\piIn{d''}{z}{}\piFree{d}{\piFree{d''}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_3}}} },\\ \qquad\quad \sys{(\sV,d)}{\piOut{\ctit{srv}_2}{d}\,\piIn{d}{z}{\piFree{d}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_2}}}} \end{array} \right\rangle & \\[0.8em] \left\langle\!\! \begin{array}{l} (\env,\emap{d'}{\chantypO{\tV_2}}),\; 0, \; \sys{(\sV,d,d')}{ \piIn{d'}{z}{}\piFree{d}{\piFree{d'}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_3}}} },\\ \qquad\quad \sys{(\sV,d')}{\piIn{d'}{z}{\piFree{d'}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_2}}}} \end{array} \right\rangle & \dom(\env) \subseteq \sV\\[0.8em] \left\langle\!\! \begin{array}{l} \env,\; 0, \; \sys{(\sV,d,d')}{ \piFree{d}{\piFree{d'}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_3}}} },\\ \qquad\qquad \sys{(\sV,d')}{{\piFree{d'}{\piOut{\ctit{c}}{(v,v')}{\ptit{C}_2}}}} \end{array} \right\rangle & \\ \left\langle\!\! \begin{array}{l} \env,\; 0, \; \sys{(\sV,d')}{ \piFree{d'}{\piOut{\ctit{c}}{(v,z)}{\ptit{C}_3}} },\; \sys{\sV}{{\piOut{\ctit{c}}{(v,v')}{\ptit{C}_2}}} \end{array} \right\rangle & \\ \left\langle\!\! \begin{array}{l} \env,\; 1, \; \sys{\sV}{ \piOut{\ctit{c}}{(v,z)}{\ptit{C}_3} },\; \sys{\sV}{{\piOut{\ctit{c}}{(v,v')}{\ptit{C}_2}}} \end{array} \right\rangle \end{array} \right\} \end{equation*} $\Box$ \end{exa} \subsection{Alternatives} \label{sec:alternatives} The cost model we adhere to in Section~\ref{sec:cost-bisim} is not the only plausible one, but is intended to follow that described by costed reductions of Section~\ref{sec:language}. There may however be other valid alternatives, some of which can be easily accommodated through minor tweaking to our existing framework. For instance, an alternative cost model may focus on assessing the runtime execution of programs, whereby operations that access memory such as \piAll{x}{P} and \piFree{c}{P} have a runtime cost that far exceeds that of other operations. We can model this by considering an LTS that assigns a cost of $1$ to both of these operations, which can be attained as a derived LTS from our existing LTS of Section~\ref{sec:LTS} through the rule \begin{equation*} \begin{prooftree} \confESP \;\piRedDecCost{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \justifiedBy{lDer1} \confESP \;\piRedDecCostPost{\;\mu\;}{|k|}\; \conf{\env'}{\sys{\sV'}{P'}} \end{prooftree} \end{equation*} where $|k|$ returns the absolute value of an integer. \defref{def:amortized-typed-bisim} extends in straightforward fashion to work with the derived costed LTS $\piRedDecCostPost{\;\mu\;}{k}$. 
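Before drawing that comparison, a quick per-iteration tally for the clients of \eqref{eq:clients} under this derived cost model may help (a back-of-the-envelope count: $\ptit{C}_1$ allocates one channel per iteration and never deallocates it, whereas $\ptit{C}_2$ allocates and then deallocates one channel per iteration):
\begin{equation*}
\ptit{C}_1:\quad |{+1}| \;=\; 1 \qquad\qquad\qquad \ptit{C}_2:\quad |{+1}| + |{-1}| \;=\; 2
\end{equation*}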
This new preorder would allow us to conclude ${\env_1 \vDash (\sysS{\ptit{C}_1}) \; \piCostBis \; (\sysS{\ptit{C}_2})}$ because, according to the new cost model, for every server-interaction iteration, client $\ptit{C}_1$ performs fewer costly memory operations than $\ptit{C}_2$.
Another cost model may require us to refine our existing preorder. For instance, consider another client $\ptit{C}_4$, defined below, that creates a single channel and keeps on reusing it for all iterations:
\begin{equation*} \ptit{C}_4 \deftri \piAll{x}{\;\piRecX{\,\piOut{\ctit{srv}_1}{x}\,\piIn{x}{y}{\;\piOut{\ctit{srv}_2}{x}\,\piIn{x}{z}{\piOut{\ctit{ret}}{(y,z)}{\;w}}}}} \end{equation*}
At present, we are able to equate this client with $\ptit{C}_2$ and $\ptit{C}_3$ from \eqref{eq:clients} and \eqref{eq:clients-complicated} \resp, on the basis that none of these clients carries any memory leaks:
\begin{equation*} \env_1 \vDash (\sysS{\ptit{C}_4}) \; \piCostBisEq \; (\sysS{\ptit{C}_3}) \; \piCostBisEq \; (\sysS{\ptit{C}_2}) \end{equation*}
However, we may want a finer preorder where $\ptit{C}_4$ is considered to be (strictly) more efficient than $\ptit{C}_2$, which is in turn more efficient than $\ptit{C}_3$. The underlying reasoning for this would be that $\ptit{C}_4$ uses the fewest expensive operations; by contrast, $\ptit{C}_2$ keeps on allocating (and deallocating) a new channel for each iteration, and $\ptit{C}_3$ allocates (and deallocates) two new channels for every iteration. We can characterise this preorder as follows. First, we generate the derived costed LTS using the rule \rtit{lDer2} below --- $\lfloor k \rfloor$ maps all negative integers to $0$, leaving positive integers unaltered.
\begin{equation*} \begin{prooftree} \confESP \;\piRedDecCost{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \justifiedBy{lDer2} \confESP \;\piRedDecCostPost{\;\mu\;}{\lfloor k \rfloor }\; \conf{\env'}{\sys{\sV'}{P'}} \end{prooftree} \end{equation*}
Then, after adapting \defref{def:amortized-typed-bisim} to this derived LTS, denoting such a bisimulation relation as \piCostBisTwo, we can define the refined preorder, denoted as \piCostBisTre, as follows:
\begin{equation*} \env \vDash \sysSP \;\piCostBisTre\; \sys{\sVV}{Q} \quad\deftxt\quad \begin{cases} \env \vDash \sysSP\, \piCostBis\, \sys{\sVV}{Q} \text{ and } \\ \env \vDash \sys{\sVV}{Q} \,\piCostBis\, \sysSP \text{ implies }\env \vDash \sysSP\, \piCostBisTwo\, \sys{\sVV}{Q} \end{cases} \end{equation*}
The new refined preorder $\piCostBisTre$ above requires that \sysSP is at least as efficient as \sys{\sVV}{Q} when it comes to memory leaks, \ie \piCostBis, and moreover, whenever they are equally efficient \wrt these leaks, \sysSP must also be at least as efficient \wrt memory allocations, \ie \piCostBisTwo.
\subsection{Properties of \,\texorpdfstring{\piCostBis}{piCostBis}}
\label{sec:bisim-results}
We show that our bisimulation relation of Definition~\ref{def:amortized-typed-bisim} observes a number of properties that are useful when reasoning about resource efficiency; see Example~\ref{eg:Clients-Bisim-continued} below. Lemmas~\ref{lem:reflexivity} and \ref{lem:transitivity} prove that the relation is in fact a preorder, whereas Lemma~\ref{lem:symmetry-bound} outlines conditions where symmetry can be recovered. Finally, Theorem~\ref{thm:costed-bisim-compositionality} shows that this preorder is preserved under (valid) context; this is the main result of the section.
First, we show that $\piCostBis$ is a preorder following Lemma~\ref{lem:reflexivity} (where $\sigma_\env$ would be the identity) and Lemma~\ref{lem:transitivity}.
\begin{lem}[Reflexivity up to Renaming]\label{lem:reflexivity} Whenever the triple \confESP\ is a configuration, then ${\env \vDash (\sysSP)\sigma_\env \piCostBisEq \sysSP}$. \end{lem}
\begin{proof} By coinduction, by showing that the family of relations \begin{displaymath} \sset{\langle\env,0,(\sysSP)\sigma_\env, \sysSP \rangle \;|\; \confESP \text{ is a configuration} } \end{displaymath} is a bisimulation. \end{proof}
\begin{lem}[Transitivity]\label{lem:transitivity} Whenever $\env \vDash \sysSP \,\piCostBis\, \sys{\sV'}{P'}$ and $\env \vDash \sys{\sV'}{P'} \,\piCostBis\, \sys{\sV''}{P''}$ then ${\env \vDash \sysSP \,\piCostBis\, \sys{\sV''}{P''}}$. \end{lem}
\begin{proof} $\env \vDash \sysSP \,\piCostBis\, \sys{\sV'}{P'}$ implies that there exists some $n\geq 0$ and a corresponding bisimulation relation justifying $\env \vDash \sysSP \,\piCostBisAmm{n}\, \sys{\sV'}{P'}$. The same applies for $\env \vDash \sys{\sV'}{P'} \,\piCostBis\, \sys{\sV''}{P''}$ and some $m\geq 0$. From these two relations, one can construct a corresponding bisimulation justifying $\env \vDash \sysSP \,\piCostBisAmm{{n+m}}\, \sys{\sV''}{P''}$. \end{proof}
\begin{cor}[Preorder]\label{cor:preorder} $\piCostBis$ is a preorder. \end{cor}
\begin{proof} Follows from Lemma~\ref{lem:reflexivity} (for the special case where $\sigma_\env$ is the identity) and Lemma~\ref{lem:transitivity}. \end{proof}
We can define a restricted form of amortised typed bisimulation, in analogous fashion to Definition~\ref{def:amortized-typed-bisim}, whereby the credit is \emph{capped at some upper bound}, \ie some natural number $m$. We refer to such relations as \emph{Bounded Amortised Typed-Bisimulations} and write $$\env \vDash^m \sysSP \,\piCostBisAmm{n}\, \sys{\sVV}{Q}$$ to denote that \confESP\ and \confE{\sys{\sVV}{Q}} are related by some amortised type-indexed bisimulation at index \env and credit $n$, and where every credit in this relation is less than or equal to $m$; whenever the precise credit $n$ is not important, we elide it and simply write $\env \vDash^m \sysSP \,\piCostBis\, \sys{\sVV}{Q}$. We can show that bounded amortised typed-bisimulations are symmetric.
\begin{lem}[Symmetry]\label{lem:symmetry-bound} $\env \vDash^m \sysSP \,\piCostBis\, \sys{\sVV}{Q}$ implies $\env \vDash^m \sys{\sVV}{Q} \,\piCostBis\, \sysSP$. \end{lem}
\begin{proof} If \relR\ is the bounded amortised typed relation justifying $\env \vDash^m \sysSP \,\piCostBis\, \sys{\sVV}{Q}$, we define the amortised typed relation \begin{displaymath} \relR_\text{sym} = \sset{ \langle \env, (m-n), \sys{\sVV}{Q}, \sysSP \rangle \;|\; \langle \env, n, \sysSP, \sys{\sVV}{Q} \rangle \in \relR } \end{displaymath} and show that it is a bounded amortised typed bisimulation as well. Consider an arbitrary pair of configurations $\env \vDash \sys{\sVV}{Q} \,\relR_\text{sym}^{\,m-n}\, \sysSP$: \begin{itemize} \item Assume $\confE{\sys{\sVV}{Q}} \piRedDecCost{\mu}{l} \conf{\env'}{\sys{\sVV'}{Q'}}$. From the definition of $\relR_\text{sym}$, it must be the case that $ \langle \env, n, \sysSP, \sys{\sVV}{Q} \rangle \in \relR$. Since \relR\ is a bounded amortised typed bisimulation, we know that $\confE{\sys{\sV}{P}} \piRedWDecCost{\hat{\mu}}{k} \conf{\env'}{\sys{\sV'}{P'}}$ for some $k$, where $ \langle \env', n+l-k, \sys{\sV'}{P'}, \sys{\sVV'}{Q'} \rangle \in \relR$.
We however need to show that $ \langle \env', ((m - n) +k-l), \sys{\sVV'}{Q'}, \sys{\sV'}{P'} \rangle \in \relR_\text{sym}$, which follows from the definition of $\relR_\text{sym}$ and the fact that $\bigl(m - (n + l - k)\bigr) = (m-n) + k - l$. What is left to show is that $\relR_\text{sym}$ is an amortised typed bisimulation bounded by $m$, \ie we need to show that $0 \leq (m-n) + k - l \leq m$. Since \relR\ is an $m$-bounded amortised typed bisimulation, we know that $ 0 \leq (n + l -k) \leq m$, from which we can derive $- m \leq -(n + l -k) \leq 0$ and, by adding $m$ throughout, we obtain $0 \leq \bigl( m -(n + l -k) = (m-n) + k - l\bigr) \leq m $ as required. \item The dual case for $\confE{\sys{\sV}{P}} \piRedDecCost{\mu}{l} \conf{\env'}{\sys{\sV'}{P'}}$ is analogous. \qedhere \end{itemize} \end{proof} \emph{Contextuality} is an important property for any behavioural relation. In our case, this means that two systems \sysSP and \sys{\sVV}{Q} related by \piCostBis under \env remain related when extended with an additional process, $R$, whenever this process runs safely over the respective resource environments \sV\ and \sVV, and observes the type restrictions and guarantees assumed by \env (and dually, those of the respective existentially-quantified type environments for \sysSP and \sys{\sVV}{Q}). Following Definition~\ref{def:configuration}, for these conditions to hold, contextuality requires $R$ to typecheck \wrt a sub-environment of \env, say $\env_1$ where $\env = \env_1,\env_2$, and correspondingly strengthens the relation of \sysS{P\piParal R} and \sys{\sVV}{Q\piParal R} in \piCostBis under the remaining sub-environment, $\env_2$. Stated otherwise, contextuality requires the transfer of the respective permissions associated with the observer sub-process $R$ from the observer environment \env; this is crucial in order to preserve consistency, thus safety, in the respective configurations. The formulation of Theorem~\ref{thm:costed-bisim-compositionality}, proving contextuality for \piCostBis, follows this reasoning. It relies on a list of lemmas outlined below. \begin{lem}[Weakening]\label{lem:weakening} If $\confESP \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}}$ then $\conf{(\env,\envv)}{\sysSP} \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{(\env',\envv)}{\sys{\sV'}{P'}}$. (These may or may not be configurations.) \end{lem} \begin{proof} By rule induction on $ \confESP \piRedDecCostPre{\;\mu\;}{k} \conf{\env'}{\sys{\sV'}{P'}}$. Note that, in the case of $\actall$, the action can still be performed. \end{proof} \begin{lem}[Strengthening]\label{lem:strenghtening} If $\conf{(\env,\envv)}{\sysSP}\piRedDecCostPre{\;\mu\;}{k}\conf{(\env',\envv)}{\sys{\sV'}{P'}}$ then $\conf{\env}{\sysSP}\piRedDecCostPre{\;\mu\;}{k}\conf{\env'}{\sys{\sV'}{P'}}$. \end{lem} \begin{proof} By rule induction on $\conf{(\env,\envv)}{\sysSP}\piRedDecCostPre{\;\mu\;}{k}\conf{(\env',\envv)}{\sys{\sV'}{P'}}$. Note that strengthening is restricted to the part of the environment that remains unchanged ($\envv$ is the same on the left and right hand side) --- otherwise the property does not hold for actions \actout{c}{\vec{d}} and \actin{c}{\vec{d}}. \end{proof} \begin{lem}\label{prop:consistency-env-struct} If $\env,\envv$ is consistent and $\envv\ensuremath{\mathrel{\prec}} \envv'$ then $\env,\envv'$ is consistent and $\env,\envv\ensuremath{\mathrel{\prec}} \env,\envv'$. \end{lem} \begin{proof} As in \cite{EFH:uniqueness:journal:12}. 
\end{proof} \begin{lem}[Typing Preserved by \piStruct]\label{prop:steq-typing} $\env\vdash P$ and $P \piStruct Q$ implies $\env\vdash Q$. \end{lem} \begin{proof} As in \cite{EFH:uniqueness:journal:12}. \end{proof} \begin{lem}[Environment Structural Manipulation Preserves Bisimulation]\label{prop:env-struct-rules-bisim} \begin{displaymath} \env \vDash S\;\piCostBisAmm{n}\; T \text{ and }\env \ensuremath{\mathrel{\prec}} \env' \text{ implies } \env' \vDash S\;\piCostBisAmm{n}\; T \end{displaymath} \end{lem} \begin{proof} By coinduction. We define the quaternary relation $$\sset{\langle \env',n,S,T \rangle | \; \env \vDash S\;\piCostBisAmm{n}\; T \text{ and }\env \ensuremath{\mathrel{\prec}} \env' }$$ and show that it observes the transfer property of \defref{def:amortized-typed-bisim}. \end{proof} \begin{lem}[Bisimulation and Structural Equivalence]\label{lem:bisim-struct-equiv} \begin{displaymath} P\piStruct Q \text{ and }\confE{\sys{\sV}{P}} \piRedDecCostPad{\mu}{k} \conf{\envv}{\sys{\sV'}{P'}} \text{ implies } \confE{\sys{\sV}{Q}} \piRedDecCostPad{\mu}{k} \conf{\envv}{\sys{\sV'}{Q'}} \text{ and } P'\piStruct Q' \end{displaymath} \end{lem} \begin{proof} By rule induction on $P\piStruct Q$ and then a case analysis of the rules permitting $\confE{\sys{\sV}{P}} \piRedDecCostPad{\mu}{k} \conf{\envv}{\sys{\sV'}{P'}}$. \end{proof} \begin{cor}[Structural Equivalence and Bisimilarity]\label{cor:Struct-eq-implies-bisim} \begin{math} P\piStruct Q \text{ implies } \env \vDash \sys{\sV}{P} \,\piCostBisAmm{n}\, \sys{\sV}{Q} \end{math} for arbitrary $n$ and \env where \confE{\sys{\sV}{P}} and \confE{\sys{\sV}{Q}} are configurations. \end{cor} \begin{proof} By coinduction and Lemma~\ref{lem:bisim-struct-equiv}. \end{proof} \begin{lem}[Renaming]\label{lem:costed-bisim-renaming} If \begin{math} \env, \envv \vDash (\sys{M}{P}) \; \piCostBisAmm{n} \; (\sys{N}{Q}) \end{math} then \begin{math} \env, (\envv\sigma_\env) \vDash (\sys{M}{P})\sigma_\env \; \piCostBisAmm{n} \; (\sys{N}{Q})\sigma_\env \end{math} \end{lem} \begin{proof} By coinduction. \end{proof} \begin{thm}[Contextuality]\label{thm:costed-bisim-compositionality} If \begin{math} \env, \envv \vDash (\sys{M}{P}) \; \piCostBisAmm{n} \; (\sys{N}{Q}) \end{math} and $\envv \vdash R$ then \begin{displaymath} \env \vDash (\sys{M}{P \piParal R}) \; \piCostBisAmm{n} \; (\sys{N}{Q \piParal R}) \quad\text{ and }\quad \env \vDash (\sys{M}{R \piParal P}) \; \piCostBisAmm{n} \; (\sys{N}{R \piParal Q}) \end{displaymath} \end{thm} \begin{proof} We define the family of relations $\relR^{\env,n}$ to be the least one satisfying the rules {\small \begin{equation*} \begin{prooftree} \env \vDash (\sys{M}{P}) \piCostBisAmm{n} (\sys{N}{Q}) \justifies \env \vDash (\sys{M}{P}) \; \relR^n \; (\sys{N}{Q}) \end{prooftree} \qquad \begin{prooftree} \env,\envv \vDash (\sys{M}{P}) \; \relR^n \; (\sys{N}{Q}) \quad \envv\vdash R \justifies \env \vDash (\sys{M}{P \piParal R}) \; \relR^n \; (\sys{N}{Q \piParal R}) \end{prooftree} \qquad \begin{prooftree} \env,\envv \vDash (\sys{M}{P}) \; \relR^n \; (\sys{N}{Q}) \quad \envv\vdash R \justifies \env \vDash (\sys{M}{R \piParal P}) \; \relR^n \; (\sys{N}{R \piParal Q}) \end{prooftree} \end{equation*} } and then show that $\relR^{\env,n}$ is a costed typed bisimulation at \env and $n$ (up to $\piStruct$). Note that the first premise of the latter two rules implies that both $\conf{\env,\envv}{\sys{M}{P}}$ and $\conf{\env,\envv}{\sys{N}{Q}}$ are configurations. 
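Concretely, the obligation for pairs introduced by the second rule can be spelled out as follows (the third rule is symmetric), where $M'$, $P''$, $N'$ and $Q''$ range over the respective residuals: whenever \begin{displaymath} \conf{\env}{\sys{M}{P \piParal R}} \;\piRedDecCost{\;\mu\;}{l}\; \conf{\env'}{\sys{M'}{P''}} \end{displaymath} we must exhibit a matching weak transition \begin{displaymath} \conf{\env}{\sys{N}{Q \piParal R}} \;\piRedWDecCost{\;\hat{\mu}\;}{k}\; \conf{\env'}{\sys{N'}{Q''}} \end{displaymath} for some cost $k$ such that $\env' \vDash (\sys{M'}{P''}) \;\relR^{n+k-l}\; (\sys{N'}{Q''})$ holds up to \piStruct; the transitions of the right-hand configurations are handled dually. 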
We consider only the transitions of the left-hand configurations for the second case of the relation; the first is trivial and the third is analogous to the second. Although the relation is not symmetric, the transitions of the right-hand configurations are analogous to those of the left-hand configurations. There are three cases to consider. \begin{enumerate} \item Case the action was instigated by $P$, \ie we have: \begin{equation} \begin{prooftree} \[\confE{(\sys{\sV}{P})\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{l}\; \conf{\env'}{\sys{\sV'}{P'}} \justifiedBy{lPar-L} \confE{(\sys{\sV}{P\piParal R})\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{l}\; \conf{\env'}{\sys{\sV'}{P'\piParal R\sigma_\env}}\] \justifiedBy{lRen} \confE{\sys{\sV}{P\piParal R}} \;\piRedDecCost{\;\mu\;}{l}\; \conf{\env'}{\sys{\sV'}{P'\piParal R\sigma_\env}} \end{prooftree} \label{eq:27:bis} \end{equation} By Lemma~\ref{lem:weakening} (Weakening), \rtit{lRen} and \eqref{eq:27:bis} we obtain \begin{align} \label{eq:27:biss} \conf{\env,(\envv\sigma_\env)}{(\sysSP)\sigma_\env} \piRedDecCostPad{\mu}{l} \conf{\env',(\envv\sigma_\env)}{\sys{\sV'}{P'}} \end{align} Lemma~\ref{lem:costed-bisim-renaming} can be extended to $\relR^n$ in a straightforward fashion, and from the case assumption $\env,\envv \vDash \,\sysSP\, \relR^n\, \sys{\sVV}{Q}$ (defining $\relR^{\env,n}$) and the extension of Lemma~\ref{lem:costed-bisim-renaming} to $\relR^n$ we obtain: \begin{equation}\label{eq:27a:bis} \env,(\envv\sigma_\env) \vDash (\sysSP)\sigma_\env \relR^n (\sys{\sVV}{Q})\sigma_\env \end{equation} Hence by \eqref{eq:27a:bis}, \eqref{eq:27:biss} and I.H. there exists a $\sys{\sVV'}{Q'}$ such that \begin{align} \label{eq:28:bis} &\conf{\env,(\envv\sigma_\env)}{(\sys{\sVV}{Q})\sigma_{\env}} \;\piRedWDecCost{\hat{\mu}}{k}\; \conf{\env',(\envv\sigma_\env)}{\sys{\sVV'}{Q'}} \\ \label{eq:28b:bis} \quad\text{where}& \quad \env',(\envv\sigma_\env)\vDash (\sys{\sV'}{P'})\;\relR^{n+k-l}\; (\sys{\sVV'}{Q'}) \end{align} By \eqref{eq:28:bis} and \rtit{lRen}, where $\env_1 = \env,(\envv\sigma_\env)$, we obtain \begin{equation}\label{eq:28c:bis} \conf{\env,(\envv\sigma_\env)}{\bigl((\sys{\sVV}{Q})\sigma_{\env}\bigr)\sigma'_{\env_1}} \quad\bigl(\piRedTauDecCostPre{k_1}\bigr)\piRedDecCostPre{\hat{\mu}}{k_2}\bigl(\piRedTauDecCostPre{k_3}\bigr)\quad \conf{\env',(\envv\sigma_\env)}{\sys{\sVV'}{Q'}} \end{equation} where $k=k_1+k_2+k_3$. By \rtit{lPar}, Lemma~\ref{lem:strenghtening} (Strengthening) and \eqref{eq:28c:bis} we deduce \begin{equation} \label{eq:41:bis} \conf{\env}{\bigl((\sys{\sVV}{Q})\sigma_{\env}\bigr)\sigma'_{\env_1} \piParal R\sigma_\env} \quad\bigl(\piRedTauDecCostPre{k_1}\bigr)\piRedDecCostPre{\hat{\mu}}{k_2}\bigl(\piRedTauDecCostPre{k_3}\bigr)\quad \conf{\env'}{\sys{\sVV'}{Q'\piParal R\sigma_\env}} \end{equation} From \tproc{\envv}{R} we know \begin{equation}\label{eq:41a:bis} \tproc{\envv\sigma_\env}{R\sigma_\env} \end{equation} and, from $\env_1 = \env,(\envv\sigma_\env)$ and \defref{def:renaming} (Renaming Modulo Environments), we know that $(R\sigma_\env)\sigma'_{\env_1} = R\sigma_\env$ since the renaming does not modify any of the names in the domain of $\env_1$, hence of $\envv\sigma_\env$. 
Also, from \defref{def:renaming}, $\sigma'_{\env_1}$ is also a substitution modulo $\env$, and we can therefore refer to it as $\sigma'_\env$, thereby rewriting \eqref{eq:41:bis} as \begin{align} \label{eq:42:bis} \conf{\env}{\bigl(\sys{\sVV}{Q}\piParal R \bigr)\sigma_{\env}\sigma'_{\env} } \quad\bigl(\piRedTauDecCostPre{k_1}\bigr)\piRedDecCostPre{\hat{\mu}}{k_2}\bigl(\piRedTauDecCostPre{k_3}\bigr)\quad \conf{\env'}{\sys{\sVV'}{Q'\piParal R\sigma_\env}} \end{align} From \eqref{eq:42:bis} and \rtit{lRen} we thus obtain \begin{align*} \conf{\env}{\sys{\sVV}{Q \piParal R}} \quad\piRedWDecCost{\;\hat{\mu}\;}{k}\quad \conf{\env'}{\sys{\sVV'}{(Q'\piParal R\sigma_\env)}} \end{align*} This is our matching move since, by \eqref{eq:28b:bis}, \eqref{eq:41a:bis} and the definition of \relR, we obtain $\env'\vDash (\sys{\sV'}{P'\piParal R\sigma_\env})\;\relR^{n+k-l}\; (\sys{\sVV'}{Q'\piParal R\sigma_\env})$. \item Case the action was instigated by $R$, \ie we have: \begin{equation} \begin{prooftree} \[\confE{(\sys{\sV}{R})\sigma_\env} \quad\piRedDecCostPre{\;\mu\;}{l}\quad \conf{\env'}{\sys{\sV'}{R'}} \justifiedBy{lPar-R} \confE{\bigl(\sys{\sV}{P\piParal R}\bigr)\sigma_\env} \quad\piRedDecCostPre{\;\mu\;}{l}\quad \conf{\env'}{\sys{\sV'}{P\piParal R'}}\] \justifiedBy{lRen} \confE{\sys{\sV}{P\piParal R}} \quad\piRedDecCost{\;\mu\;}{l}\quad \conf{\env'}{\sys{\sV'}{P\piParal R'}} \end{prooftree} \label{eq:62:bis} \end{equation} The proof proceeds by case analysis of $\mu$, whereby the most interesting cases are when $l=+1$ or $l=-1$. We here show the case for when $l=-1$ (the other case is analogous). By Lemma~\ref{lem:transition-structure} we know that either $\mu = \actfree{c} $ and \begin{align*} &\sV\sigma_\env=\sV',c & & R' \piStruct R\sigma_\env & & \env = \env',\emap{c}{\chantypUnq{\tVlst}} \end{align*} or else that $\mu = \tau$ and \begin{align} \label{eq:63:bis} &\sV\sigma_\env=\sV',c \\ \label{eq:76:bis} &R\sigma_\env \piStruct \piFree{c}{R_1}\piParal R_2 \;\text{ and }\; R'\piStruct R_1\piParal R_2 \\ \label{eq:77:bis} & \env=\env' \\ \label{eq:78:bis} &\envv\sigma_\env\ensuremath{\mathrel{\prec}} \envv',\emap{c}{\chantypU{\tVlst}} \; \text{ and }\;\tproc{\envv'}{R'}. \end{align} We here focus on the latter case, \ie when $\mu = \tau$. The main complication in finding a matching move for this subcase is that of inferring a pair of resultant systems (one of which is \conf{\env}{\sys{\sV'}{P\piParal R'}}) that are related by \relR, using the inductive nature of the relation definition. To be able to do so, we need to mimic the effect of $R$'s deallocation transition on \sV\ in the corresponding system \sys{\sVV}{Q}; we do this with the help of an appropriate external deallocation transition \actfree{c}. By the extension of Lemma~\ref{lem:costed-bisim-renaming} to $\relR^n$ we know ${\env,\envv\sigma_\env \vDash (\sysS{P})\sigma_\env\;\relR^n\; (\sys{\sVV}{Q})\sigma_\env}$, and by \eqref{eq:78:bis} and a straightforward extension of Lemma~\ref{prop:env-struct-rules-bisim} to $\relR$ we obtain \begin{align} \label{eq:79:bis} &\env,\envv',\emap{c}{\chantypU{\tVlst}} \vDash (\sysS{P})\sigma_\env\;\relR^n\; (\sys{\sVV}{Q})\sigma_\env \end{align} and by \eqref{eq:63:bis} and \rtit{lFreeE} we deduce \begin{align*} &\conf{\env,\envv',\emap{c}{\chantypU{\tVlst}}}{\bigl(\sys{\sV}{P}\bigr)\sigma_\env} \piRedDecCostPad{{\actfree{c}}}{-1} \conf{\env,\envv'}{\bigl(\sys{\sV'}{P}\bigr)\sigma_\env} \end{align*} and by \eqref{eq:79:bis} and I.H. 
there exists a matching move \begin{align} \label{eq:81:bis} &\conf{\env,\envv',\emap{c}{\chantypU{\tVlst}}}{(\sys{\sVV}{Q})\sigma_\env } \piRedWDecCostPad{\actfree{c}}{k} \conf{\env,\envv'}{\sys{\sVV'}{Q'}} \\ \label{eq:82a:bis} \text{ and }& \env,\envv' \vDash \sys{\sV'}{P'} \;\relR^{n+k-(-1)}\; \sys{\sVV'}{Q'} \end{align} By \eqref{eq:81:bis} and \rtit{lRen}, for $k = k_{1}-1+k_{2}$, we know \begin{align} \label{eq:82b:bis} &\conf{\env,\envv',\emap{c}{\chantypU{\tVlst}}}{\bigl((\sys{\sVV}{Q})\sigma_\env \bigr)\sigma'_{\env_2}} \quad\piRedTauDecCostPre{k_1}\quad \conf{\env,\envv',\emap{c}{\chantypU{\tVlst}}}{\sys{\sVV''}{Q''}}\\ \label{eq:82:bis} \text{where }& \env_2=\env,\envv',\emap{c}{\chantypU{\tVlst}} \text{ (used in $\sigma'_{\env_2}$ above)} \\ \label{eq:82c:bis} & \conf{\env,\envv',\emap{c}{\chantypU{\tVlst}}}{\sys{\sVV''}{Q''}}\quad\piRedDecCostPre{\actfree{c}}{-1}\quad\conf{\env,\envv'}{\sys{\sVV'''}{Q''}}\\ \label{eq:82d:bis} &\conf{\env,\envv'}{\sys{\sVV'''}{Q''}}\quad\piRedTauDecCostPre{k_2} \quad\conf{\env,\envv'}{\sys{\sVV'}{Q'}} \end{align} From \eqref{eq:82b:bis},~\eqref{eq:82d:bis}, \rtit{lPar-L} and Lemma~\ref{lem:strenghtening} (Strengthening) we obtain: \begin{align} \label{eq:83a:bis} &\conf{\env}{\bigl((\sys{\sVV}{Q})\sigma_\env \bigr)\sigma'_{\env_2}\piParal R\sigma_\env} \quad\piRedTauDecCostPre{k_1}\quad \conf{\env}{\sys{\sVV''}{Q'' \piParal (R\sigma_\env)}}\\ \label{eq:83b:bis} &\conf{\env}{\sys{\sVV'''}{Q''\piParal R'}}\quad\piRedTauDecCostPre{k_2} \quad\conf{\env}{\sys{\sVV'}{Q'\piParal R'}} \end{align} Also, from \eqref{eq:82c:bis} and Lemma~\ref{lem:transition-structure} (Transition and Structure) we deduce that $\sVV''=\sVV''',c$ and thus, from \eqref{eq:76:bis}, \rtit{lFree}, \rtit{lPar-R} we obtain: \begin{equation} \label{eq:83c:bis} \conf{\env}{\sys{\sVV''}{Q''\piParal R\sigma_\env}}\quad\piRedDecCostPre{\;\tau\;}{-1}\quad\conf{\env}{\sys{\sVV'''}{Q''\piParal R'}} \end{equation} By \eqref{eq:78:bis} and \eqref{eq:82:bis}, we know that we can find an alternative renaming function $\sigma''_{\env_3}$, where $\env_3 = \env,(\envv\sigma_\env)$, in a way that, from \eqref{eq:83a:bis}, we can obtain \begin{equation} \label{eq:83d:bis} \conf{\env}{\bigl((\sys{\sVV}{Q})\sigma_\env \bigr)\sigma''_{\env_3}\piParal R\sigma_\env} \quad\piRedTauDecCostPre{k_1}\quad \conf{\env}{\sys{\sVV''}{Q'' \piParal (R\sigma_\env)}} \end{equation} Now, by $\tproc{\envv}{R}$ we know $\tproc{\envv\sigma_\env}{R\sigma_\env}$ and subsequently, by \defref{def:renaming} and \eqref{eq:82:bis} we know $(R\sigma_\env)\sigma''_{\env_3} = R\sigma_\env$. Thus, we can rewrite $\bigl((\sys{\sVV}{Q})\sigma_\env \bigr)\sigma''_{\env_3}\piParal R\sigma_\env$ in \eqref{eq:83d:bis} as ${\bigl((\sys{\sVV}{Q}\piParal R)\sigma_\env \bigr)\sigma''_{\env_3}}$. 
Merging \eqref{eq:83d:bis},~\eqref{eq:83c:bis} and ~\eqref{eq:83b:bis} we obtain: \begin{equation*} \conf{\env}{\bigl((\sys{\sVV}{Q}\piParal R)\sigma_\env \bigr)\sigma''_{\env_3}} \quad\piRedTauDecCostPre{k_1}\piRedDecCostPre{\;\tau\;}{-1}\piRedTauDecCostPre{k_2} \quad\conf{\env}{\sys{\sVV'}{Q'\piParal R'}} \end{equation*} By \defref{def:renaming} we know that $\sigma''_{\env_3}$ can be rewritten as $\sigma''_{\env}$ and thus by \rtit{lRen} we obtain the matching move \begin{equation*} \conf{\env}{\sys{\sVV}{Q}\piParal R} \piRedWDecCostPad{\tau}{k} \conf{\env}{\sys{\sVV'}{Q'\piParal R'}} \end{equation*} because by \eqref{eq:82a:bis}, \eqref{eq:78:bis} and the definition of $\relR$ we know that $$\env \vDash \sys{\sV'}{P'\piParal R'} \;\relR^{n+k-(-1)}\; \sys{\sVV'}{Q'\piParal R'}.$$ \item Case the action resulted from an interaction between $P$ and $R$, \ie we have: \begin{equation} \begin{prooftree} \[ \conf{\env_1}{(\sysSP)\sigma_\env} \piRedDecCostPre{\actout{c}{\vec{d}}}{0} \conf{\env'_1}{\sys{\sV'}{P'}} \qquad \conf{\env_2}{(\sysS{R})\sigma_\env} \piRedDecCostPre{\actin{c}{\vec{d}}}{0} \conf{\env'_2}{\sys{\sV'}{R'}} \justifiedBy{lCom-L} \confE{(\sysS{P\piParal R})\sigma_\env} \piRedDecCostPre{\;\;\tau\;\;}{0} \confE{\sys{\sV'}{P'\piParal R'}} \] \justifiedBy{lRen} \confE{\sysS{P\piParal R}} \piRedDecCostPad{\;\;\tau\;\;}{0} \confE{\sys{\sV'}{P'\piParal R'}} \end{prooftree} \label{eq:29:bis} \end{equation} By the two top premises of \eqref{eq:29:bis} and Lemma~\ref{lem:transition-structure} we know \begin{align} \label{eq:31a:bis} \sV\sigma_\env &= \sV'\\ \label{eq:31:bis} P\sigma_\env & \piStruct \piOut{c}{\vec{d}}P_1 \piParal P_2 & P' & \piStruct P_1 \piParal P_2 \\ \label{eq:32:bis} R\sigma_\env & \piStruct \piIn{c}{\vec{x}}R_1 \piParal R_2 & R' & \piStruct R_1 \subC{\vec{d}}{\vec{x}} \piParal R_2 \end{align} From $\envv\vdash R$ we obtain $\envv\sigma_\env\vdash R\sigma_\env$, and by \eqref{eq:32:bis}, $\envv\vdash R$ and Inversion we obtain \begin{align} \label{eq:33:bis} &\envv\sigma_\env \ensuremath{\mathrel{\prec}} \envv_1,\envv_2,\emap{c}{\chantypA{\tVVlst}} \\ \label{eq:34:bis} &\envv_1,\emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{x}}{\tVVlst} \vdash R_1 \\ \label{eq:35:bis} &\envv_2 \vdash R_2 \end{align} Note that through~\eqref{eq:34:bis} we know that \begin{equation}\label{eq:30:bis} \envmap{c}{\chantyp{\tVVlst}{{\aV-1}}}\text{ is defined.} \end{equation} By \eqref{eq:34:bis}, the Substitution Lemma (Lemma 4.4 from \cite{EFH:uniqueness:journal:12}) and \eqref{eq:35:bis} we obtain \begin{align} \label{eq:36} &\envv_1,\envv_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst} \vdash R_1\subC{\vec{d}}{\vec{x}} \piParal R_2 \end{align} From the assumption defining $\relR$, and Lemma~\ref{lem:costed-bisim-renaming} we obtain \begin{equation}\label{eq:37a:bis} \env,(\envv\sigma_\env) \;\vDash\; (\sysSP)\sigma_\env \;\relR^n\; (\sys{\sVV}{Q})\sigma_\env, \end{equation} and by \eqref{eq:33:bis} and Proposition~\ref{prop:consistency-env-struct} we know that $\env,(\envv\sigma_\env)\ensuremath{\mathrel{\prec}} \env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}$ and also that $\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}$ is consistent. 
Thus by \eqref{eq:37a:bis} and Lemma~\ref{prop:env-struct-rules-bisim} we deduce \begin{align} \label{eq:37b:bis} &\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}} \;\vDash\; (\sysSP)\sigma_\env \;\relR^n\; (\sys{\sVV}{Q})\sigma_\env \end{align} Now by \eqref{eq:30:bis}, \eqref{eq:31:bis}, \eqref{eq:31a:bis}, \rtit{lOut}, \rtit{lPar-L}, \rtit{lRen} and Lemma~\ref{lem:bisim-struct-equiv} we deduce \begin{align} \label{eq:38:bis} &\conf{\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}}{(\sysSP)\sigma_\env} \piRedDecCostPad{\actout{c}{\vec{d}}}{0} \conf{\env,\envv'_1,\envv'_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst}}{\sys{\sV'}{P'}} \end{align} and hence by \eqref{eq:37b:bis} and I.H. we obtain \begin{align} \label{eq:39:bis} &\conf{\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}}{(\sys{\sVV}{Q})\sigma_{ \env}} \piRedWDecCostPad{\;\actout{c}{\vec{d}}\;}{k} \conf{\env,\envv'_1,\envv'_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst}}{\sys{\sVV'}{Q'}} \\ \label{eq:40:bis} &\text{such that } \env,\envv_1,\envv_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst} \vDash (\sys{\sV'}{P'})\;\relR^{n+k-0}\; (\sys{\sVV'}{Q'}) \end{align} From \eqref{eq:39:bis} and \rtit{lRen} we know \begin{align} \label{eq:92:bis} &\conf{\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}}{\bigl((\sys{\sVV}{Q})\sigma_{ \env}\bigr)\sigma'_{\env_4}} \piRedTauDecCostPre{k_1} \conf{\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}}{\sys{\sVV''}{Q''}} \\ \label{eq:93:bis} &\conf{\env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}}}{\sys{\sVV''}{Q''}} \piRedDecCostPad{\actout{c}{\vec{d}}}{0} \conf{\env,\envv_1,\envv_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst}}{\sys{\sVV''}{Q'''}} \\ \label{eq:94:bis} &\conf{\env,\envv_1,\envv_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst}}{\sys{\sVV''}{Q'''}} \piRedTauDecCostPre{k_2} \conf{\env,\envv_1,\envv_2, \emap{c}{\chantyp{\tVVlst}{{\aV-1}}},\emap{\vec{d}}{\tVVlst}}{\sys{\sVV'}{Q'}} \\ \label{eq:95:bis} &\text{ where } k=k_1 + k_2 \text{ and }\env_4 = \env,\envv_1,\envv_2, \emap{c}{\chantypA{\tVVlst}} \;\text{ ($\env_4$ is used in \eqref{eq:92:bis})} \end{align} From \eqref{eq:92:bis},~\eqref{eq:94:bis}, \rtit{lPar-L} and Lemma~\ref{lem:strenghtening} (Strengthening) we obtain: \begin{align} \label{eq:92a:bis} &\conf{\env}{\bigl((\sys{\sVV}{Q})\sigma_{ \env}\bigr)\sigma'_{\env_4}\piParal R\sigma_\env} \piRedTauDecCostPre{k_1} \conf{\env}{\sys{\sVV''}{Q''\piParal R\sigma_\env}}\\ \label{eq:94a:bis} &\conf{\env}{\sys{\sVV''}{Q'''\piParal R'}} \piRedTauDecCostPre{k_2} \conf{\env}{\sys{\sVV'}{Q'\piParal R'}} \end{align} By \eqref{eq:32:bis}, \rtit{lIn} and \rtit{lPar-L} we can construct (for some $\env_6,\env_7$) \begin{align} \label{eq:98:bis} & \conf{\env_6}{\sys{\sVV''}{R\sigma_\env}} \piRedDecCostPre{\actin{c}{\vec{d}}}{0} \conf{\env_7}{\sys{\sVV''}{R'}} \end{align} and by \eqref{eq:93:bis}, \eqref{eq:98:bis} and \rtit{lCom-L} we obtain \begin{align} \label{eq:99:bis} \conf{\env}{\sys{\sVV''}{Q'' \piParal R\sigma_\env}} \piRedDecCostPre{\;\;\tau\;\;}{0} \conf{\env}{\sys{\sVV''}{Q''' \piParal R'}} \end{align} By \eqref{eq:33:bis} and \eqref{eq:95:bis}, we know that we can find an alternative renaming function $\sigma''_{\env_5}$, where $\env_5 = \env,(\envv\sigma_\env)$, in a way that, from \eqref{eq:92a:bis}, we can obtain \begin{equation} \label{eq:92b:bis} \conf{\env}{\bigl((\sys{\sVV}{Q})\sigma_{ \env}\bigr)\sigma''_{\env_5}\piParal R\sigma_\env} \piRedTauDecCostPre{k_1} 
\conf{\env}{\sys{\sVV''}{Q''\piParal R\sigma_\env}} \end{equation} By \defref{def:renaming}, $\envv\sigma_\env\vdash R\sigma_\env$, \eqref{eq:33:bis}, \eqref{eq:95:bis} we know that $(R\sigma_\env)\sigma''_{\env_5} = R\sigma_\env$, and also that $\sigma''_{\env_5}$ is also a renaming modulo $\env$, so we can denote it as $\sigma''_\env$ and rewrite $\bigl((\sys{\sVV}{Q})\sigma_{ \env}\bigr)\sigma''_{\env_5}\piParal R\sigma_\env$ as $\bigl((\sys{\sVV}{Q\piParal R})\sigma_{ \env}\bigr)\sigma''_\env$ in \eqref{eq:92b:bis}. Thus, by \eqref{eq:92b:bis},~\eqref{eq:99:bis},~\eqref{eq:94a:bis},~\eqref{eq:95:bis} and \rtit{lRen} we obtain the matching move \begin{equation*} \conf{\env}{\sys{\sVV}{Q\piParal R}} \;\piRedWDecCostPad{\;\tau\;}{k}\; \conf{\env}{\sys{\sVV'}{Q'\piParal R'}} \end{equation*} since by \eqref{eq:40:bis},~\eqref{eq:36},~\eqref{eq:32:bis} and the definition of \relR\ we obtain \[ \env \vDash (\sys{\sV'}{P'\piParal R'})\;\relR^{n+k-0}\; (\sys{\sVV'}{Q'\piParal R'}) \] as required. \qedhere \end{enumerate} \end{proof} \begin{exa}[Properties of \piCostBis] \label{eg:Clients-Bisim-continued} From the proved statements $\env_1 \vDash (\sysS{\ptit{C}_1}) \; \piCostBis \; (\sysS{\ptit{C}_0})$ and $\env_1 \vDash (\sysS{\ptit{C}_2}) \; \piCostBis \; (\sysS{\ptit{C}_1})$ of Example~\ref{eg:Clients-Bisim}, and by Corollary~\ref{cor:preorder} (Preorder), we may conclude that \begin{equation} \label{eq:trans-statement:bis} \env_1 \vDash (\sysS{\ptit{C}_2}) \; \piCostBis \; (\sysS{\ptit{C}_0}) \end{equation} without the need to provide a bisimulation relation justifying \eqref{eq:trans-statement:bis}. We also note that $\relR'$ of Example~\ref{eg:Clients-Bisim}, justifying $\env_1 \vDash (\sysS{\ptit{C}_3}) \; \piCostBis \; (\sysS{\ptit{C}_2})$ is a \emph{bounded} amortised typed-bisimulation, and by Lemma~\ref{lem:symmetry-bound} we can also conclude \begin{equation*} \env_1 \vDash (\sysS{\ptit{C}_2}) \; \piCostBis \; (\sysS{\ptit{C}_3}) \end{equation*} and thus $\env_1 \vDash (\sysS{\ptit{C}_3}) \; \piCostBisEq \; (\sysS{\ptit{C}_2})$. Finally, by Theorem~\ref{thm:costed-bisim-compositionality}, in order to show that \begin{equation*} \envmap{c}{\chantypW{\tV_1,\tV_2}} \vDash (\sysS{\ptit{S}_1\piParal\ptit{S}_2\piParal\ptit{C}_1}) \; \piCostBiss \; (\sysS{\ptit{S}_1\piParal\ptit{S}_2\piParal\ptit{C}_0}) \end{equation*} it suffices to abstract away from the common code, $\ptit{S}_1\piParal\ptit{S}_2$, and show $\env_1 \vDash (\sysS{\ptit{C}_1}) \; \piCostBiss \; (\sysS{\ptit{C}_0}),$ as proved already in Example~\ref{eg:Clients-Bisim}. \end{exa} \section{Characterisation} \label{sec:characterisation} In this section we give a sound and complete characterization of bisimilarity in terms of the reduction semantics of Section~\ref{sec:language}, justifying the bisimulation relation and the respective LTS as a proof technique for reasoning about the behaviour of $\picr$ processes. Our touchstone behavioural preorder is based on a costed version of families of reduction-closed barbed congruences along similar lines to \cite{hennessy:buysell}. In order to limit behaviour to safe computations, these congruences are defined as typed relations (Definition~\ref{def:typed-relation}), where systems are subject to common observers typed by environments. The observer type-environment delineates the observations that can be made: the observer can only make distinctions for channels that it has a permission for, \ie at least an affine typing assumption. 
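For instance, an observer typed by $\env = \envmap{c}{\chantypA{\tV}}$ may claim observations involving the channel $c$, whereas no observation can be claimed on a channel $d \not\in \dom(\env)$, even when $d$ occurs free in the systems under scrutiny. 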
The observations that can be made in our touchstone behavioural preorder are described as \emph{barbs} \cite{HondaTokoro92:AsyncSemantics} that take into account the permissions owned by the observer. We require systems related by our behavioural preorder to exhibit the same barbs \wrt a common observer. \begin{defi}[Barb]\label{def:barb} $(\confE{\sys{M}{P}}) \piBarb{c} \;\deftxt\; (\sys{M}{P}) \piRedAst_{k}\piStruct (\sys{\sV'}{P'\piParal{\piOut{c}{\vec{d}}{P''}}}) \text{ and } c \in \dom(\env).$ \end{defi} \begin{defi}[Barb Preservation]\label{def:barb-preserving} A typed relation $\relR$ is barb preserving if and only if $$\env \vDash \sys{\sV}{P} \; \relR \; \sys{\sVV}{Q}\text{ implies } \left(\confE{\sys{\sV}{P}} \piBarb{c} \text{ iff }\confE{\sys{\sVV}{Q}} \piBarb{c}\right).$$ \end{defi} Our behavioural preorder takes cost into consideration; it is defined in terms of families of amortised typed relations that are closed under costed reductions. \begin{defi}[Cost Improving]\label{def:cost-improving} An amortized type-indexed relation $\relR$ is \emph{cost improving at credit} $n$ iff whenever $\env \vDash (\sys{\sV}{P}) \;\relR^n\; (\sys{\sVV}{Q})$: \begin{enumerate} \item if $\sys{\sV}{P} \piRedCost{k} \sys{\sV'}{P'}$ then $\sys{\sVV}{Q} \piRedCost{l}^\ast \sys{\sVV'}{Q'}$ such that $\env \vDash (\sys{\sV'}{P'}) \; \relR^{n+l-k} \; (\sys{\sVV'}{Q'})$; \item if $\sys{\sVV}{Q} \piRedCost{l} \sys{\sVV'}{Q'}$ then $\sys{\sV}{P} \piRedCost{k}^\ast \sys{\sV'}{P'}$ such that $\env \vDash (\sys{\sV'}{P'}) \; \relR^{n+l-k} \; (\sys{\sVV'}{Q'})$. \end{enumerate} \end{defi} Related processes must be related under arbitrary (parallel) contexts; moreover, these contexts must be allowed to allocate new channels. We note that the second clause of our contextuality definition, Definition~\ref{def:contextuality}, is similar to that discussed earlier in Section~\ref{sec:bisim-results}, where we \emph{transfer} the respective permissions held by the observer along with the test $R$ placed in parallel with the processes. This is essential in order to preserve consistency (see \defref{def:consistency}), thus limiting our analysis to safe computations. Definition~\ref{def:contextuality} also requires an additional condition, when compared to the contextuality definition discussed in Section~\ref{sec:bisim-results}, namely that of \emph{resource extensions}, where we consider systems in larger resource contexts (owned exclusively by the observer). This is described by the first clause in the definition; we recall the implicit condition for resource environment representations from \secref{sec:language}, requiring the channel $c$ not to be present (\ie not already allocated) in \sV\ (\resp \sVV) for the extended resource environment $\sV,c$ (\resp $\sVV,c$) to be well-formed --- $c$ is therefore fresh. In order to disambiguate between the different contextuality definitions, we refer to Definition~\ref{def:contextuality} as \emph{full contextuality}. 
\begin{defi}[Full Contextuality]\label{def:contextuality} An amortized type-indexed relation $\relR$ is contextual at environment $\env$ and credit $n$ iff whenever \begin{math} \env \vDash (\sysS{P}) \; \relR^n \; (\sys{\sVV}{Q}) \end{math}: \begin{enumerate} \item $\env,\emap{c}{\chantypU{\tVlst}} \vDash (\sys{\sV,c}{P}) \; \relR^n \; (\sys{\sVV,c}{Q}) $ \item If $\env\ensuremath{\mathrel{\prec}} \env_1,\env_2$ where $\env_2 \vdash R$ then \begin{itemize} \item $\env_1 \vDash (\sysS{P \piParal R}) \; \relR^n \; (\sys{\sVV}{Q \piParal R})$ and \item $\env_1 \vDash (\sysS{R \piParal P}) \; \relR^n \; (\sys{\sVV}{R \piParal Q})$ \end{itemize} \end{enumerate} \end{defi} We can now define the preorder capturing our notion of observational system efficiency: \begin{defi}[Behavioural Contextual Preorder]\label{def:cost-preorder} $\piCost^{\env,n} $ is the largest family of amortized typed relations that is: \begin{itemize} \item Barb Preserving; \item Cost Improving at credit $n$; \item Full contextual at environment $\env$. \end{itemize} A system $\sysSP$ is said to be behaviourally as efficient as another system $\sys{\sVV}{Q}$ \wrt an observer \env, denoted as $\env \vDash \sysSP \piCost \sys{\sVV}{Q}$, whenever there exists an amortisation credit $n$ such that $\env \vDash \sysSP \piCost^n \sys{\sVV}{Q}$. Similarly, we can lift our preorder to processes: a process $P$ is said to be as efficient as $Q$ \wrt \sV\ and \env whenever there exists an $n$ such that $\env \vDash \sysSP \piCost^n \sysS{Q}$. \end{defi} \subsection{Soundness for \,\texorpdfstring{\piCostBis}{piCostBis}} \label{sec:soundness} Through Definition~\ref{def:cost-preorder} we are able to articulate why clients $\ptit{C}_2$ and $\ptit{C}'_2$ should be deemed to be behaviourally equally efficient \wrt $\env_1$ of \eqref{eq:env-sys-example:bis}: for an appropriate \sV, it turns out that we cannot differentiate between the two processes under any context allowed by \env. Unfortunately, the universal quantification over contexts in Definition~\ref{def:contextuality} makes it hard to verify such a statement. Through Theorem~\ref{thm:soundness} we can however establish that our bisimulation preorder of Definition~\ref{def:amortized-typed-bisim} provides a sound technique for determining behavioural efficiency. This theorem, in turn, relies on the lemmas we outline below. In particular, Lemma~\ref{lem:-bisim-barb} and Lemma~\ref{lem:-bisim-reduc-closure} prove that bisimulations are barb-preserving and cost-improving, whereas Lemma~\ref{lem:-bisim-weakening} proves that bisimulations are preserved under resource extensions. The required result then follows from Theorem~\ref{thm:costed-bisim-compositionality} of Section~\ref{sec:bisim-results}. \begin{lem}[Reductions and Bijective Renaming]\label{lem:reduction-bijective-rename} \begin{math} \text{For any bijective renaming }\sigma,\\ (\sys{\sV}{P})\sigma \piRed_k (\sys{\sV'}{P'})\sigma \text{ implies } \sys{\sV}{P} \piRed_k \sys{\sV'}{P'} \end{math} \end{lem} \begin{proof} By rule induction on $ (\sys{\sV}{P})\sigma \piRed_k (\sys{\sV'}{P'})\sigma$. \end{proof} \begin{lem}[Barb Preservation]\label{lem:-bisim-barb} \begin{displaymath} \env \vDash \sys{\sV}{P} \; \piCostBis \; \sys{\sVV}{Q} \text{ and } \confE{\sys{\sV}{P}} \piBarb{c} \text{ implies } \confE{\sys{\sVV}{Q}} \piBarb{c} \end{displaymath} \end{lem} \begin{proof} By Definition~\ref{def:barb} we know $\sys{\sV}{P} \piRedAst_l\piStruct (\sys{\sV'}{P'\piParal{\piOut{c}{\vec{d}}{P''}}})$ where $c\in\dom(\env)$. 
By Lemma~\ref{lem:reduc-and-transitions}(1) we obtain \begin{math} \confE{\sys{\sV}{P}} \piRedWDecCostPad{\quad}{l} \confE{\sys{\sV'}{P'''}}\;\text{where}\; P''' \piStruct (P'\piParal{\piOut{c}{\vec{d}}{P''}}). \end{math} Moreover, by \rtit{lOut}, \rtit{lPar-R} and Lemma~\ref{lem:bisim-struct-equiv} we deduce \begin{math} \confE{\sys{\sV}{P} } \piRedWDecCost{\actout{c}{\vec{d}}}{l} \piStruct\conf{\env'}{\sys{\sV}{P'\piParal P''}} \end{math}. By $\env \vDash \sys{\sV}{P} \; \piCostBis \; \sys{\sVV}{Q}$ we know that there exists a move $\confE{\sys{\sVV}{Q}} \piRedWDecCost{\actout{c}{\vec{d}}}{k} \conf{\env'}{\sys{\sVV'}{Q'}}$ and from this matching move, Lemma~\ref{lem:reduc-and-transitions}(2) (for the initial $\tau$ moves of the weak action) and Lemma~\ref{lem:transition-structure} we obtain $(\sys{\sVV}{Q})\sigma_\env\piRedAst_{k_1}\piStruct (\sys{\sVV''}{Q''\piParal{\piOut{c}{\vec{d}}Q'''}})\sigma_\env$, which, together with $c \in \dom(\env)$ and Lemma~\ref{lem:reduction-bijective-rename}, implies $\sys{\sVV}{Q}\piRedAst_{k_1}\piStruct \sys{\sVV''}{Q''\piParal{\piOut{c}{\vec{d}}Q'''}}$ \ie $c$ is unaffected by the renaming $\sigma_\env$, and thus $\confE{\sys{\sVV}{Q}} \piBarb{c}$. \end{proof} \begin{lem}[Cost Improving]\label{lem:-bisim-reduc-closure} \begin{math} \env \vDash \sys{\sV}{P} \; \piCostBisAmm{n} \; \sys{\sVV}{Q} \text{ and } \sys{\sV}{P} \piRed_l \sys{\sV'}{P'} \end{math} then there exist some \begin{math} \sys{\sVV'}{Q'} \text{ such that } \sys{\sVV}{Q} \piRedAst_k \sys{\sVV'}{Q'} \text{ and } \env \vDash \sys{\sV'}{P'} \; \piCostBisAmm{n+k-l} \; \sys{\sVV'}{Q'} \end{math} \end{lem} \begin{proof} By $\sys{\sV}{P} \piRed_l \sys{\sV'}{P'}$ and Lemma~\ref{lem:reduc-and-transitions}(1) we know \begin{math} \confE{\sys{\sV}{P}} \piRedDecCost{\;\tau\;}{l} \confE{\sys{\sV}{P''}}\text{ where }P''\piStruct P' \end{math}. By \defref{def:amortized-typed-bisim} and assumption $\env \vDash \sys{\sV}{P} \; \piCostBisAmm{n} \; \sys{\sVV}{Q} $, this implies that \begin{math} {\confE{\sys{\sVV}{Q}}} \piRedWDecCost{\quad}{k} {\confE{\sys{\sV'}{Q'}}} \end{math} where \begin{equation}\label{eq:1:char} {\env \vDash \sys{\sV'}{P''}\piCostBisAmm{n+k-l} \sys{\sVV'}{Q'}}. \end{equation} By Lemma~\ref{lem:reduc-and-transitions}(2) we deduce \begin{math} {(\sys{\sVV}{Q})\sigma_\env} \piRedAst_k {\sys{\sVV'}{Q'}} \end{math} and by Lemma~\ref{lem:reduction-bijective-rename} we obtain \begin{math} {\sys{\sVV}{Q}} \piRedAst {\sys{\sVV''}{Q''}} \end{math} where $\sys{\sVV''}{Q''} = (\sys{\sVV'}{Q'})\sigma_\env$. The required result follows from $\env \vDash \sys{\sV'}{P'}\piCostBisAmm{0}\sys{\sV'}{P''}$, which we obtain from $P'\piStruct P''$ and Corollary~\ref{cor:Struct-eq-implies-bisim} (Structural Equivalence and Bisimilarity), \eqref{eq:1:char}, $\env \vDash \sys{\sVV''}{Q''} \piCostBisAmm{0} \sys{\sVV'}{Q'}$ which we obtain from Lemma~\ref{lem:reflexivity} (Reflexivity upto Renaming) and $\sys{\sVV''}{Q''} = (\sys{\sVV'}{Q'})\sigma_\env$, and Lemma~\ref{lem:transitivity}. \end{proof} \begin{lem}[Resource Extensions]\label{lem:-bisim-weakening} \begin{displaymath} \env \vDash \sys{\sV}{P} \; \piCostBisAmm{n} \; \sys{\sVV}{Q} \text{ implies } \env,\emap{c}{\chantypU{\tVlst}} \vDash \sys{(\sV,c)}{P} \; \piCostBisAmm{n} \; \sys{(\sVV,c)}{Q} \end{displaymath} \end{lem} \begin{proof} By coinduction. 
\end{proof} \begin{thm}[Soundness]\label{thm:soundness} \begin{math} \env \vDash (\sys{\sV}{P}) \; \piCostBisAmm{n} \; (\sys{\sVV}{Q}) \text{ implies } \env \vDash (\sys{\sV}{P}) \piCostAmm{n} (\sys{\sVV}{Q}) \end{math}. \end{thm} \begin{proof} Follows from Lemma~\ref{lem:-bisim-barb} (Barb Preservation), Lemma~\ref{lem:-bisim-reduc-closure} (Cost Improving), Lemma~\ref{lem:-bisim-weakening} (Resource Extensions) and Theorem~\ref{thm:costed-bisim-compositionality} (Contextuality). \end{proof} \begin{cor}[Soundness]\label{cor:soundness} \begin{math} \env \vDash (\sys{\sV}{P}) \; \piCostBis \; (\sys{\sVV}{Q}) \text{ implies } \env \vDash (\sys{\sV}{P}) \piCost (\sys{\sVV}{Q}) \end{math}. \end{cor} \subsection{Full Abstraction of \,\texorpdfstring{\piCost}{piCost}} \label{sec:completeness} To prove completeness, \ie that any two systems related by our behavioural contextual preorder are also related by a corresponding amortised typed-bisimulation, we rely on the adapted notion of \emph{action definability} \cite{Hennessy07,hennessy04behavioural}, which intuitively means that every action (label) used by our LTS can, in some sense, be simulated (observed) by a specific test context. For our specific case, two important aspects need to be taken into consideration: \begin{itemize} \item the \emph{typeability} of the testing context \wrt our substructural type system; \item the \emph{cost} of the action simulation, which has to correspond to the cost of the action being observed. \end{itemize} These aspects are formalised in Definition~\ref{def:cost-definability}, which relies on the definitions of the functions $\domm$ and $\codd$: \begin{align*} \domm(\epsilon) &\deftxt \epsilon & \codd(\epsilon) &\deftxt \epsilon\\ \domm(\env,\envmap{c}{\tV}) &\deftxt \domm(\env),c & \codd(\env,\envmap{c}{\tV}) &\deftxt \codd(\env),\tV \end{align*} These two meta-functions take a substructural type environment and return, respectively, a \emph{list} of channel names and a \emph{list} of types. For example, for the environment $\env=\envmap{c}{\chantypO{\tV}},\envmap{d}{\chantypW{\tV'}}, \envmap{c}{\chantypUU{\tV}{1}}$, we have $\domm(\env) = c,d,c$ and $\codd(\env)=\chantypO{\tV},\chantypW{\tV'},\chantypUU{\tV}{1}$. Before stating cost-definability for actions, Definition~\ref{def:cost-definability}, we prove the technical Lemma~\ref{lem:actions-renaming}, which allows us to express transitions in a convenient format for the respective definition without loss of generality. \begin{lem}[Transitions and Renaming]\label{lem:actions-renaming} $\conf{\env}{\sys{\sV}{P}} \piRedDecCostPad{\mu}{k} \conf{\env'}{\sys{\sV'}{P'}}$ if and only if $\;\conf{\env}{\sys{\sV}{P}} \piRedDecCostPad{\mu}{k}\bigl(\conf{\env''}{\sys{\sV''}{P''}}\bigr)\sigma_\env$ for some $\sigma_\env,\env'',\sV'',P''$ where $\env'=\env''\sigma_\env$, $\sV'=\sV''\sigma_\env$ and $P'=P''\sigma_\env$. \end{lem} \begin{proof} The \emph{if} case is immediate. The proof for the \emph{only-if} is complicated by actions that perform channel allocation (see \rtit{lAll} and \rtit{lAllE} from \figref{fig:LTS}) because, in such cases, the renaming used in \rtit{lRen}'s premise cannot be used directly. 
More precisely, from the premise we know: \begin{equation*} \begin{prooftree} \confE{\bigl(\sys{\sV}{P}\bigr)\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \justifiedBy{lRen} \confE{\sys{\sV}{P}} \;\piRedDecCost{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}} \end{prooftree} \end{equation*} and the required result follows if we prove the (slightly more cumbersome) sublemma: \begin{lemm}[Transition and Renaming] $\confE{\bigl(\sys{\sV}{P}\bigr)\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}}$ where $\fn(P) \subseteq \sV$ implies $\confE{\bigl(\sys{\sV}{P}\bigr)\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{k}\; \bigl(\conf{\env''}{\sys{\sV''}{P''}}\bigr)\sigma'_\env$ for some $\sigma'_\env,\env'',\sV'',P''$ where \begin{itemize} \item $\env'=\env''\sigma'_\env$, $\sV'=\sV''\sigma'_\env$ and $P'=P''\sigma'_\env$; \item $c\in\dom(\sV)$ implies $\sigma_\env(c) = \sigma'_\env(c)$ \end{itemize} \end{lemm} The above sublemma is proved by rule induction on $\confE{\bigl(\sys{\sV}{P}\bigr)\sigma_\env} \;\piRedDecCostPre{\;\mu\;}{k}\; \conf{\env'}{\sys{\sV'}{P'}}$. We show one of the main cases: \begin{description} \item[\rtit{lAll}] We have $\confE{\bigl(\sys{\sV}{\piAll{x}{P}}\bigr)\sigma_\env} \piRedDecCostPre{\;\tau\;}{+1} \conf{\env}{\sys{\bigl((\sV)\sigma_\env,c\bigr)}{\bigl((P)\sigma_\env\subC{c}{x}}\bigr)}$. From the fact that $c\not\in (\sV\sigma_\env)$ --- it follows because $\bigl((\sV)\sigma_\env,c\bigr)$ is defined --- we know that $\sigma^{-1}_\env(c) \not\in \sV$. We thus choose some fresh channel $d$, \ie $d\not\in\bigl(\sV \cup (\sV\sigma_\env) \cup \dom(\env)\bigr)$\footnote{The condition that $d\not\in\dom(\env)$ is required since we do not state whether the triple \confESP\ is a configuration; otherwise, it is redundant --- see comments succeeding Definition~\ref{def:configuration}.}, and define $\sigma'_\env$ as $\sigma_\env$, except that it maps $d$ to $c$ and also maps $\sigma^{-1}_\env(c)$ (\ie the channel name that mapped to $c$ in $\sigma_\env$) to $\sigma_\env(d)$, since this channel is not mapped to by $d$ anymore (in order to preserve bijectivity): \begin{equation*} \sigma'_\env(x) \deftxt \begin{cases} c & \text{if}\; x = d\\ \sigma_\env(d) & \text{if}\; x = \sigma^{-1}_\env(c)\\ \sigma_\env(x) & \text{otherwise} \end{cases} \end{equation*} We subsequently define \begin{itemize} \item $\env''$ as $\env$ since $\env\sigma'_\env = \env\sigma_\env= \env$; \item $\sV''$ as $\sV,d$ since $(\sV,d)\sigma'_\env = \bigl((\sV)\sigma_\env,c\bigr)$; and \item $P''$ as $P\subC{d}{x}$ since $P\subC{d}{x}\sigma'_\env = P\sigma_\env\subC{c}{x}$ \qedhere \end{itemize} \end{description} \end{proof} \begin{defi}[Cost Definable Actions]\label{def:cost-definability} An action $\mu$ is cost-definable iff for any pair of type environments\footnote{Cost Definability cannot be defined \wrt the first environment only in the case of action \actall, since it non-deterministically allocates a fresh channel name and adds it to the residual environment - see \rtit{lAllE} in \figref{fig:LTS}.} $\env$ and $\env'$, a corresponding substitution $\sigma_\env$, a set of channel names $C\in\Chans$, and channel names $\ctit{succ}, \ctit{fail}\not\in C$, there exists a test $R$ such that ${\tproc{\env,\emap{\ctit{succ}}{\chantypO{\codd(\env')}}, \emap{\ctit{fail}}{\chantypO{}}, \emap{\ctit{fail}}{\chantypO{}}}{R}}$ and whenever $\sV\in C$: \begin{enumerate} \item \conf{\env}{\sys{\sV}{P}} \piRedDecCostPad{\mu}{k} $\bigl(\conf{\env'}{\sys{\sV'}{P'}}\bigr)\sigma_\env$ 
\quad implies \\$\sys{\sV,\ctit{succ}, \ctit{fail}}{P \piParal R} \piRedCostAst{k} \sys{\sV',\ctit{succ}, \ctit{fail}}{P'\piParal \piOutA{\ctit{succ}}{\bigl(\domm(\env')\bigr)}}$. \item $\sys{\sV,\ctit{succ}, \ctit{fail}}{P\piParal R} \piRedAst_k \sys{\sV''}{P''}$ where $\conf{\envmap{\ctit{succ}}{\chantypA{\codd(\env')}},\envmap{\ctit{fail}}{\chantypA{}}}{\sys{\sV''}{P''}} \piBarbNot{\ctit{fail}}$ and $\conf{\envmap{\ctit{succ}}{\chantypA{\codd(\env')}},\envmap{\ctit{fail}}{\chantypA{}}}{\sys{\sV''}{P''}} \piBarb{\ctit{succ}}$ implies \conf{\env}{\sys{\sV}{P}} \piRedWDecCostPad{\mu}{k} $\bigl(\conf{\env'}{\sys{\sV'}{P'}}\bigr)\sigma_\env$ where $\sV''=\sV',\ctit{succ}, \ctit{fail}$ and $P'' \piStruct P'''\piParal \piOutA{\ctit{succ}}{\bigl(\domm(\env')\bigr)}$. \end{enumerate} \end{defi} \begin{lem}[Action Cost-Definability]\label{lem:action-def} External actions $\mu \in \sset{\actout{c}{\vec{d}},\actin{c}{\vec{d}}, \actall, \actfree{c} \,|\, c,\vec{d} \subset \Chans}$ are cost-definable. \end{lem} \begin{proof} The witness tests for \actout{c}{\vec{d}} and \actin{c}{\vec{d}} are reasonably standard (see \cite{Hennessy07}), but need to take into account permission transfer. For instance, for the specific case of the action \actout{c}{d} where $d \not\in \domm(\env)$, if the transition $\conf{\env}{\sys{\sV}{P}} \piRedDecCostPad{\mu}{k} \bigl(\conf{\env'}{\sys{\sV'}{P'}\bigr)\sigma_\env}$ holds then we know that, for some $\env_1$ and $\chantypA{\tV}$: \begin{itemize} \item $\env = \env_1,\envmap{c}{\chantypA{\tV}}$; \item $\env'\sigma_\env = \env_1, \envmap{c}{\chantyp{\tV}{\aV-1}}, \envmap{d}{\tV}$ \end{itemize} In particular, when $\aV = \affine$ (affine), using the permission to input on $c$ implicitly transfers the permission to process $P$ (see Section~\ref{sec:LTS}), potentially revoking the test's capability to perform name matching on channel name $c$ (see \rtit{tIf} in \figref{fig:typingrules}) --- this happens if $c\not\in\dom(\env_1)$. For this reason, when $\aV = \affine$ the test is defined as \begin{displaymath} \piOutA{\ctit{fail}}{} \piParal \piIn{c}{x}{\piIf{\bigl(x\in \domm(\env_1)\bigr)}{\piNil}{\piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env')\bigr)}}}} \end{displaymath} where $x \in \domm(\env_1)$ is shorthand for a sequence of name comparisons as in \cite{Hennessy07}. Otherwise, the respective type assumption is not consumed from the observer environment and the test is defined as \begin{displaymath} \piOutA{\ctit{fail}}{} \piParal \piIn{c}{x}{\piIf{\bigl(x\in \domm(\env)\bigr)}{\piNil}{\piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env')\bigr)}}}} \end{displaymath} Note that name comparisons on freshly acquired names are typeable since we also obtain the respective permissions upon input, \ie the explicit permission transfer (see Section~\ref{sec:LTS}). The reader can verify that these tests typecheck \wrt the environment $\env,\emap{\ctit{succ}}{\chantypO{\codd(\env')}}, \emap{\ctit{fail}}{\chantypO{}}, \emap{\ctit{fail}}{\chantypO{}}$ and that they observe clauses $(1)$ and $(2)$ of Definition~\ref{def:cost-definability}. In the case of clause $(2)$, we note that from the typing of the tests above, we know that $c\in\domm(\env)$ must hold (because both tests use channel $c$ for input); this is is a key requirement for the transition to fire --- see \rtit{lOut} of \figref{fig:LTS}. 
The witness tests for \actall\ and \actfree{c} involve less intricate permission transfer and are respectively defined as: \begin{displaymath} \piOutA{\ctit{fail}}{} \piParal \piAll{x}{\piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env),x\bigr)}}} \end{displaymath} and \begin{displaymath} \piOutA{\ctit{fail}}{} \piParal \piFree{c}{\piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env')\bigr)}}} \end{displaymath} We here focus on \actall\ and leave the analogous proof for \actfree{c} for the interested reader: \begin{enumerate} \item If $\conf{\env}{\sys{\sV}{P}} \piRedDecCostPad{\actall}{k} (\conf{\env'}{\sys{\sV'}{P'}})\sigma_\env$ we know that, for some $d\not\in\sV$ and $c\not\in\sV\sigma_\env$ where $\sigma_\env(d)=c$, we have $(\env')\sigma_\env=(\env,\envmap{d}{\chantypU{\tV}})\sigma_\env = \env,\envmap{c}{\chantypU{\tV}}$, $\sV'=(\sV,d)$ and $P'=P$. We can therefore simulate this action by the following sequence of reductions: \begin{align*} &\sys{\sV}{P\piParal \piOutA{\ctit{fail}}{} \piParal \piAll{x}{\piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env),x\bigr)}}}} \piRed\\ &\quad \sys{\sV,d}{P\piParal \piOutA{\ctit{fail}}{} \piParal \piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env),d\bigr)}}} \piRed \sys{\sV,d}{P\piParal \piOutA{\ctit{succ}}{\bigl(\domm(\env),d\bigr)}} \end{align*} \item From the structure of $R$ and the assumption that $\ctit{fail},\ctit{succ} \not\in\fn(P)$, we conclude that, if ${\conf{\envmap{\ctit{succ}}{\chantypA{\codd(\envv)}},\envmap{\ctit{fail}}{\chantypA{}}}{\sys{\sV'}{P'}} \piBarbNot{\ctit{fail}}}$ and $\conf{\envmap{\ctit{succ}}{\chantypA{\codd(\envv)}},\envmap{\ctit{fail}}{\chantypA{}}}{\sys{\sV'}{P'}} \piBarb{\ctit{succ}}$, then it must be the case that, for some $d\not\in\sV$, $P' = P'' \piParalL \piOutA{\ctit{succ}}{\bigl(\domm(\env),d\bigr)}$ where $\sV'' = (\sV',\ctit{succ},\ctit{fail},d)$ for some $\sV'$. Since $P$ and $R$ do not share common channels there could not have been any interaction between the two processes in the reduction sequence $\sys{\sV,\ctit{succ}, \ctit{fail}}{P\piParal R} \piRedAst_k \sys{\sV'}{P'}$. Within this reduction sequence, from every reduction $\sys{\sV_i}{P_i\piParal R'} \piRed_{k_i} \sys{\sV_{i+1}}{P_{i+1}\piParal R'}$ resulting from derivatives of $P$, \ie $\sys{\sV_i}{P_i} \piRed_{k_i} \sys{\sV_{i+1}}{P_{i+1}}$ that happened before the allocation of channel $d$, we obtain a corresponding silent transition \begin{equation} \conf{\env_i}{\sys{(\sV_i\setminus \sset{\ctit{succ},\ctit{fail}})}{P_{i}}} \piRedDecCostPad{\tau}{k_i} \conf{\env_i}{\sys{(\sV_{i+1}\setminus \sset{\ctit{succ},\ctit{fail}})}{P_{i+1}}}\label{eq:definab:1} \end{equation} by Lemma~\ref{lem:reduc-and-transitions}(1) and an appropriate lemma that uses the fact ${\sset{\ctit{succ},\ctit{fail}} \cap \fn(P) = \emptyset}$ to allows us to shrink the allocated resources from $\sV_i$ to $(\sV_i\setminus \sset{\ctit{succ},\ctit{fail}})$. 
A similar procedure can be carried out for reductions that happened after the allocation of $d$ as a result of reductions from $P$ derivatives, and by applying renaming $\sigma_\env$ we can obtain \begin{equation} \bigl(\conf{\env_i}{\sys{(\sV_i\setminus \sset{\ctit{succ},\ctit{fail}})}{P_{i}}}\bigr)\sigma_\env \piRedDecCostPad{\tau}{k_i} \bigl(\conf{\env_i}{\sys{(\sV_{i+1}\setminus \sset{\ctit{succ},\ctit{fail}})}{P_{i+1}}\bigr)\sigma_\env}\label{eq:definab:2} \end{equation} The reduction \begin{multline*} \qquad\qquad\sys{\sV_i,\ctit{succ}, \ctit{fail}}{P_i\piParal \piAll{x}{\piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env),x\bigr)}}}} \piRed_{+1} \\ \sys{\sV_{i},\ctit{succ}, \ctit{fail},d}{P_{i}\piParal \piIn{\ctit{fail}}{}{\piOutA{\ctit{succ}}{\bigl(\domm(\env),d\bigr)}}} \end{multline*} can be substituted by the transition \begin{equation} \conf{\env_i}{\sys{\sV_i}{P_{i}}} \piRedWDecCostPad{\actall}{+1} \conf{\env_i,\envmap{(d)\sigma_\env}{\chantypU{\tV}}}{\sys{\bigl((\sV_{i})\sigma_\env,(d)\sigma_\env\bigr)}{(P_{i})\sigma_\env}}\label{eq:definab:3} \end{equation} This follows from the fact that $d\not\in\sV_i$ and the fact that $\sigma_\env$ is a bijection, which implies that $(d)\sigma_\env\not\in(\sV_i)\sigma_\env$ (necessary for $\bigl((\sV_{i})\sigma_\env,(d)\sigma_\env\bigr)$ to be a valid resource environment). By joining together the transitions from \eqref{eq:definab:1},~\eqref{eq:definab:3} and \eqref{eq:definab:2} in the appropriate sequence we obtain the required weak transition. \qedhere \end{enumerate} \end{proof} The proof of Theorem~\ref{thm:completeness} (Completeness) relies on Lemma~\ref{lem:action-def} to simulate a costed action by the appropriate test and is, for the most part, standard. As stated already, one novel aspect is that the cost semantics requires the simulation to incur the same cost as that of the costed action. Through Reduction Closure, Lemma~\ref{lem:action-def} again, and then finally the Extrusion Lemma~\ref{lem:extrusion} we then obtain the matching bisimulation move which preserves the relative credit index. Another novel aspect of the proof for Theorem~\ref{thm:completeness} is that the name matching in the presence of our substructural type environment requires a reformulation of the Extrusion Lemma. More precisely, in the case of the output actions, the simulating test requires all of the environment permissions to perform all the necessary name comparisons. We then make sure that these permissions are not lost by communicating them all again on \ctit{succ}; this passing on of permissions then allows us to show contextuality in Lemma~\ref{lem:extrusion}. \begin{lem}[Extrusion]\label{lem:extrusion} Whenever $\confESP$ and $\conf{\env}{\sys{\sVV}{Q}}$ are configurations and $\vec{d} \not\in\dom(\env)$: \begin{displaymath} \emap{\ctit{succ}}{\chantyp{\codd(\env)}{\unique{1}}} \vDash \sys{\bigl(\sV,\ctit{succ},\vec{d}\bigr)}{P \piParal \piOutA{\ctit{succ}}{(\domm(\env))}} \piCostAmm{n} \sys{\bigl(\sVV,\ctit{succ},\vec{d}\bigr)}{Q \piParal \piOutA{\ctit{succ}}{(\domm(\env))}} \end{displaymath} implies \begin{math} \env \vdash \sys{\sV}{P} \;\piCostAmm{n}\; \sys{\sVV}{Q} \end{math} \end{lem} \begin{proof} By coinduction we show that a family of amortized typed relations \begin{math} {\env \vdash \sys{\sV}{P} \;\relR^n\; \sys{\sVV}{Q}} \end{math} observes the required properties of \defref{def:cost-preorder}. 
Note that the environment $\emap{\ctit{succ}}{\chantyp{\codd(\env)}{\unique{1}}}$ ensures that $\ctit{succ}\not\in \names(P,Q)$ since both $P \piParal \piOutA{\ctit{succ}}{(\domm(\env))}$ and $Q \piParal \piOutA{\ctit{succ}}{(\domm(\env))}$ must typecheck \wrt a type environment that is consistent with $\emap{\ctit{succ}}{\chantyp{\codd(\env)}{\unique{1}}}$. Cost improving is straightforward and Barb Preserving and Contextuality follow standard techniques; see \cite{Hennessy07}. For instance, for barb preservation we are required to show that $\confESP \piBarb{c}$ implies $\conf{\env}{\sys{\sVV}{Q}}\piBarb{c}$ (and viceversa). From $\confESP \piBarb{c}$ and Definition~\ref{def:barb} we know that $\envmap{c}{\chantypA{\vec{\tV}}}\in\env$ at some index $i$. We can therefore define the process $R\deftri\piIn{\ctit{succ}}{\vec{x}}{\piIn{x_i}{\vec{y}}{\piOutA{\textit{ok}}{}}}$ where $|\vec{\tV}| = |\vec{y}|$; this test process typechecks \wrt $\emap{\ctit{succ}}{\chantyp{\codd(\env)}{\unique{1}}}, \emap{\textit{ok}}{\chantypO{}}$. Now by Definition~\ref{def:contextuality}$(1)$ we know \begin{multline*} \emap{\ctit{succ}}{\chantyp{\codd(\env)}{\unique{1}}}, \emap{\textit{ok}}{\chantypU{}} \vDash \sys{\bigl(\sV,\ctit{succ},\vec{d},\textit{ok}\bigr)}{P \piParal \piOutA{\ctit{succ}}{(\domm(\env))}} \\\piCostAmm{n} \sys{\bigl(\sVV,\ctit{succ},\vec{d},\textit{ok}\bigr)}{Q \piParal \piOutA{\ctit{succ}}{(\domm(\env))}} \end{multline*} and thus, by Definition~\ref{def:contextuality}$(2)$ and $\tproc{\emap{\ctit{succ}}{\chantyp{\codd(\env)}{\unique{1}}}, \emap{\textit{ok}}{\chantypO{}}}{R}$ \begin{equation}\label{eq:7:char} \begin{split} &\emap{\textit{ok}}{\chantypUU{}{1}} \vDash \sys{\bigl(\sV,\ctit{succ},\vec{d},\textit{ok}\bigr)}{P \piParal \piOutA{\ctit{succ}}{(\domm(\env))}\piParal R} \\ &\qquad\qquad \qquad\qquad\qquad\qquad \qquad\qquad\piCostAmm{n} \sys{\bigl(\sVV,\ctit{succ},\vec{d},\textit{ok}\bigr)}{Q \piParal \piOutA{\ctit{succ}}{(\domm(\env))}\piParal R} \end{split} \end{equation} Clearly, if $\confESP \piBarb{c}$ then $\bigl(\conf{\emap{\textit{ok}}{\chantypUU{}{1}}}{\sys{\bigl(\sV,\ctit{succ},\vec{d},\textit{ok}\bigr)}{(P \piParal \piOutA{\ctit{succ}}{(\domm(\env))}\piParal R)}}\bigr) \piBarb{\textit{ok}}$. By \eqref{eq:7:char} and Definition~\ref{def:barb-preserving} we must have $\bigl(\conf{\emap{\textit{ok}}{\chantypUU{}{1}}}{\sys{\bigl(\sVV,\ctit{succ},\vec{d},\textit{ok}\bigr)}{(Q \piParal \piOutA{\ctit{succ}}{(\domm(\env))}\piParal R)}}\bigr) \piBarb{\textit{ok}}$ as well, which can only happen if $\sys{\sVV}{Q}\piRedAst\piStruct Q'\piParal \piOut{c}{\vec{d}}{Q''}$. This means that $\confE{\sys{\sVV}{Q}} \piBarb{c}$. \end{proof} \begin{lem}\label{lem:environment-strenghten} $\env \vDash \sysSP \piCostAmm{n} \sys{\sVV}{Q}$ and $\env \ensuremath{\mathrel{\prec}} \env'$ implies $\env' \vDash \sysSP \piCostAmm{n} \sys{\sVV}{Q}$ \end{lem} \begin{proof} By coinduction. \end{proof} \begin{lem}\label{lem:rbc-renaming} $\env \vDash \sysSP \piCostAmm{n} \sys{\sVV}{Q}$ and $\sigma$ is a bijective renaming implies $\env\sigma \vDash \bigl(\sysSP\bigr)\sigma \piCostAmm{n} \bigl(\sys{\sVV}{Q}\bigr)\sigma$ \end{lem} \begin{proof} By coinduction. \end{proof} \begin{thm}[Completeness]\label{thm:completeness} \begin{math} \env\vDash (\sys{\sV}{P}) \piCostAmm{n} (\sys{\sVV}{Q}) \end{math} implies \begin{math} \env\vDash (\sys{\sV}{P}) \; \piCostBisAmm{n} \; (\sys{\sVV}{Q}) \end{math}. 
\end{thm} \begin{proof} By coinduction, we show that for arbitrary $\env,n$, the family of relations included in \begin{math} \env\vDash \sys{\sV}{P} \piCostAmm{n} \sys{\sVV}{Q} \end{math} observes the transfer properties of \defref{def:amortized-typed-bisim} at $\env,n$. Assume \begin{equation} \confE{\sysSP} \piRedDecCostPad{\mu}{k} \bigl(\conf{\env'}{\sys{\sV'}{P'}}\bigr)\sigma_\env\label{eq:6:char} \end{equation} If $\mu = \tau$, the matching move follows from Lemma~\ref{lem:reduc-and-transitions}, Definition~\ref{def:cost-improving} and Definition~\ref{def:cost-preorder}. If $\mu \in \sset{\actout{c}{\vec{d}},\actin{c}{\vec{d}}, \actall, \actfree{c} \;|\; c,\vec{d} \in \Chans}$, by Lemma~\ref{lem:action-def} we know that there exists a test process that can simulate it; we choose one such test $R$ with channel names $\ctit{succ}, \ctit{fail} \not\in\sV,\sVV$. By Definition~\ref{def:contextuality}(1) we know \begin{equation*} \env, \envmap{\ctit{succ}}{\chantypU{\codd(\env)}}, \envmap{\ctit{fail}}{\chantypU{}}\vDash \sys{\sV,\ctit{succ},\ctit{fail}}{P} \piCostAmm{n} \sys{\sVV,\ctit{succ},\ctit{fail}}{Q} \end{equation*} and by Definition~\ref{def:contextuality}(2) and ${\tproc{\env,\emap{\ctit{succ}}{\chantypO{\codd(\envv)}}, \emap{\ctit{fail}}{\chantypO{}}, \emap{\ctit{fail}}{\chantypO{}}}{R}}$ (Definition~\ref{def:cost-definability}) we obtain \begin{equation} \label{eq:2:char} \envmap{\ctit{succ}}{\chantypUU{\codd(\env)}{1}}, \envmap{\ctit{fail}}{\chantypUU{}{2}}\vDash \sys{(\sV,\ctit{succ},\ctit{fail})}{P\piParal R} \piCostAmm{n} \sys{(\sVV,\ctit{succ},\ctit{fail})}{Q\piParal R} \end{equation} From \eqref{eq:6:char} and Definition~\ref{def:cost-definability}$(1)$, we know \begin{equation*} \sys{(\sV,\ctit{succ},\ctit{fail})}{P\piParal R} \piRedCostAst{k} \sys{(\sV',\ctit{succ},\ctit{fail})}{P' \piParal \piOutA{\ctit{succ}}{\domm(\env')}} \end{equation*} By \eqref{eq:2:char} and Definition~\ref{def:cost-improving} (Cost Improving) we know \begin{equation*} \sys{(\sVV,\ctit{succ},\ctit{fail})}{Q\piParal R} \piRedCostAst{l} \sys{\sVV''}{Q''} \end{equation*} where \begin{equation}\label{eq:5:char} \envmap{\ctit{succ}}{\chantypUU{\codd(\env)}{1}}, \envmap{\ctit{fail}}{\chantypUU{}{2}}\vDash \sys{(\sV',\ctit{succ},\ctit{fail})}{P' \piParal \piOutA{\ctit{succ}}{\domm(\env')}} \piCostAmm{n+l-k} \sys{\sVV''}{Q''} \end{equation} By Definition~\ref{def:barb-preserving} (Barb Preservation), this means that $\conf{\envmap{\ctit{succ}}{\chantypUU{\codd(\env)}{1}}, \envmap{\ctit{fail}}{\chantypUU{}{2}}}{\sys{\sVV'}{Q'}} \piBarbNot{\ctit{fail}}$ and also that $\conf{\envmap{\ctit{succ}}{\chantypUU{\codd(\env)}{1}}, \envmap{\ctit{fail}}{\chantypUU{}{2}}}{\sys{\sVV'}{Q'}} \piBarb{\ctit{succ}}$. 
By Definition~\ref{def:cost-definability}$(2)$ we obtain \begin{eqnarray} \label{eq:3:char} && Q''\piStruct Q'\piParal \piOutA{\ctit{succ}}{\domm(\env')} \text{ and } \sVV'' = (\sVV',\ctit{succ},\ctit{fail}) \\ \label{eq:4:char} && \conf{\env}{\sys{\sVV}{Q}} \piRedWDecCostPad{\mu}{l} \bigl(\conf{\env'}{\sys{\sVV'}{Q'}}\bigr)\sigma_\env \end{eqnarray} Transition \eqref{eq:4:char} is the matching move because by \eqref{eq:5:char} and Lemma~\ref{lem:environment-strenghten} we obtain \begin{equation*} \envmap{\ctit{succ}}{\chantypUU{\codd(\env)}{1}}\vDash \sys{(\sV',\ctit{succ},\ctit{fail})}{P' \piParal \piOutA{\ctit{succ}}{\domm(\env')}} \piCostAmm{n+l-k} \sys{\sVV''}{Q''} \end{equation*} By \eqref{eq:3:char} and Lemma~\ref{lem:extrusion} we obtain $\env' \vDash \sys{\sV'}{P'} \piCostAmm{n+l-k} \sys{\sVV'}{Q'}$ and subsequently by Lemma~\ref{lem:rbc-renaming} we obtain \[ \env'\sigma_\env \vDash \bigl(\sys{\sV'}{P'}\bigr)\sigma_\env \piCostAmm{n+l-k} \bigl(\sys{\sVV'}{Q'}\bigr)\sigma_\env \] as required.\qedhere \end{proof} \section{Revisiting the Case Study} \label{sec:proofs-relat-effic} We can formally express that \ensuremath{\text{\rm eBuff}}\xspace is (strictly) more efficient than \ensuremath{\text{\rm Buff}}\xspace in terms of the reduction semantics outlined in Section~\ref{sec:language} through the following statements: \begin{eqnarray} \label{eq:11cs} \ensuremath{\env_\text{ext}}\xspace \;\vDash\;& \sysS{\ensuremath{\text{\rm eBuff}}\xspace} \,\;\piCost\; \sysS{\ensuremath{\text{\rm Buff}}\xspace}\\ \label{eq:12cs} \ensuremath{\env_\text{ext}}\xspace \;\vDash\;& \sysS{\ensuremath{\text{\rm Buff}}\xspace}\; \not\!\piCost\; \sysS{\ensuremath{\text{\rm eBuff}}\xspace} \end{eqnarray} In order to show that the second statement \eqref{eq:12cs} holds, we need to prove that \emph{there is no amortisation credit} $n$ for which $\ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{\ensuremath{\text{\rm Buff}}\xspace}\; \piCost^{n}\; \sysS{\ensuremath{\text{\rm eBuff}}\xspace}$. By choosing the set of inductively defined contexts $R_n$ where:\footnote{Note that \tproc{\ensuremath{\env_\text{ext}}\xspace}{R_n} for any $n$.} \begin{align*} R_0 & \deftri \piNil & R_{n+1} & \deftri \piOut{\ctit{in}}{v} {\piIn{\ctit{out}}{x}{R_{n}}} \end{align*} we can argue by analysing the reduction graph of the respective systems that, for any $n\geq 0$: $$\ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{(\ensuremath{\text{\rm Buff}}\xspace\piParal R_{n+1})}\; \not\piCost^{n}\; \sysS{(\ensuremath{\text{\rm eBuff}}\xspace\piParal R_{n+1})}$$ since relating these two systems at credit $n$ would violate the Cost Improving property of Definition~\ref{def:cost-preorder}. Another way to prove \eqref{eq:12cs} is to exploit the completeness of our bisimulation proof technique \wrt our behavioural preorder, Theorem~\ref{thm:completeness}, and work at the level of the transition system of Section~\ref{sec:cost-bisim}, showing that, for all $n \geq 0$, the following holds: \begin{equation} \ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{\ensuremath{\text{\rm Buff}}\xspace}\;\not\!\piCostBisAmm{n}\; \sysS{\ensuremath{\text{\rm eBuff}}\xspace} \label{eq:24:cs} \end{equation} We prove the above statement as Theorem~\ref{thm:strictly-ineffiscient} of Section~\ref{sec:prov-strict-ineff}. Property \eqref{eq:11cs}, \emph{prima facie}, seems even harder to prove than \eqref{eq:12cs}, because we are required to show that Barb Preservation and Cost Improving hold under every possible valid context interacting with the two buffer implementations.
Once again, we use the transition system of Section~\ref{sec:cost-bisim} and show instead that: \begin{equation} \ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{\ensuremath{\text{\rm eBuff}}\xspace} \,\;\piCostBisAmm{0}\; \sysS{\ensuremath{\text{\rm Buff}}\xspace}\label{eq:25:cs} \end{equation} The required result then follows from Theorem~\ref{thm:soundness}. The proof for this statement is presented in Section~\ref{sec:prov-relat-effc}. In order to make the presentation of these proofs more manageable, we define the following macro definitions for sub-processes making up the derivatives of \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm eBuff}}\xspace}}. \begin{align*} \small \ensuremath{\text{\rm Frn'}}\xspace & \small \deftxt \piIn{b}{x}{ \piIn{\ctit{in}}{y}{\piAll{z}{ \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{x}{(y,z)}\bigr)}}} & \!\!\!\!\!\!\small \ensuremath{\text{\rm Bck'}}\xspace & \small \deftxt \piIn{d}{x}{ \piIn{x}{(y,z)}{ \piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}}\\ \small \pFff{x} & \small\deftxt \piIn{\ctit{in}}{y}{\piAll{z}{ \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{x}{(y,z)}\bigr)}} & \!\!\!\!\!\!\small\pBbb{x} &\small \deftxt \piIn{x}{(y,z)}{ \piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}}\\ \small\pFfff{x}{y} & \small\deftxt \piAll{z}{ \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{z} \piParal \piOutA{x}{(y,z)}\bigr)} & \!\!\!\!\!\!\small\pBbbb{y}{z} & \small\deftxt \piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}\\[5pt] \small \ensuremath{\text{\rm eBk'}}\xspace & \small \deftxt \piIn{d}{x}{ \piIn{x}{(y,z)}{ \piFree{x}{\piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}}} & \!\!\!\!\!\!\small \pBEee{x} & \small \deftxt \piIn{x}{(y,z)}{\piFree{x}{\piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}}\\ \small \pBEeee{x}{y}{z} & \small\deftxt \piFree{x}{\piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}} & \!\!\!\!\!\!\small \pBEeeee{y}{z} &\small \deftxt \piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)} \end{align*} We can thus express the definitions for \ensuremath{\text{\rm Buff}}\xspace and \ensuremath{\text{\rm eBuff}}\xspace as: \begin{align}\label{eq:24cs} \ensuremath{\text{\rm Buff}}\xspace & \deftxt \pFff{c_1} \piParal \pBbb{c_1} & \ensuremath{\text{\rm eBuff}}\xspace & \deftxt \pFff{c_1} \piParal \pBEee{c_1} \end{align} \subsection{Proving Strict Inefficiency} \label{sec:prov-strict-ineff} In order to prove \eqref{eq:24:cs}, we do not need to explore the entire state space for \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm eBuff}}\xspace}}. Instead, it suffices to limit external interactions with the observer to traces of the form $\bigl(\piRedWDec{\actin{\ctit{in}}{v}\,\cdot\,\actout{\ctit{out}}{v}}\bigr)^\ast$, which simulate interactions with the observing processes $R_n$ discussed in Section~\ref{sec:proofs-relat-effic}. 
It is instructive to visualise the transition graphs for both \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm eBuff}}\xspace}} for a single iteration \piRedWDec{\actin{\ctit{in}}{v}\,\cdot\,\actout{\ctit{out}}{v}} as depicted in \figref{fig:TrnGrphBuff} and \figref{fig:TrnGrphBuff2}: due to lack of space, the nodes in these graphs abstract away from the environment \ensuremath{\env_\text{ext}}\xspace and appropriate resource environments $\sV, \sVV,\ldots$ containing internal channels $c_1, c_2, \ldots$ as required.\footnote{The transition graph also abstracts away from environment moves.} For instance, the first node of the graph in \figref{fig:TrnGrphBuff}, $\pFff{c_1} \piParalL \pBbb{c_1}$, \ie \ensuremath{\text{\rm Buff}}\xspace, stands for $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{(\pFff{c_1} \piParalL \pBbb{c_1})}}$, where $c_1 \in \sV$, whereas the third node in the same graph, $\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}$, stands for $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\bigl(\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}\bigr)}}$, where $c_1,c_2\in\sVV$. The graph in \figref{fig:TrnGrphBuff}, for example, shows that after the input action and the $\tau$-action allocating channel $c_2$ (with a cost of $+1$), the inefficient buffer implementation reaches a state where it can perform a number of internal transitions: either the subcomponent \ensuremath{\text{\rm Frn}}\xspace may take a recursion unfold step (the first rightward $\tau$-action) followed by an input on channel $b$ that instantiates the continuation with channel $c_2$ (the second rightward $\tau$-action), or else the subcomponent $\pBbb{c_1}$ reads from the head of the buffer $\piOutA{c_1}{(v,c_2)}$ (the first downwards $\tau$-action). These $\tau$-actions may be interleaved, but no other silent transitions are possible until an output action is performed, after which the backend subcomponent can perform an unfold $\tau$-action (the first downwards $\tau$-action following action $\actout{\ctit{out}}{v}$) followed by an instantiation communication on channel $d$ (the second downwards $\tau$-action following action $\actout{\ctit{out}}{v}$). When all of these actions are completed, we reach the starting process again, this time instantiated with channel $c_2$ instead of $c_1$. The transitions in \figref{fig:TrnGrphBuff2} are analogous, but include a deallocation transition with a cost of $-1$.
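Reading the cost annotations off these two graphs gives an informal tally of the net cost of one \piRedWDec{\actin{\ctit{in}}{v}\,\cdot\,\actout{\ctit{out}}{v}} round (assuming, as the graphs indicate, that the allocation and deallocation $\tau$-actions are the only costed transitions in a round):
\begin{align*}
\ensuremath{\text{\rm Buff}}\xspace : &\quad (+1) + 0 \;=\; +1 \text{ per round} &
\ensuremath{\text{\rm eBuff}}\xspace : &\quad (+1) + (-1) \;=\; 0 \text{ per round}
\end{align*}
Thus, after $n+1$ rounds \ensuremath{\text{\rm Buff}}\xspace has accumulated a cost of $n+1$ whereas \ensuremath{\text{\rm eBuff}}\xspace remains at $0$; this is, informally, the arithmetic behind the claim of Section~\ref{sec:proofs-relat-effic} that no amortisation credit $n$ can absorb the cost difference induced by the observer $R_{n+1}$.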
\begin{display}{Transition graph for \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Buff}}\xspace}} restricted to $\piRedWDecPad{\actin{\ctit{in}}{v}\,\cdot\,\actout{\ctit{out}}{v}}$}{fig:TrnGrphBuff} \begin{tikzpicture} \node at (0,0) (1) {\scriptsize \pFff{c_1} \piParal \pBbb{c_1} = \ensuremath{\text{\rm Buff}}\xspace}; \node at (-5.5,0) (2) {\scriptsize \pFfff{c_1}{v}\piParal \pBbb{c_1}}; \node at (-5.5,-1.5) (3) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}; \node at (0,-1.5) (4) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}; \node at (5.5,-1.5) (5) {\scriptsize \pFff{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}; \node at (-5.5,-3) (6) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}}; \node at (0,-3) (7) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}}; \node at (5.5,-3) (8) {\scriptsize \pFff{c_2} \piParal \pBbbb{v}{c_2}}; \node at (-5.5,-4.5) (9) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}}; \node at (0,-4.5) (10) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}}; \node at (5.5,-4.5) (11) {\scriptsize \pFff{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}}; \node at (-5.5,-6) (12) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck'}}\xspace \piParal \piOutA{d}{c_2}}; \node at (0,-6) (13) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck'}}\xspace \piParal \piOutA{d}{c_2}}; \node at (5.5,-6) (14) {\scriptsize \pFff{c_2} \piParal \ensuremath{\text{\rm Bck'}}\xspace \piParal \piOutA{d}{c_2}}; \node at (-5.5,-7.5) (15) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbb{c_2}}; \node at (0,-7.5) (16) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbb{c_2}}; \node at (5.5,-7.5) (17) {\scriptsize \pFff{c_2} \piParal \pBbb{c_2} }; \draw[->] (1) to node[above] {\scriptsize \actin{\ctit{in}}{v}} (2); \draw[->] (2) to node[right] {\scriptsize \acttau} node[left] {\scriptsize $+1$} (3); \draw[->] (3) to node[right] {\scriptsize \acttau} (6); \draw[->] (3) to node[above] {\scriptsize \acttau} (4); \draw[->] (4) to node[right] {\scriptsize \acttau} (7); \draw[->] (4) to node[above] {\scriptsize \acttau} (5); \draw[->] (5) to node[right] {\scriptsize \acttau} (8); \draw[->] (6) to node[right] {\scriptsize \actout{\ctit{out}}{v}} (9); \draw[->] (6) to node[above] {\scriptsize \acttau} (7); \draw[->] (7) to node[right] {\scriptsize \actout{\ctit{out}}{v}} (10); \draw[->] (7) to node[above] {\scriptsize \acttau} (8); \draw[->] (8) to node[right] {\scriptsize \actout{\ctit{out}}{v}} (11); \draw[->] (9) to node[right] {\scriptsize \acttau} (12); \draw[->] (9) to node[above] {\scriptsize \acttau} (10); \draw[->] (10) to node[right] {\scriptsize \acttau} (13); \draw[->] (10) to node[above] {\scriptsize \acttau} (11); \draw[->] (11) to node[right] {\scriptsize \acttau} (14); \draw[->] (12) to node[right] {\scriptsize \acttau} (15); \draw[->] (12) to node[above] {\scriptsize \acttau} (13); \draw[->] (13) to node[right] {\scriptsize 
\acttau} (16); \draw[->] (13) to node[above] {\scriptsize \acttau} (14); \draw[->] (14) to node[right] {\scriptsize \acttau} (17); \draw[->] (15) to node[above] {\scriptsize \acttau} (16); \draw[->] (16) to node[above] {\scriptsize \acttau} (17); \end{tikzpicture} \end{display} \begin{display}{Transition graph for \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm eBuff}}\xspace}} restricted to $\piRedWDecPad{\actin{\ctit{in}}{v}\,\cdot\,\actout{\ctit{out}}{v}}$}{fig:TrnGrphBuff2} \begin{tikzpicture} \node at (0,0) (1) {\scriptsize \pFff{c_1} \piParal \pBEee{c_1} = \ensuremath{\text{\rm eBuff}}\xspace}; \node at (-5.5,0) (2) {\scriptsize \pFfff{c_1}{v}\piParal \pBEee{c_1}}; \node at (-5.5,-1.5) (3) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBEee{c_1}}; \node at (0,-1.5) (4) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBEee{c_1}}; \node at (5.5,-1.5) (5) {\scriptsize \pFff{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBEee{c_1}}; \node at (-5.5,-3) (6) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEeee{c_1}{v}{c_2}}; \node at (0,-3) (7) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEeee{c_1}{v}{c_2}}; \node at (5.5,-3) (8) {\scriptsize \pFff{c_2} \piParal \pBEeee{c_1}{v}{c_2}}; \node at (-5.5,-4.5) (9) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEeeee{v}{c_2}}; \node at (0,-4.5) (10) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEeeee{v}{c_2}}; \node at (5.5,-4.5) (11) {\scriptsize \pFff{c_2} \piParal \pBEeeee{v}{c_2}}; \node at (-5.5,-6) (12) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c_2}}; \node at (0,-6) (13) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c_2}}; \node at (5.5,-6) (14) {\scriptsize \pFff{c_2} \piParal \ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c_2}}; \node at (-5.5,-7.5) (15) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c_2}}; \node at (0,-7.5) (16) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c_2}}; \node at (5.5,-7.5) (17) {\scriptsize \pFff{c_2} \piParal \ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c_2}}; \node at (-5.5,-9) (18) {\scriptsize\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEee{c_2}}; \node at (0,-9) (19) {\scriptsize \ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEee{c_2}}; \node at (5.5,-9) (20) {\scriptsize \pFff{c_2} \piParal \pBEee{c_2} }; \draw[->] (1) to node[above] {\scriptsize \actin{\ctit{in}}{v}} (2); \draw[->] (2) to node[right] {\scriptsize \acttau} node[left] {\scriptsize $+1$} (3); \draw[->] (3) to node[right] {\scriptsize \acttau} (6); \draw[->] (3) to node[above] {\scriptsize \acttau} (4); \draw[->] (4) to node[right] {\scriptsize \acttau} (7); \draw[->] (4) to node[above] {\scriptsize \acttau} (5); \draw[->] (5) to node[right] {\scriptsize \acttau} (8); \draw[->] (6) to node[right] {\scriptsize \acttau} node[left] {\scriptsize $-1$} (9); \draw[->] (6) to node[above] {\scriptsize \acttau} (7); 
\draw[->] (7) to node[right] {\scriptsize \acttau} node[left] {\scriptsize $-1$} (10); \draw[->] (7) to node[above] {\scriptsize \acttau} (8); \draw[->] (8) to node[right] {\scriptsize \acttau} node[left] {\scriptsize $-1$} (11); \draw[->] (9) to node[right] {\scriptsize \actout{\ctit{out}}{v}} (12); \draw[->] (9) to node[above] {\scriptsize \acttau} (10); \draw[->] (10) to node[right] {\scriptsize \actout{\ctit{out}}{v}} (13); \draw[->] (10) to node[above] {\scriptsize \acttau} (11); \draw[->] (11) to node[right] {\scriptsize \actout{\ctit{out}}{v}} (14); \draw[->] (12) to node[right] {\scriptsize \acttau} (15); \draw[->] (12) to node[above] {\scriptsize \acttau} (13); \draw[->] (13) to node[right] {\scriptsize \acttau} (16); \draw[->] (13) to node[above] {\scriptsize \acttau} (14); \draw[->] (14) to node[right] {\scriptsize \acttau} (17); \draw[->] (15) to node[right] {\scriptsize \acttau} (18); \draw[->] (15) to node[above] {\scriptsize \acttau} (16); \draw[->] (16) to node[right] {\scriptsize \acttau} (19); \draw[->] (16) to node[above] {\scriptsize \acttau} (17); \draw[->] (17) to node[right] {\scriptsize \acttau} (20); \draw[->] (18) to node[above] {\scriptsize \acttau} (19); \draw[->] (19) to node[above] {\scriptsize \acttau} (20); \end{tikzpicture} \end{display} \newcommand{\ensuremath{\text{Prc}_\text{\rm A}}\xspace}{\ensuremath{\text{Prc}_\text{\rm A}}\xspace} \newcommand{\ensuremath{\text{Prc}_\text{\rm B}}\xspace}{\ensuremath{\text{Prc}_\text{\rm B}}\xspace} \newcommand{\ensuremath{\text{Prc}_\text{\rm C}}\xspace}{\ensuremath{\text{Prc}_\text{\rm C}}\xspace} Theorem~\ref{thm:strictly-ineffiscient}, which proves \eqref{eq:24:cs}, relies on two lemmas. The main one is Lemma~\ref{lem:negative-induction}, which establishes that a number of derivatives from the configurations \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm eBuff}}\xspace}} cannot be related for \emph{any} amortisation credit. This Lemma, in turn, relies on Lemma~\ref{lem:Neg:implications}, which establishes that, for a particular amortisation credit $n$, if some pair of derivatives of the configurations \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm eBuff}}\xspace}} \resp cannot be related, then other pairs of derivatives cannot be related either. Lemma~\ref{lem:Neg:implications} is used again by Theorem~\ref{thm:strictly-ineffiscient} to derive that, from the unrelated pairs identified by Lemma~\ref{lem:negative-induction}, the required pair of configurations \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm eBuff}}\xspace}} cannot be related for any amortisation credit. Upon first reading, the reader who is only interested in the eventual result may safely skip to the statement of Theorem~\ref{thm:strictly-ineffiscient} and treat Lemma~\ref{lem:negative-induction} and Lemma~\ref{lem:Neg:implications} as black-boxes. In order to be able to state Lemma~\ref{lem:Neg:implications} and Lemma~\ref{lem:negative-induction} more succinctly, we find it convenient to delineate groups of processes relating to derivatives of \ensuremath{\text{\rm Buff}}\xspace and \ensuremath{\text{\rm eBuff}}\xspace. 
For instance, we can partition the processes depicted in the transition graph of \figref{fig:TrnGrphBuff2} (derivatives of \ensuremath{\text{\rm eBuff}}\xspace) into three sets: \begin{align*} \ensuremath{\text{Prc}_\text{\rm A}}\xspace & \deftxt \sset{ \begin{array}{l|l} \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBEee{c_1}\bigr),\\ \bigl(\ensuremath{\text{\rm Frn'}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBEee{c_1}\bigr),& c_1 \neq c_2 \in \\ \bigl(\pFff{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBEee{c_1}\bigr),\, \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \pBEeee{c_1}{v}{c_2}\bigr), & \Chans \setminus\sset{\ctit{in},\ctit{out},b,d}\\ \bigl(\ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEeee{c_1}{v}{c_2}\bigr),\, \bigl(\pFff{c_2} \piParal \pBEeee{c_1}{v}{c_2}\bigr) \end{array}\!\!}\\[5pt] \ensuremath{\text{Prc}_\text{\rm B}}\xspace & \deftxt \sset{ \begin{array}{l|l} \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \pBEeeee{v}{c_2}\bigr),\,\bigl(\ensuremath{\text{\rm Frn'}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \pBEeeee{v}{c_2}\bigr),& c_2 \in\Chans \setminus\sset{\ctit{in},\ctit{out},b,d}\\ \bigl(\pFff{c_2} \piParalL \pBEeeee{v}{c_2}\bigr) \end{array} }\\[5pt] \ensuremath{\text{Prc}_\text{\rm C}}\xspace & \deftxt \sset{ \begin{array}{l|l} \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c_2}\bigr),\, \bigl(\ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c_2}\bigr),\\ \bigl(\pFff{c_2} \piParal \ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c_2}\bigr),\, \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c_2}\bigr),\\ \bigl(\ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c_2}\bigr),\, \bigl(\pFff{c_2} \piParal \ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c_2}\bigr),& c_2 \in \Chans \setminus\sset{\ctit{in},\ctit{out},b,d}\\ \bigl(\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEee{c_2}\bigr),\, \bigl(\ensuremath{\text{\rm Frn'}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBEee{c_2}\bigr),\\ \bigl(\pFff{c_2} \piParal \pBEee{c_2}\bigr) \end{array} } \end{align*} With respect to the transition graph of \figref{fig:TrnGrphBuff2}, \ensuremath{\text{Prc}_\text{\rm A}}\xspace groups the processes \emph{after the allocation} of an (arbitrary) internal channel $c_2$ but before any deallocation, \ie the second and third rows of the graph. The set \ensuremath{\text{Prc}_\text{\rm B}}\xspace groups the processes \emph{after the deallocation} of the (arbitrary) internal channel $c_1$, \ie the fourth row of the graph. Finally, the set \ensuremath{\text{Prc}_\text{\rm C}}\xspace groups processes \emph{after the output action} \actout{\ctit{out}}{v} is performed (but before the next input action is performed), \ie the last three rows of the graph.
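To make this grouping concrete, the following chain of transitions, read off the graph of \figref{fig:TrnGrphBuff2} and written (as in the figure) with the environment \ensuremath{\env_\text{ext}}\xspace and the resource environments elided, crosses the three sets in turn:
\begin{align*}
\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBEee{c_1}
\;&\piRedDecCostPad{\acttau}{0}\;
\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \pBEeee{c_1}{v}{c_2}\\
&\piRedDecCostPad{\acttau}{-1}\;
\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \pBEeeee{v}{c_2}\\
&\piRedDecCostPad{\actout{\ctit{out}}{v}}{0}\;
\ensuremath{\text{\rm Frn}}\xspace\piParalL \piOutA{b}{c_2} \piParalL \ensuremath{\text{\rm eBk}}\xspace \piParalL \piOutA{d}{c_2}
\end{align*}
The first two processes belong to \ensuremath{\text{Prc}_\text{\rm A}}\xspace, the third belongs to \ensuremath{\text{Prc}_\text{\rm B}}\xspace since it is reached by the $-1$ deallocation of $c_1$, and the last belongs to \ensuremath{\text{Prc}_\text{\rm C}}\xspace since it is reached by the output action \actout{\ctit{out}}{v}.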
\begin{lem}[Related Negative Results]\label{lem:Neg:implications} \quad \begin{enumerate} \item For any amortisation credit $n$ and appropriate $\sV,\sVV$, whenever: \begin{itemize} \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParalL \pBEee{c'_1}}$ \item For any $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ we have $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n+1}\; \sys{\sVV}{Q}$\; \item For any $Q \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$ we have $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ \end{itemize} then, for any $P \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$, we have\; $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$. \item For any amortisation credit $n$ and appropriate $\sV,\sVV$, and for any $Q \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$: \begin{enumerate} \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ \qquad for any $P \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$\; $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn'}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn'}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ \hspace{1cm}for any $P \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$\; $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ for any $P \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$\; $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck'}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck'}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ for any $P \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$\; $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$ \end{enumerate} \item For any amortisation credit $n$ and appropriate $\sV,\sVV$, and for any $R\in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$, $Q \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$: \begin{enumerate} \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_2}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ for any $P \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$\; 
$\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{R}$ implies \\ for any $P \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$\;$\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P}$ \end{enumerate} \item For any amortisation credit $n$ and appropriate $\sV,\sVV$, and for any $Q \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$: \begin{enumerate} \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ for any $P\in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$\; $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbbb{v}{c_1}} \; \not\!\piCostBisAmm{n+1}\; \sys{\sVV}{P}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbbb{v}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ implies \\ $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$ \end{enumerate} \end{enumerate} \end{lem} \begin{proof} Each case is proved by contradiction: \begin{enumerate} \item Assume the premises together with the inverse of the conclusion, \ie $$\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}} \; \piCostBisAmm{n}\; \sys{\sVV}{P}.$$ Consider the transition from the left-hand configuration: $$\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}}} \piRedDecCostPad{\actin{\ctit{in}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}}}.$$ For any $P \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$, this can only be matched by the right-hand configuration, $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}}$, through either of the following cases: \begin{enumerate} \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}} \piRedWDecCostPad{\actin{\ctit{in}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParalL \pBEee{c'_1}}}$, \ie a weak input action without trailing $\tau$-moves after the external action $\actin{\ctit{in}}{v}$ --- see first row of the graph in \figref{fig:TrnGrphBuff2}. 
But we know $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParalL \pBEee{c'_1}}$ from the first premise. \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}} \piRedWDecCostPad{\actin{\ctit{in}}{v}}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$. However, from the second premise we know that $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n+1}\; \sys{\sVV}{Q}$ \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}} \piRedWDecCostPad{\actin{\ctit{in}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$. Again, from the third premise we know that $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ \end{enumerate} Since \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}} cannot perform a matching move, we obtain a contradiction. \item We here prove case $(a)$. The other cases are analogous. Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn'}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbb{c_1}} \; \piCostBisAmm{n}\; \sys{\sVV}{P}$ and consider the action \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn'}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbb{c_1}}} \piRedDecCostPad{\acttau}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}}}. \end{equation*} For our assumption to hold, $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}}$ would need to match this move by a (weak) silent action leading to a configuration that can match $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}}}$. The only matching move can be \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \qquad\text{for some } Q \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace. \end{equation*} However, from our premise we know $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q'}$ for any amortisation credit $n$ and $Q' \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$ and therefore conclude that the move cannot be matched, thereby obtaining a contradiction. \item We here prove case $(a)$. Case $(b)$ is analogous. 
Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}} \; \piCostBisAmm{n}\; \sys{\sVV}{P}$ and consider the action \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}}} \piRedDecCostPad{\actout{\ctit{out}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_2}}} \end{equation*} This action can only be matched by a transition of the form \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{P}} \piRedWDecCostPad{\actout{\ctit{out}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}}\qquad\text{for some } Q \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace. \end{equation*} However, from our premise we know $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_2}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ for any amortisation credit $n$ and $Q\in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$. Thus, we conclude that the move cannot be matched, thereby obtaining a contradiction. \item Cases $(a)$ and $(b)$ are analogous to $3(a)$ and $3(b)$. We here outline the proof for case $(c)$. First, we note that from the premise $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \ensuremath{\text{\rm Bck}}\xspace \piParalL \piOutA{d}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ (for any $Q \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$) and Lemma~$\ref{lem:Neg:implications}.3(a)$, Lemma~$\ref{lem:Neg:implications}.4(a)$ and Lemma~$\ref{lem:Neg:implications}.4(b)$ \resp we obtain: \begin{align}\label{eq:13cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{P} \qquad\text{for any } P\in \ensuremath{\text{Prc}_\text{\rm B}}\xspace \\ \label{eq:14cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbbb{v}{c_1}} \; \not\!\piCostBisAmm{n+1}\; \sys{\sVV}{P} \qquad\text{for any } P\in \ensuremath{\text{Prc}_\text{\rm A}}\xspace\\ \label{eq:15cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_1} \piParalL \pBbbb{v}{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}} \end{align} We assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \piCostBisAmm{n}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$ and then show that this leads to a contradiction.
Consider the move \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}}} \piRedDecCostPad{\acttau}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}}} \end{equation*} This can be matched by $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}}$ using either of the following moves: \begin{itemize} \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}}$. But \eqref{eq:15cs} prohibits this from being the matching move. \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\phantom{\acttau}}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$. But \eqref{eq:14cs} prohibits this from being the matching move. \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$. But \eqref{eq:13cs} prohibits this from being the matching move. \end{itemize} This contradicts our earlier assumption. \qedhere \end{enumerate} \end{proof} \begin{lem}\label{lem:negative-induction} For all $n\in\Nats$ and appropriate $\sV,\sVV$: \begin{enumerate} \item For any $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ we have $\;\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ \item For any $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ we have $\;\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{n}\; \sys{\sVV}{Q}$ \item For any $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ we have $\;\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{n+1}}\; \sys{\sVV}{Q}$ \item For any $Q \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$ we have $\;\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{n}}\; \sys{\sVV}{Q}$ \item $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{n}}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$ \end{enumerate} \end{lem} \begin{proof} We prove statements $(1)$ to $(5)$ simultaneously, by induction on $n$. 
\begin{description} \item[$n=0$] We prove each clause by contradiction: \begin{enumerate} \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}} \; \piCostBisAmm{0}\; \sys{\sVV}{Q}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ and consider the transition \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}}} \piRedDecCostPad{\actout{\ctit{out}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}}} \end{equation*} For any $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$, this cannot be matched by any move from $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}}$ since output actions must be preceded by a channel deallocation, which incurs a \emph{negative} cost --- see second and third rows of the graph in \figref{fig:TrnGrphBuff2}. Stated otherwise, every matching move can only be of the form $$\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\actout{\ctit{out}}{v}}{-1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV'}{Q'}}$$ where $\sVV=\bigl(\sVV',c'_1\bigr)$ for some $c'_1$ and $Q'\in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$. However, since the amortisation credit cannot be negative, we can never have $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}} \; \piCostBisAmm{-1}\; \sys{\sVV'}{Q'}$. We therefore obtain a contradiction. \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \piCostBisAmm{0}\; \sys{\sVV}{Q}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ and consider the transition \begin{align*} \qquad\qquad&\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}} \piRedDecCostPad{\acttau}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}}} \end{align*} Since the amortisation credit can never be negative, the matching move can only be of the form $$\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q'}}$$ for some $Q'\in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$. But then we get a contradiction since, from the previous clause, we know that $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{0}\; \sys{\sVV}{Q'}$. 
\item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \piCostBisAmm{{1}}\; \sys{\sVV}{Q}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ and consider the transition \begin{align*} \qquad\qquad&\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}}} \piRedDecCostPad{\acttau}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_2}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}} \end{align*} for some newly allocated channel $c_2$. As in the previous case, since the amortisation credit can never be negative, the matching move can only be of the form $$\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q'}}$$ for some $Q'\in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$. But then we get a contradiction since, from the previous clause, we know that $\;\ensuremath{\env_\text{ext}}\xspace \vDash \sys{(\sV,c_2)}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{0}\; \sys{\sVV}{Q'}$. \item Analogous to the previous case. \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \piCostBisAmm{{0}}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$ and consider the transition \begin{align*} \qquad\qquad&\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}}} \piRedDecCostPad{\acttau}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_2}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}} \end{align*} Since the transition incurred a cost of $+1$ and the current amortisation credit is $0$, the matching weak transition must also incur a cost of $+1$ and thus $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}}$ can only match this by the move \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\acttau}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV,c'_2}{Q}} \end{equation*} for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$. But then we still get a contradiction since, from clause $(2)$, we know $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{0}\; \sys{\sVV}{Q}$.\\ \end{enumerate} \item[$n=k+1$] We prove each clause by contradiction. However before we tackle each individual clause, we note that from clauses $(3)$, $(4)$ and $(5)$ of the I.H. 
we know \begin{align*} & \text{For any }Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace \text{ we have }\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{k+1}}\; \sys{\sVV}{Q}\\ & \text{For any } Q \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace \text{ we have }\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{k}}\; \sys{\sVV}{Q} \\ & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{k}}\; \sys{\sVV}{\pFfff{c_1}{v}\piParal \pBEee{c_1}} \end{align*} By Lemma~$\ref{lem:Neg:implications}.1$ we obtain, for any $Q'\in\ensuremath{\text{Prc}_\text{\rm C}}\xspace$ and appropriate $\sVV'$: \begin{align*} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFff{c_1} \piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{k}\; \sys{\sVV'}{Q'} \end{align*} and by Lemma~$\ref{lem:Neg:implications}.2(a)$, Lemma~$\ref{lem:Neg:implications}.2(b)$, Lemma~$\ref{lem:Neg:implications}.2(c)$ and Lemma~$\ref{lem:Neg:implications}.2(d)$ we obtain, for any $Q'\in\ensuremath{\text{Prc}_\text{\rm C}}\xspace$ and appropriate $\sVV'$: \begin{align}\label{eq:20cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}} \; \not\!\piCostBisAmm{k}\; \sys{\sVV}{Q'} \end{align} Also, by \eqref{eq:20cs}, Lemma~$\ref{lem:Neg:implications}.3(a)$ and Lemma~$\ref{lem:Neg:implications}.3(b)$ we obtain, for any $Q''\in \ensuremath{\text{Prc}_\text{\rm B}}\xspace$: \begin{align} \label{eq:16cs} &\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{k}\; \sys{\sVV}{Q''}\\ \label{eq:17cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{k}\;\sys{\sVV}{Q''} \end{align} Moreover, by \eqref{eq:20cs}, Lemma~$\ref{lem:Neg:implications}.4(a)$, Lemma~$\ref{lem:Neg:implications}.4(b)$ and Lemma~$\ref{lem:Neg:implications}.4(c)$ we obtain: \begin{align} \label{eq:18cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{k}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}} \end{align} The proofs for each clause are as follows: \begin{enumerate} \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}} \; \piCostBisAmm{{k+1}}\; \sys{\sVV}{Q}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ and consider the transition \begin{equation*} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}}} \piRedDecCostPad{\actout{\ctit{out}}{v}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}}} \end{equation*} For any $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$, this can (only) be matched by any move of the form \begin{align*} \qquad\qquad & \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\actout{\ctit{out}}{v}}{-1} 
\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV'}{Q'}} \end{align*} where $\sVV=\bigl(\sVV',c'_1\bigr)$ for some $c'_1$, $Q'\in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$, and the external action \actout{\ctit{out}}{v} is preceded by a $\tau$-move deallocating $c'_1$. For our initial assumption to hold we need to show that \emph{at least one} of these configurations $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV'}{Q'}}$ satisfies the property $$\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c_2}} \; \piCostBisAmm{k}\; \sys{\sVV'}{Q'}. $$ But by \eqref{eq:20cs} we know that no such configuration exists, thereby contradicting our initial assumption. \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \piCostBisAmm{{k+1}}\; \sys{\sVV}{Q}$ for some $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$ and consider the transition \begin{align*} \qquad\qquad&\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}} \piRedDecCostPad{\acttau}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}}} \end{align*} This transition can be matched by \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} through either of the following moves: \begin{enumerate} \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q'}}$ for some $Q'\in\ensuremath{\text{Prc}_\text{\rm A}}\xspace$. However, from the previous clause, \ie clause $(1)$ when $n=k+1$, we know that this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{{k+1}}\; \sys{\sVV}{Q'}$. \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\phantom{\acttau}}{-1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV'}{Q'}}$ for some $Q'\in\ensuremath{\text{Prc}_\text{\rm B}}\xspace$ and $\sVV=\bigl(\sVV',c'_1\bigr)$. However, from \eqref{eq:16cs}, we know that this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \pBbbb{v}{c_2}} \; \not\!\piCostBisAmm{k}\; \sys{\sVV'}{Q'}$. \end{enumerate} Thus, we obtain a contradiction. \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \piCostBisAmm{{k+2}}\; \sys{\sVV}{Q}$, where $Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace$, and consider the transition: \begin{align*} \qquad\qquad&\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}}} \piRedDecCostPad{\acttau}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_2}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}} \end{align*} for some newly allocated channel $c_2$. 
This can be matched by \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} through either of the following moves: \begin{enumerate} \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q'}}$ for some $Q'\in\ensuremath{\text{Prc}_\text{\rm A}}\xspace$. However, from the previous clause, \ie clause $(2)$ when $n=k+1$, we know that this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV,c_2}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{k+1}}\; \sys{\sVV}{Q'}$. \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{Q}} \piRedWDecCostPad{\phantom{\acttau}}{-1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV'}{Q'}}$ for some $Q'\in\ensuremath{\text{Prc}_\text{\rm B}}\xspace$ and $\sVV=\bigl(\sVV',c'_1\bigr)$. However, from \eqref{eq:17cs}, we know that this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{k}\;\sys{\sVV'}{Q'}$. \end{enumerate} Thus, we obtain a contradiction. \item Analogous to the proof for the previous clause and relies on \eqref{eq:17cs} again. \item Assume $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \piCostBisAmm{{k+1}}\; \sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$ and consider the transition \begin{align*} \qquad\qquad&\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}}} \piRedDecCostPad{\acttau}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sV,c_2}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}}} \end{align*} for some newly allocated channel $c_2$. This can be matched by the right-hand configuration \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} through either of the following moves: \begin{enumerate} \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\phantom{\acttau}}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}}$, \ie no transitions. However, from \eqref{eq:18cs}, this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{k}\;\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}$. \item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\acttau}{+1} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV,c'_2}{Q'}}$ for some $Q'\in\ensuremath{\text{Prc}_\text{\rm A}}\xspace$ and $c'_2 \not\in \sVV$. However, from clause $(2)$ when $n=k+1$, this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV,c_2}{\ensuremath{\text{\rm Frn}}\xspace\piParal \piOutA{b}{c_2} \piParal \piOutA{c_1}{(v,c_2)}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{k+1}}\; \sys{\sVV,c'_2}{Q'}$. 
\item $\conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\sVV}{\pFfff{c'_1}{v}\piParal \pBEee{c'_1}}} \piRedWDecCostPad{\acttau}{0} \conf{\ensuremath{\env_\text{ext}}\xspace}{\sys{\bigl(\sVV',c'_2\bigr)}{Q'}}$ for some $Q'\in\ensuremath{\text{Prc}_\text{\rm B}}\xspace$, $\sVV = \bigl(\sVV',c'_1\bigr)$ and $c'_2 \not\in \sVV$. However, from \eqref{eq:17cs}, this cannot be the matching move since $\ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\ensuremath{\text{\rm Frn}}\xspace \piParalL \piOutA{b}{c_2} \piParalL \piOutA{c_1}{(v,c_2)}\piParalL \pBbb{c_1}} \; \not\!\piCostBisAmm{k}\;\sys{\bigl(\sVV',c'_2\bigr)}{Q'}$. \qedhere \end{enumerate} \end{enumerate} \end{description} \end{proof} \begin{thm}[Strict Inefficiency]\label{thm:strictly-ineffiscient} For all $ n \geq 0 \text{ and appropriate }\sV\text{ we have }$ $$\ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{\ensuremath{\text{\rm Buff}}\xspace}\;\not\!\piCostBisAmm{n}\; \sysS{\ensuremath{\text{\rm eBuff}}\xspace}$$ \end{thm} \begin{proof} Since: \begin{align*} \ensuremath{\text{\rm Buff}}\xspace & \deftxt \pFff{c_1} \piParal \pBbb{c_1} & \ensuremath{\text{\rm eBuff}}\xspace & \deftxt \pFff{c_1} \piParal \pBEee{c_1} \end{align*} we need to show that $$\ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{\pFff{c_1} \piParal \pBbb{c_1}}\;\not\!\piCostBisAmm{n}\; \sysS{\pFff{c_1} \piParal \pBEee{c_1}}$$ for any arbitrary $n$. By Lemma~$\ref{lem:negative-induction}.3$, Lemma~$\ref{lem:negative-induction}.4$ and Lemma~$\ref{lem:negative-induction}.5$ we know that for any $n$: \begin{align} \label{eq:21cs} & \text{For any } Q \in \ensuremath{\text{Prc}_\text{\rm A}}\xspace \text{ we have } \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{n+1}}\; \sys{\sV}{Q}\\ \label{eq:22cs} & \text{For any }Q \in \ensuremath{\text{Prc}_\text{\rm B}}\xspace \text{ we have } \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{n}}\; \sys{\sV}{Q}\\ \label{eq:23cs} & \ensuremath{\env_\text{ext}}\xspace \vDash \sys{\sV}{\pFfff{c_1}{v}\piParal \pBbb{c_1}} \; \not\!\piCostBisAmm{{n}}\; \sys{\sV}{\pFfff{c_1}{v}\piParal \pBEee{c_1}} \end{align} Since $\bigl(\pFff{c_1} \piParal \pBEee{c_1}\bigr) \in \ensuremath{\text{Prc}_\text{\rm C}}\xspace$, by Lemma~$\ref{lem:Neg:implications}.1$, \eqref{eq:21cs},~\eqref{eq:22cs} and \eqref{eq:23cs} we conclude \begin{displaymath} \ensuremath{\env_\text{ext}}\xspace \;\vDash\; \sysS{\pFff{c_1} \piParal \pBbb{c_1}}\;\not\!\piCostBisAmm{n}\; \sysS{\pFff{c_1} \piParal \pBEee{c_1}} \end{displaymath} as required. \end{proof} \subsection{Proving Relative Efficiency} \label{sec:prov-relat-effc} \newcommand{\ensuremath{\env_\text{Frn}}}{\ensuremath{\env_\text{Frn}}} As opposed to Theorem~\ref{thm:strictly-ineffiscient}, the proof for \eqref{eq:25:cs} requires us to consider the entire state-space of \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm Buff}}\xspace}} and \conf{\ensuremath{\env_\text{ext}}\xspace}{\sysS{\ensuremath{\text{\rm eBuff}}\xspace}}. Fortunately, we can apply the compositionality result of Theorem~\ref{thm:costed-bisim-compositionality} to prove \eqref{eq:11cs} and focus on a subset of this state-space. 
More precisely, we recall from \eqref{eq:24cs} that \begin{align*} \ensuremath{\text{\rm Buff}}\xspace & \deftxt \pFff{c_1} \piParal \pBbb{c_1} & \ensuremath{\text{\rm eBuff}}\xspace & \deftxt \pFff{c_1} \piParal \pBEee{c_1} \end{align*} where both buffer implementations share the common sub-process $\pFff{c_1}$. We also recall from \eqref{eq:9cs} that this common sub-process was typed \wrt the type environment $$\ensuremath{\env_\text{Frn}} = \emap{\ctit{in}}{\chantypW{\tV}},\, \emap{b}{\chantypW{\ensuremath{\tV_\text{\rm rec}}\xspace}}, \,\emap{c_1}{\chantypO{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}}.$$ Theorem~\ref{thm:costed-bisim-compositionality} thus states that in order to prove \eqref{eq:11cs}, it suffices to abstract away from this common code and prove Theorem~\ref{thm:relat-eff}.
\begin{thm}[Relative Efficiency]\label{thm:relat-eff} $\bigl(\ensuremath{\env_\text{ext}}\xspace,\ensuremath{\env_\text{Frn}}\bigr) \vDash \sysS{\pBEee{c_1}} \;\piCostBisAmm{0}\; \sysS{\pBbb{c_1}}$ \end{thm}
\begin{proof} We prove $\ensuremath{\env_\text{ext}}\xspace,\ensuremath{\env_\text{Frn}} \vDash \sysS{\pBEee{c_1}} \;\piCostBisAmm{0}\; \sysS{\pBbb{c_1}}$ through the family of relations \relR\ defined below, which includes the required quadruple $\langle (\ensuremath{\env_\text{ext}}\xspace,\ensuremath{\env_\text{Frn}}), 0, \bigl(\sysS{\pBEee{c_1}}\bigr), \bigl(\sysS{\pBbb{c_1}}\bigr)\rangle$.
\begin{equation*} \relR \deftxt\left\{ \begin{array}{@{\langle\,}l@{,\;}l@{,\;}l@{,\;}l@{\,\rangle\;}|@{\quad}l} \bigl(\env,\envv\bigr) & n & \bigl(\sys{\sV'}{\pBEee{c}}\bigr) & \bigl(\sys{\sVV'}{\pBbb{c}}\bigr) \\ \bigl(\env,\envv\bigr) & n & \bigl(\sys{\sV'}{\pBEeee{c}{v}{c'}}\bigr) & \bigl(\sys{\sVV'}{\pBbbb{v}{c'}}\bigr) & \bigl(\ensuremath{\env_\text{ext}}\xspace,\ensuremath{\env_\text{Frn}}\bigr) \ensuremath{\mathrel{\prec}} \env\\ \bigl(\env,\envv\bigr) & n & \bigl(\sys{\sV''}{\pBEeeee{v}{c'}}\bigr) & \bigl(\sys{\sVV'}{\pBbbb{v}{c'}}\bigr) & n \geq 0,\; \sV' \subseteq \sVV'\\ \bigl(\env,\envv\bigr) & n & \bigl(\sys{\sV''}{\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c'}}\bigr) & \bigl(\sys{\sVV'}{\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{c'}}\bigr) & c \not\in \sV'',\; \sV'' \subset \sVV'',\; c \in \sVV'' \\ \bigl(\env,\envv\bigr) & n & \bigl(\sys{\sV''}{\ensuremath{\text{\rm eBk'}}\xspace \piParal \piOutA{d}{c'}}\bigr) & \bigl(\sys{\sVV'}{\ensuremath{\text{\rm Bck'}}\xspace \piParal \piOutA{d}{c'}}\bigr) \end{array} \right\} \end{equation*}
Note that, in the quadruples of $\relR$, our observer environment is not limited to derived environments $\env$ obtained from restructurings of $\ensuremath{\env_\text{ext}}\xspace,\ensuremath{\env_\text{Frn}}$, but may also include additional entries, denoted by the environment \envv; these originate from observer channel allocations and uses through the transition rules \rtit{lAllE} and \rtit{lStr} from \figref{fig:LTS}. \relR\ observes the transfer property of Definition~\ref{def:amortized-typed-bisim}.
We here go over some key transitions: \begin{itemize} \item Consider a tuple from the first clause of the relation, for some $\env,\envv, n$ and $c$ \ie $$ \bigl(\env,\envv\bigr) \vDash \bigl(\sys{\sV'}{\pBEee{c}}\bigr) \;\relR^n\; \bigl(\sys{\sVV'}{\pBbb{c}}\bigr) $$ We recall from the macros introduced in Section~\ref{sec:proofs-relat-effic} that \begin{align*} \pBEee{c} &= \piIn{c}{(y,z)}{\piFree{c}{\piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{z}\bigr)}}}\\ \pBbb{c} &=\piIn{c}{(y,z)}{ \piOut{\ctit{out}}{y}{ \bigl(\ensuremath{\text{\rm Bck}}\xspace \piParal \piOutA{d}{z}\bigr)}} \end{align*} Whenever $\bigl(\env,\envv\bigr)$ allows it, the left-hand configuration can perform an input transition $$\conf{\bigl(\env,\envv\bigr)}{\sys{\sV'}{\pBEee{c}}} \piRedDecCostPad{\actin{c}{(v,c')}}{0} \conf{\bigl(\env',\envv'\bigr)}{\sys{\sV'}{\pBEeee{c}{v}{c'}}} $$ where $\env = \env',\envmap{c}{\chantypO{\tV,\ensuremath{\tV_\text{\rm rec}}\xspace}}$ and $\envv = \envv', \envmap{v}{\tV},\envmap{c'}{\ensuremath{\tV_\text{\rm rec}}\xspace}$. This can be matched by the transition $$\conf{\bigl(\env,\envv\bigr)}{\sys{\sVV'}{\pBbb{c}}} \piRedDecCostPad{\actin{c}{(v,c')}}{0} \conf{\bigl(\env',\envv'\bigr)}{\sys{\sVV'}{\pBbbb{v}{c'}}} $$ where we have $ \bigl(\env',\envv'\bigr) \vDash \bigl(\sys{\sV'}{\pBEeee{c}{v}{c'}}\bigr) \;\relR^n\; \bigl(\sys{\sVV'}{\pBbbb{v}{c'}}\bigr) $ from the second clause of \relR. The matching move for an input action from the right-hand configuration is dual to this. Matching moves for \actenv, \actall\ and \actfree{c} actions are analogous.
\item Consider a tuple from the second clause of the relation, for some $\env,\envv, n, c, v$ and $c'$ \ie $$ \bigl(\env,\envv\bigr) \vDash \bigl(\sys{\sV'}{\pBEeee{c}{v}{c'}}\bigr) \;\relR^n\; \bigl(\sys{\sVV'}{\pBbbb{v}{c'}}\bigr) $$ Since $\pBEeee{c}{v}{c'}= \piFree{c}{\piOut{\ctit{out}}{v}{ \bigl(\ensuremath{\text{\rm eBk}}\xspace \piParal \piOutA{d}{c'}\bigr)}}$, a possible transition by the left-hand configuration is the deallocation of channel $c$: $$\conf{\bigl(\env,\envv\bigr)}{\sys{\sV'}{\pBEeee{c}{v}{c'}}} \piRedDecCostPad{\acttau}{-1} \conf{\bigl(\env,\envv\bigr)}{\sys{\sV''}{\pBEeeee{v}{c'}}} $$ where $\sV'=\sV'',c$. In this case, the matching move is the empty (weak) transition, since we have $ \bigl(\env,\envv\bigr) \vDash \bigl(\sys{\sV''}{\pBEeeee{v}{c'}}\bigr) \;\relR^{n+1}\; \bigl(\sys{\sVV'}{\pBbbb{v}{c'}}\bigr) $ by the third clause of \relR. Dually, if $\bigl(\env,\envv\bigr)$ allows it, the right-hand configuration may perform an output action $$\conf{\bigl(\env,\envv\bigr)}{\sys{\sVV'}{\pBbbb{v}{c'}}} \piRedDecCostPad{\actout{\ctit{out}}{v}}{0} \conf{\bigl(\env,\envv,\envmap{v}{\tV}\bigr)}{\sys{\sVV'}{\ensuremath{\text{\rm Bck}}\xspace\piParal\piOutA{d}{c'}}}$$ This can be matched by the weak output action $$\conf{\bigl(\env,\envv\bigr)}{\sys{\sV'}{\pBEeee{c}{v}{c'}}} \piRedWDecCostPad{\actout{\ctit{out}}{v}}{-1} \conf{\bigl(\env,\envv,\envmap{v}{\tV}\bigr)}{\sys{\sV''}{\ensuremath{\text{\rm eBk}}\xspace\piParal\piOutA{d}{c'}}}$$ where $\sV'=\sV'',c$; by the fourth clause of \relR, we know that this is a matching move because $ \bigl(\env,\envv,\envmap{v}{\tV}\bigr) \vDash \bigl(\sys{\sV''}{\ensuremath{\text{\rm eBk}}\xspace\piParal\piOutA{d}{c'}}\bigr) \;\relR^{n+1}\; \bigl(\sys{\sVV'}{\ensuremath{\text{\rm Bck}}\xspace\piParal\piOutA{d}{c'}}\bigr)$.
\qedhere \end{itemize} \end{proof} \section{Related Work} \label{sec:RelatedWork} \emph{A note on terminology:} From a logical perspective, a \emph{linear} assumption is one that cannot be weakened nor contracted, while an \emph{affine} assumption cannot be contracted but can be weakened. This leads to a reading of linear as ``used exactly once'' and of affine as ``used at most once''. However, in the presence of divergence or deadlock, most linear type systems do not in fact guarantee that a linear resource will be used exactly once. In the discussion below, we will classify such type systems as affine instead. Linear logic was introduced by Girard \cite{girard:linearlogic}; its use as a type system was pioneered by Wadler \cite{wadler:use}. Uniqueness typing was introduced by Barendsen and Smetsers \cite{barendsen:functional}; the relation to linear logic has since been discussed in a number of papers (see \cite{hage:usageanalysis}). Although there are many substructural (linear or affine) type systems for process calculi \cite[and others]{Acciai:typeabstraction,Acciai:responsiveness,Amadio:receptive,Igarashi:generic,Kobayashi:hybrid,Nobuko:dependent}, some specifically for resources \cite{Kobayashi:resourceusage}, the literature on \emph{behaviour} of processes typed under such type systems is much smaller. Kobayashi \emph{et al.} \cite{KobayashiPT:linearity} introduce an affine type system for the \pic. Their channels have a polarity (input, output, or input/output) as well as a multiplicity (unrestricted or affine), and an affine input/output can be split as an affine input and an affine output channel. Communication on an affine input/affine output channel is necessarily deterministic, like communication on an affine/unique-after-1 channel in our calculus; however, both processes lose the right to use the channel after the communication, limiting reuse. Although the paper gives a definition of reduction closed barbed congruence, no compositional proof methods are presented. Yoshida \emph{et al} \cite{Yoshida07:linearity,Honda:processlogic} define a linear type system, which uses ``action types'' to rule out deadlock. The use of action types means that the type system can provide some guarantees that we cannot; this is however an orthogonal aspect of the type system and it would be interesting to see if similar techniques can be applied in our setting. Their type system does not have any type that corresponds to uniqueness; instead, the calculus is based on $\pi$I to control dynamic sharing of names syntactically, thereby limiting channel reuse. The authors give compositional proof techniques for their behavioural equivalence, but give no complete characterization. Teller \cite{teller:resourcespi} introduces a \pic variant with ``finalizers'', processes that run when a resource has been deallocated. The deallocation itself however is performed by a garbage collector. The calculus comes with a type system that provides bounds on the resources that are used, although the scope of channel reuse is limited in the absence of some sort of uniqueness information. Although the paper defines a bisimulation relation, this relation does not take advantage of type information, and no compositionality results or characterization is given. Hoare and O'Hearn \cite{Hoare:seplogicCSP} give a trace semantics for a variant of CSP with point-to-point communication and explicit allocation and deallocation of channels, which relies on separation of permissions. 
However, they do not consider any behavioural theories. Pym and Tofts \cite{Pym:calculus} similarly give a semantics for SCCS with a generic notion of resource, based on separation of permissions; they do however consider behaviour. They define a bisimulation relation, and show that it can be characterized by a modal logic. These approaches do not use a type system but opt for an operational interpretation of permissions, where actions may block due to lack of permissions. Nevertheless, our consistency requirements for configurations (\defref{def:configuration}) can be seen as separation criteria for permission environments. A detailed comparison between this untyped approach and our typed approach would be worthwhile. Apart from the Clean programming language \cite{journals/mscs/BarendsenS96}, from which uniqueness types originated, static analysis relating to uniqueness has recently been applied to (more mainstream) Object-Ori\-en\-ted programming languages \cite{Gordon:unique:12} as well. In such cases, it would be interesting to investigate whether the techniques developed in this work can be applied to a behavioural setting such as that in \cite{JeffreyR:JavaJr:05}. Our unique-after-$i$ type is related to fractional permissions, introduced in \cite{boyland:03fractions} and used in settings such as separation logic for shared-state concurrency \cite{Bornat:05Separation}. A detailed survey of this field is however beyond the scope of this paper. The use of substitutions in our LTS (\defref{def:renaming}) is reminiscent of the name-bijections carried around in spi-calculus bisimulations \cite{boreale:crypto}. In the spi-calculus, however, this substitution is carried through the bisimulation, and must remain a bijection throughout. Since processes may lose the permission to use channels in our calculus, this approach is too restrictive for us. Finally, amortisation for coinductive reasoning was originally developed by Kiehn \etal\ \cite{Kiehn05} and L\"uttgen \etal \cite{LuttgenV06}. It is investigated further by Hennessy in \cite{hennessy:buysell}, whereby a correspondence with (an adaptation of) reduction-barbed congruences is established. However, neither work considers aspects of resource misuse nor the corresponding use of typed analysis in their behavioural and coinductive equivalences.
\section{Conclusion} \label{sec:conclusion} We have presented a compositional behavioural theory for \picr, a \pic variant with mechanisms for explicit resource management; a preliminary version of the work appeared in \cite{DevFraHen09}. The theory allows us to compare the efficiency of concurrent channel-passing programs \wrt their resource usage. We integrate the theory with a substructural type system so as to limit our comparisons to safe programs. In particular, we interpret the type assertions of the type system as permissions, and use this to model (explicit and implicit) permission transfer between the systems being compared and the observer during compositional reasoning. Our contributions are as follows: \begin{enumerate} \item We define a costed semantic theory that orders systems of safe \picr programs, based on their costed extensional behaviour when deployed in the context of larger systems; Definition~\ref{def:cost-preorder}. Apart from cost, formulations relating to contextuality are different from those of typed congruences such as \cite{hennessy04behavioural}, because of the kind of type system used, \ie substructural.
\item We define a bisimulation-based proof technique that allows us to order \picr programs coinductively, without the need to universally quantify over the possible contexts that these programs may be deployed in; Definition~\ref{def:amortized-typed-bisim}. As far as we are aware, the combination of actions-in-context and costed semantics, used in unison with implicit and explicit transfer of permissions so as to limit the efficiency analysis to safe programs, is new. \item We prove a number of properties for our bisimulation preorder of Definition~\ref{def:amortized-typed-bisim}, facilitating the proof constructions for related programs. Whereas Corollary~\ref{cor:preorder} follows \cite{Kiehn05,hennessy:buysell}, Theorem~\ref{thm:costed-bisim-compositionality} extends the property of compositionality for amortised bisimulations to a typed setting. Lemma~\ref{lem:symmetry-bound}, together with the concept of bounded amortisation, appears to be novel altogether. \item We prove that the bisimulation preorder of Definition~\ref{def:amortized-typed-bisim} is a sound and complete proof technique for the costed behavioural preorder of Definition~\ref{def:cost-preorder}; Theorem~\ref{thm:soundness} and Theorem~\ref{thm:completeness}. In order to obtain completeness, the LTS definitions employ non-standard mechanisms for explicit renaming of channel names not known to the context. Also, the concept of (typed) action definability \cite{hennessy04behavioural,Hennessy07} is different because it needs to take into consideration cost and typeability \wrt a substructural type system; the latter aspect also complicated the respective Extrusion Lemma --- see Lemma~\ref{lem:extrusion}. \item We demonstrate the utility of the semantic theory and its respective proof technique by applying them to reason about the client-server systems outlined in the Introduction and a case study, discussed in Section~\ref{sec:case-study}. \end{enumerate} \subsection*{Future Work} \label{sec:future-work} The extension of our framework to a higher-order and distributed setting seems worthwhile. Also, the amalgamation of our uniqueness types with modalities for input and output \cite{PierceS96} would give scope for richer notions of subtyping involving covariance and contravariance, affecting the respective behavioural theory; it would be interesting to explore how our notions of permission transfer extend to such a setting. It is also worth pursuing the applicability of the techniques developed in this work to nominal automata such as Variable Automata \cite{LATA10} and Finite-Memory Automata \cite{Kaminski1994329}. \end{document}
\begin{document} \title{Existence of k-ary Trees: Subtree Sizes, Heights and Depths} \begin{abstract} The rooted tree is an important data structure, and the subtree size, height, and depth are naturally defined attributes of every node. We consider the problem of the existence of a k-ary tree given a list of attribute sequences. We give polynomial time ($O(n\log(n))$) algorithms for the existence of a k-ary tree given depth and/or height sequences. Our most significant results are the Strong NP-Completeness of the decision problems of existence of k-ary trees given subtree size sequences. We prove this by multi-stage reductions from \textsc{Numerical Matching with Target Sums}. In the process, we also prove a generalized version of the \textsc{3-Partition} problem to be Strongly NP-Complete. By looking at problems where a combination of attribute sequences are given, we are able to draw the boundary between easy and hard problems related to existence of trees given attribute sequences and enhance our understanding of where the difficulty lies in such problems. \end{abstract} \section{Introduction} Rooted trees are important data structures that are encountered extensively in Computer Science, especially in the form of self-balancing binary trees. Attributes of nodes, like its subtree size, height, and depth are invariant under isomorphism of rooted trees and are ubiquitous in the study of data structures and algorithms. Heights are used in self-balancing trees (AVL~\cite{AVL} or Red-Black~\cite{Red-Black}), depths in analyzing complexity of computation trees, recursion trees and decision trees and subtree sizes for finding order statistics in dynamic data sets~\cite{CLRS}. The subtree size, height and depth of every node in a rooted tree can be computed in time linear in the number of nodes. In this paper, we discuss the computational complexity of the converse problem -- the existence (realization) of a rooted k-ary tree given some of these attribute sequences. The problems that we address are similar in flavor to those studied in Aigner and Triesch~\cite{graph-invariants} who discuss realizability and uniqueness of graphs given invariants. The most famous of such problems is the Erdos-Gallai graph realization problem~\cite{Erdos-Gallai} (a variant also addresses the realization problem for trees) which asks whether a given set of natural numbers occur as the degree sequence of some graph; polynomial time algorithms are known for this problem~\cite{Havel,Hakimi}. Our problem can be considered a part of the category of well researched problems of reconstruction of a combinatorial structure from some form of partial information. Apart from the Erdos-Gallai theorem we already mentioned, a lot of work has been done on reconstruction of graphs from subgraphs~\cite{Ulam, Kelly, Harary1, Harary2, Harary3, Nash-Williams, Lovasz, ManvelGraphs}. Beyond graphs, the problem of reconstruction of combinatorial structures like matrices~\cite{ManvelMatrices} and trees~\cite{ManvelTrees} have also been studied. More recent work has been done on reconstruction of sequences~\cite{Dudik-Schulman} and on reconstruction of strings from substrings~\cite{Acharya}. Bartha and Bursci~\cite{bartha-bursci} have addressed the problem of reconstruction of trees using frequencies of subtree sizes. While their paper focuses on the reconstruction of unrooted trees given subtree sizes, we look at the existence of rooted trees with given attribute sequences. 
\sloppy Given a rooted tree \text{T}{}, information \text{I}(\text{T}) about the attributes of the tree \text{T}{} can be constructed using various combination of attribute sequences. Given some such information \text{I}, we look at the existence problem \text{E}(\text{I}), which asks whether there is a k-ary tree \text{T}{} such that $\text{I}=\text{I}(\text{T})$. We use the letters \text{S}{}, \text{H}{}, and \text{D}{} to refer to subtree size, height and depth attributes respectively. These attributes can be used individually or in combination as in these examples: \begin{example}\label{subtree} We use the notation like $\text{E}(\text{I}_\text{S})$ for the existence problem given only, say, the subtree sizes sequence\footnote{We use the term sequence (borrowed from the terminology in the Erdos-Gallai theorem) throughout the paper since these attributes are generally computed and used in either non-decreasing or non-increasing order. It should be noted that when such an order is not imposed, these are multisets. Nonetheless, we maintain the use of the term sequence for consistency.}. For example: Given a sequence of subtree sizes, $\text{I}_\text{S}=\{1,2,3,1,1,3,7,1,9\}$, does there exist a tree T such that $\text{I}_\text{S}=\text{I}_\text{S}(T)$?\end{example} \begin{example}\label{setOfTuples} We use the notation like $\text{E}(\text{I}_{\text{S},\text{D}})$ to refer to the existence problem given, say, synchronized subtree sizes and depths (all attributes of a node are associated with each other as tuples). For example: Given synchronized information of (subtree size, depth), $\text{I}_{\text{S},\text{D}}=\{(1,5),(2,4),(3,2),(1,3),(1,3),(3,2),(7,1),(1,1),(9,0)\}$, does there exist a tree T such that $\text{I}_{\text{S},\text{D}}=\text{I}_{\text{S},\text{D}}(T)$?\end{example} \begin{example}\label{tupleOfSets} We use the notation like $\text{E}(\text{I}_\text{S},\text{I}_\text{D})$ when there is no synchronization and just two (or more) sequences are given. For example: Given asynchronized information list of subtree sizes and depths, $\text{I}_\text{S}=\{1,2,3,1,1,3,7,1,9\}, \text{I}_\text{D}=\{5,4,2,3,3,2,1,1,0\}$, does there exist a tree T such that $\text{I}_\text{S}=\text{I}_\text{S}(T)$ and $\text{I}_\text{D}=\text{I}_\text{D}(T)$?\end{example} Problems containing only height and/or depth sequences are shown to have $O(n\log(n))$ algorithms for deciding the existence of k-ary trees. Our most significant results are the proof that all existence problems containing subtree size sequences are Strongly NP-Complete for k-ary trees. We prove the Strong NP-Completeness using reductions from the \textsc{Numerical Matching with Target Sums} problem. The reduction is performed in multiple stages during which we also prove a generalized version of the \textsc{3-partition} problem to be Strongly NP-Complete. We then proceed to provide slightly modified yet similar existence problems (for example, existence of certain sub-classes of trees) which are polynomially solvable. We attempt to draw the boundaries separating the NP-Complete problems from the easy problems, focusing on how changing the attribute, adding restrictions or providing more information change the computational complexity. Section \ref{sect:def} contains the basic definitions, notation and conventions. The most important results are presented in Section \ref{sect:proofs}, which contains the proofs for the Strong NP-Completeness of problems related to subtree sizes. 
Section \ref{sect:sssk-positive} continues further discussion on subtree sizes and contains algorithms for some sub-classes of trees for which the problem can be solved in polynomial time. This is followed by Section \ref{sect:HDstuff}, detailing the analyses of the height and depth sequences. Section \ref{sect:combined} contains details of sequences given in combination and some discussion about the difficulty of these problems. Section \ref{sect:conclusion} has concluding remarks and possible directions for future work.
\section{Preliminaries}\label{sect:def}
\begin{definition}[k-ary Tree]\label{def:kary} A rooted tree in which every node can have at most $K$ children.\end{definition} \begin{definition}[Subtree size]\label{def:subtreeSize} The number of nodes in the subtree rooted at a node (including itself) is known as the subtree size of that node. The subtree size of a node can also be defined recursively as being one greater than the sum of the subtree sizes of its children.\end{definition} \begin{definition}[Height of a node]\label{def:height} The height of a leaf node is zero. The height of every other node is one more than the maximum of the heights of its children.\end{definition} \begin{definition}[Depth of a node]\label{def:depth} The depth of a node is the number of edges in the path from that node to the root.\end{definition} \begin{definition}[Levels in a tree]\label{def:level} The set of nodes at a particular depth forms the level at that depth.\end{definition} \begin{definition}[Complete Trees]\label{def:complete} $K$-ary trees in which every level except possibly the last is filled, and the nodes in the last level are filled from the left. Given the number of nodes $n$, this is a unique tree and is represented as $T^c(n)$.\end{definition} \begin{definition}[Full Trees]\label{def:full} $K$-ary trees in which every node has exactly $K$ or $0$ children.\end{definition} The remaining definitions are due to Garey and Johnson~\cite{garyjohn, strongNP}. \begin{definition}[\Max{I}] \Max{I} is the magnitude of the maximum number present in an instance $I$ of any decision problem. \end{definition} \begin{definition}[\Length{I}] \Length{I} is the number of symbols required to represent an instance $I$ of any decision problem. \end{definition} \begin{definition}[Strong NP-Completeness]\label{def:snpc} For a decision problem $\Pi$, we define $\Pi_p$ to denote the subproblem of $\Pi$ obtained by restricting $\Pi$ to only those instances that satisfy $\Max{I} \leq p(\Length{I})$, where $p$ is a polynomial function. The problem $\Pi$ is said to be NP-Complete in the strong sense or Strongly NP-Complete (SNPC) if $\Pi$ belongs to NP and there exists $p$ for which $\Pi_p$ is NP-complete. \end{definition} \begin{definition}[Pseudo-polynomial transformation]\label{def:pseudo} A pseudo-polynomial transformation from a source problem to a target problem is a transformation \textit{f}{} from any instance of the source problem to an instance of the target problem satisfying the following conditions: \begin{itemize} \item $f$ should be computable in a time polynomial in \Max{I} and \Length{I}. \item $\Length{I} \leq p(\Length{\textit{f}(I)})$ for some polynomial $p$. \item $\Max{\textit{f}(I)} \leq p'(\Max{I}, \Length{I})$ for some polynomial $p'$. \end{itemize} \end{definition} \begin{myremark}\label{remark-for-proof} To prove problems to be Strongly \npc{}{}, one needs to have a pseudo-polynomial transformation{} from a problem already known to be Strongly \npc{}{}.
In practice this is the same as a polynomial transformation used to prove problems NP-Complete, with the added condition that the maximum integer in the constructed instance needs to be polynomially bounded in the maximum integer in, and the length of, the instance from which we are making the transformation. \end{myremark}
\section{Existence of k-ary Trees given Subtree Sizes Sequence: $\text{E}(\text{I}_{\text{S}})${}} Given a tree, finding the subtree sizes of all its nodes is a linear time problem. One might intuitively expect that the converse problem is also easy. After all, by definition, all that needs to be assured is that the sum of the children's subtree sizes is one less than the parent's subtree size. While intuitively this may seem so, the $\text{E}(\text{I}_{\text{S}})${} problem has been found to be difficult to solve for binary trees~\cite{manuByKS}. Top-down, bottom-up and dynamic programming approaches were tried but all yielded exponential time algorithms. This difficulty prompted a search for an NP-Completeness reduction. We prove the $\text{E}(\text{I}_{\text{S}})${} problem Strongly \npc{}{} in a series of reductions starting from the \textsc{Numerical Matching with Target Sums} (\textsc{NMTS}{}) problem (Strongly \npc{}{} by Theorem~\ref{theorem:nmts}) to the \textsc{Numerical Matching with Target Sums using K-sets} (\textsc{NMTS-K}{}) problem (in Section~\ref{sect:prove-nmtsk}) to the \textsc{K-Partition with Targets} (\textsc{K-PwT}{}) problem (in Section~\ref{sect:prove-kpwt}) and finally to the $\text{E}(\text{I}_{\text{S}})${} problem (in Section~\ref{sect:prove-sssk}).
\subsection{Proofs of Strong NP-Completeness}\label{sect:proofs} In this section we prove the Strong NP-Completeness of $\text{E}(\text{I}_{\text{S}})${} via a series of reductions. \begin{theorem}[Due to Garey and Johnson~\cite{garyjohn}]\label{theorem:nmts} The \textsc{NMTS}{} problem stated below is Strongly NP-Complete: Given disjoint sets $X$ and $Y$ each containing $m$ elements, a size function $s:~X\cup Y \mapsto \mathbb{Z^{+}}$, and a target vector $B=(b_1,\dots,b_m) \;\in \mathbb{N}^m$ with positive integer entries, can $X \cup Y$ be partitioned into $m$ disjoint sets $A_1,A_2,\dots,A_m$, each containing exactly one element from each of $X$ and $Y$, such that, $\sum_{a \in A_i}s(a) = b_i$, for $1 \le i \le m$? \end{theorem} \noindent \textsc{NMTS}{}~$(X, Y, s, B, m)$ refers to an instance of the \textsc{NMTS}{} problem characterized by the sets $X$ and $Y$, a size function $s$, the target vector $B$ and the cardinality of the target vector $m$.
\subsubsection{\textsc{NMTS-K}{} is Strongly \npc{}}\label{sect:prove-nmtsk} The \textsc{NMTS-K}{} problem is proved Strongly \npc{}{} by reduction from \textsc{NMTS}{}. \begin{problem}[\textsc{NMTS-K}{}]\label{prob:nmtsk} Given $K \geq 2$ disjoint sets $X_i$ each containing $m$ elements, a size function $s:~\bigcup X_i \mapsto \mathbb{Z^{+}}$, and a target vector $B=(b_1,\dots,b_m) \;\in \mathbb{N}^m$ with positive integer entries, can $\bigcup X_i$ be partitioned into $m$ disjoint sets $A_1,A_2,\dots,A_m$, each containing exactly one element from each of $X_i$, such that, $\sum_{a \in A_i}s(a) = b_i$, for $1 \le i \le m$? \end{problem} \noindent \textsc{NMTS-K}{}~$(K, X_i, s, B, m)$ is an instance of the \textsc{NMTS-K}{} problem characterized by the integer $K$, the $K$ sets $X_i$, a size function $s$, the target vector $B$ and the cardinality of the target vector $m$.
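As a side illustration of why \textsc{NMTS-K}{} is in NP (the observation that opens the proof below), the following minimal Python sketch verifies a candidate partition in polynomial time. The encoding is our own assumption: the sets $X_i$ are Python sets, the size function $s$ is a dictionary, and the candidate partition is a list of $m$ collections.
\begin{verbatim}
def verify_nmtsk(partition, X_sets, s, B):
    # `partition` is a candidate list of m sets A_1..A_m, `X_sets` the K
    # disjoint sets X_1..X_K, `s` a dict of sizes and `B` the list of targets.
    universe = set().union(*X_sets)
    if len(partition) != len(B):
        return False
    covered = set()
    for A, b in zip(partition, B):
        A = set(A)
        if not A <= universe:
            return False
        # exactly one element from each X_i ...
        if any(len(A & X) != 1 for X in X_sets):
            return False
        # ... and the sizes must add up to the target b_i
        if sum(s[a] for a in A) != b:
            return False
        covered |= A
    # together the A_i must cover the whole union (hence use each element once)
    return covered == universe
\end{verbatim}
The check clearly runs in time polynomial in the size of the instance, which is all that membership in NP requires.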
\begin{proof} The \textsc{NMTS-K}{} problem is in NP since given a candidate partition $A_i$, we only need to verify that, $\sum_{a \in A_i}s(a) = b_i$, for $1 \le i \le m$. We now construct an instance \textsc{NMTS-K}{}$(K, X_i, s', B', m')$ of \textsc{NMTS-K}{} problem from an instance \textsc{NMTS}{}$(X, Y, s, B, m)$ of the \textsc{NMTS}{} problem using the following transformation for $K \geq 3$ since for $K=2$, the \textsc{NMTS-K}{} problem is the \textsc{NMTS}{} problem. Note that this is a polynomial transformation since computing Equations \ref{eq:nmtsk-xi}, \ref{eq:nmtsk-size} and \ref{eq:nmtsk-bi} can be done in polynomial time. \begin{gather} m' = m,\; X_1 = X,\; X_2 = Y\\ X_i \text{ are disjoint sets such that } |X_i| = m' \text{ for } 3 \leq i \leq K\label{eq:nmtsk-xi}\\ s'(x)=\begin{cases}s(x) & x \in X \cup Y\\ 1 & \text{otherwise}\\ \end{cases}\label{eq:nmtsk-size}\\ B'=(b'_1, b'_2, \dots, b'_m) \text{ where } b'_i=b_i+ K-2, \text{ } \forall b_i \in B\label{eq:nmtsk-bi} \end{gather} We now prove that a YES instance of the \textsc{NMTS-K}{} problem occurs iff a YES instance of \textsc{NMTS}{} occurs. Every partition for the \textsc{NMTS}{} problem is associated with a partition for the \textsc{NMTS-K}{} problem. We denote the elements of $X_i \text{ for } i \geq 3$ as $x_{ij}, 1 \leq j \leq m$ and let $A_i$ be the partition for the \textsc{NMTS}{} problem. The associated partition for the \textsc{NMTS-K}{} problem $A'_i$, is defined as follows: $A'_i=A_i \cup \{x_{ji}|3 \leq j \leq K\}$. This association immediately provides us with the equality: $\sum_{x \in A'_i} s'(x) = (\sum_{x \in A_i} s(x)) + (K-2)$ which we compare with the relation $b'_i=b_i+(K-2)$ from Eq.~\ref{eq:nmtsk-bi}. We get that $\sum_{x \in A'_i} s'(x)=b'_i$ and $\sum_{x \in A_i} s(x)=b_i$ either happen simultaneously or not at all. Thus, this association ensures that this is a valid transformation. The maximum number in the constructed instance is either the maximum size from $X$ and $Y$ or $K-2$ added to the maximum number from the target $B$; both of which are polynomially bounded in the maximum integer in, and the length of, the \textsc{NMTS}{} instance. This, along with Remark~\ref{remark-for-proof} proves that \textsc{NMTS-K}{} is Strongly \npc{}{}. \end{proof} \subsubsection{\textsc{K-PwT}{} is Strongly \npc{}{}}\label{sect:prove-kpwt} The \textsc{K-PwT}{} problem is Strongly \npc{}{} by reduction from the \textsc{NMTS-K}{} problem. This problem can be regarded as a generalization of the \textsc{3-Partition} problem where we are looking for a partition into K-sets and there are multiple targets to be reached instead of a single target. \begin{problem}[\textsc{K-PwT}{}] Given a set $X$ with $|X|=Km$, $K\geq 2$, a size function $s: X \mapsto \mathbb{Z^{+}}$ and a target vector $B=(b_1,\dots,b_m) \;\in \mathbb{N}^m$ with positive integer entries, can $X$ be partitioned into $m$ disjoint sets $A_1,A_2,\dots,A_m$, each containing exactly $K$ elements, such that, $\sum_{a \in A_i}s(a) = b_i$, for $1 \le i \le m$? \end{problem} \noindent \textsc{K-PwT}{}~$(K, X, s, B, m)$ is an instance of the \textsc{K-PwT}{} problem characterized by the set $X$, an integer $K$, a size function $s$, the target vector $B$ and the cardinality of the target vector $m$. \begin{proof} The \textsc{K-PwT}{} problem is in NP since given a particular candidate partition $A_i$, we only need to verify that, $\sum_{a \in A_i}s(a) = b_i$, for $1 \le i \le m$. 
We now construct an instance \textsc{K-PwT}{}~$(K', X, s', B', m')$ of the \textsc{K-PwT}{} problem from an instance \textsc{NMTS-K}{}~$(K, X_i, s, B, m)$ of the \textsc{NMTS-K}{} problem in the following manner. Note that $M$, defined below, is polynomially bounded by the maximum integer in the \textsc{NMTS-K}{} instance. \begin{gather} M =KmM' \;\text{ where } M'=\max\big(\{s(x_i) | x_i \in X_i\} \cup \{b_i|b_i \in B\}\big)\label{eq:kpwt-max}\\ K'=K, \text{ } X= \bigcup X_i\label{eq:kpwt-sets}\\ \text{For}\; 1 \leq i \leq K \text{ and } \forall x_j \in X_i: \; s'(x_j)=s(x_j)+M^i\label{eq:kpwt-size}\\ B'=(b'_1, b'_2, \dots, b'_m) \text{ where } b'_j=b_j+\sigma, \; \forall b_j \in B \; \text{ and } \sigma =\sum_{i=1}^K M^{i}\label{eq:kpwt-target} \end{gather} The transformation is polynomial since equations \ref{eq:kpwt-max} to \ref{eq:kpwt-target} are polynomial-time computable. Now we show that a YES instance of the \textsc{K-PwT}{} problem occurs iff a YES instance of \textsc{NMTS-K}{} occurs. For ease of exposition, for the rest of the proof, we write all the numbers in \textsc{K-PwT}{}~$(K', X, s', B', m')$ in base $M$. We make three remarks. The first: $\sigma$ is a $(K+1)$-digit number with a 1 in all its digits except the rightmost or $0^{th}$ digit (Eq.~\ref{eq:kpwt-target}). The second: every number $s'(x)$, for $x \in X_i$, has a 1 as its $i^{th}$ digit, $s(x)$ in its rightmost digit\footnote{Follows from Eq.~\ref{eq:kpwt-size} and $M$ being much greater in magnitude than any number in the \textsc{NMTS-K}{} instance.} and 0 elsewhere. The third: a partition $A_j$ for the \textsc{NMTS-K}{} instance ($\bigcup X_i$) is also a partition for the \textsc{K-PwT}{} instance ($X$), irrespective of whether either of them solves the respective problems or not. We first prove that if $A_j$ is a partition that solves the \textsc{NMTS-K}{} problem, then it also solves the \textsc{K-PwT}{} problem. Let $A_j$ be a partition that solves the \textsc{NMTS-K}{} problem. Using the same partition and Eqs.~\ref{eq:kpwt-size} and \ref{eq:kpwt-target}, we get that $\sum_{x \in A_j} s'(x)=\sum_{x \in A_j} s(x) + \sum_{i=1}^{K}M^i=\sum_{x \in A_j} s(x) + \sigma=b_j + \sigma=b'_j$. This proves that if $A_j$ solves the \textsc{NMTS-K}{} problem, then it also solves the \textsc{K-PwT}{} problem. To prove the converse, let $A_j$ be a partition that solves the \textsc{K-PwT}{} problem. We know that $\sum_{x \in A_j} s'(x) = b_j + \sigma$, which implies that $\sum_{x \in A_j} s(x) + \sum_i\sum_{x\in{}X_i\cap{}A_j}M^i = b_j + \sigma$ (from Eq.~\ref{eq:kpwt-size}). This in turn implies that $\sum_{x \in A_j} s(x) = b_j$ and $\sum_i\sum_{x\in{}X_i\cap{}A_j}M^i = \sigma = \sum_{i=1}^{K}M^i$ since $s(x)$ does not contribute to $\sigma$ (from the first two remarks and Eq.~\ref{eq:kpwt-target}). Given $\sum_i\sum_{x\in{}X_i\cap{}A_j}M^i = \sum_{i=1}^{K}M^i$, equating the coefficients of the powers of $M$, we get that $|X_i\cap{}A_j|=1 \text{, } \forall i,j$, which says that every set in the partition contains exactly one element from each of the sets $X_i$. We already know from the earlier equations that $\sum_{x \in A_j} s(x) = b_j$. Thus, the partition $A_j$ is a solution to the \textsc{NMTS-K}{} problem as well. The maximum integer in the \textsc{K-PwT}{} instance created by the transformation, $\sigma + \max(b_1,\dots,b_m)$, is bounded (from Eq. \ref{eq:kpwt-max}) by a polynomial in the maximum integer in, and the length of, the \textsc{NMTS-K}{} instance, which by Remark~\ref{remark-for-proof} makes this a pseudo-polynomial transformation{}.
Thus, \textsc{K-PwT}{} is Strongly \npc{}{}. \end{proof}
\subsubsection{$\text{E}(\text{I}_{\text{S}})${} is Strongly \npc{}{}}\label{sect:prove-sssk} We prove that the $\text{E}(\text{I}_{\text{S}})${} problem is Strongly \npc{}{} by reduction from the \textsc{K-PwT}{} problem. We use a subclass of the \textsc{K-PwT}{} problem, $\Pi_p$, such that $\Max{I} \leq p(\Length{I}),\; \forall I \in \Pi_p$. By Definition \ref{def:snpc}, there is a polynomial $p$ for which $\Pi_p$ is NP-Complete{}. The $\text{E}(\text{I}_{\text{S}})${} problem: \begin{problem}[$\text{E}(\text{I}_{\text{S}})${}] Given a sequence $\text{I}_\text{S}$, does there exist a $k$-ary tree $\text{T}$, such that $\text{I}_\text{S}=\text{I}_\text{S}(\text{T})$? \end{problem} The $\text{E}(\text{I}_{\text{S}})${} problem is characterized by the set $S$ and the integer $k$. We refer to such an instance as $\text{E}(\text{I}_{\text{S}})${}$(S, k)$.
\begin{proof} It is easy to see that the $\text{E}(\text{I}_{\text{S}})${} problem is in NP. Given $\text{T}$, one only needs to compare $\text{I}_\text{S}$ with $\text{I}_\text{S}(T)$ to see if the given tree realizes that sequence. We now construct an instance $\text{E}(\text{I}_{\text{S}})${}$(S, k)$ from an instance of the \textsc{K-PwT}{} problem, \textsc{K-PwT}{}~$(K, X, s, B, m)$, with $k=K$. We define a number $M$ which is a power of $K$, is much greater in magnitude than any of the other numbers in the problem, and is polynomially bounded by the maximum integer in the \textsc{K-PwT}{} instance\footnote{$K^{\lceil\log_{K}\alpha\rceil} \leq K^{1+\log_{K}\alpha} = K\alpha$, so it is polynomially bounded.} (Eqs.~\ref{eq:sssk-max} and \ref{eq:sssk-M}). We also define $m'$ and $m''$ such that $Km+m'$ and $m+m''$ are powers of $K$ (Eq.~\ref{eq:sssk-m'm''}). \begin{gather} M_1=\max\big(\{s(x_i) | x_i \in X\} \cup \{b_i|b_i \in B\}\big)\label{eq:sssk-max}\\ M=K^{\lceil\log_K M_2 \rceil}, \text{ where } M_2 =KmM_1\label{eq:sssk-M}\\ m'=K^d-Km,\; m''=K^{d-1}-m=m'/K \text{, where } d=\lceil\log_K(Km)\rceil\label{eq:sssk-m'm''} \end{gather} We make the sequence $S$ for the $\text{E}(\text{I}_{\text{S}})${} instance using four ``component'' sequences, namely the ``child component'' C, the ``parent component'' P, the ``grandparent component'' G and the ``descendant component'' D: \begin{gather} S=C \cup P \cup G \cup D \text{ where}\label{eq:sssk-S}\\ C=C' \cup C'',\; C'=\big\{s(x)+M \big|\, x \in X\big\},\; C''=\big\{\overbrace{M,\, \dots,\, M}^{m' \mathrm{times}}\big\}\label{eq:sssk-C}\\ P=P'\cup P'',\; P'=\big\{b_i+KM+1 \big|\, b_i \in B \big\},\; P''=\big\{\overbrace{KM+1,\, \dots,\, KM+1}^{m'' \mathrm{times}}\big\}\label{eq:sssk-P}\\ G=\bigcup_{i=0}^{d-2}l_i, \text{ where } l_i \text{ are ``levels'' defined later in the text.} \label{eq:sssk-G}\\ D=\bigcup_{i=1}^{K^d}D_i, \text{ where } D_i=\bigcup \text{I}_\text{S}(T^c(c_i)),\; \forall c_i\in C\label{eq:sssk-D} \end{gather} The ``child component'' $C$ is the union of the sets $C'$ and $C''$. $C'$ is in one-to-one correspondence with the set $X$, using the sizes of elements from $X$ with $M$ added to them. The set $C''$ is used to make the cardinality of the set $C$ a power of $K$ using elements of value $M$. The ``parent component'' $P$ is the union of the sets $P'$ and $P''$. $P'$ is in one-to-one correspondence with $B$ but has been modified to accommodate the changes made to sizes of elements of $X$ while making $C'$. $P''$ is used to make the cardinality of the set $P$ a power of $K$ using elements of value $KM+1$.
We construct the ``grandparent component'' in ``levels''. The lowest level $l_{d-2}$ is constructed from $P$, by arbitrarily taking blocks of $K$ elements, adding them all up and incrementing the result by one. Formally, we order the elements in $P$ arbitrarily as $P_1, P_2, \dots, P_{K^{d-1}}$ and then let $l_{d-2}=\{l_{d-2,i} \;|\; l_{d-2,i}=1+\sum_{j=1}^K P_{(i-1)K+j}, \; 1 \leq i \leq K^{d-2} \}$. Other levels $l_{d-i}$ are constructed in a similar manner from levels $l_{d-i+1}$. This is continued until $l_0$, which has only one element\footnote{The ``parent component'' was padded with elements until the number of elements became a power of $K$. Since at each level the number of elements gets reduced by exactly a factor of $K$, eventually exactly one element will remain.}. The element in $l_0$ would be the largest number in the final instance. The ``descendant component'' is constructed by using $T^c(c_i)$, the subtree sizes sequences of complete trees on each of the elements $c_i \in C$ (refer to Definition \ref{def:complete}). That is, for each such $c_i$, we make a complete k-ary tree on $c_i$ nodes and find its subtree size sequence $\text{I}_\text{S}(T^c(c_i))$. Let these sequences be labeled $D_i$. The descendant component is $D= \bigcup_{i}D_i=\bigcup_i \text{I}_\text{S}(T^c(c_i))$. This is a polynomial transformation since each element from the \textsc{K-PwT}{} instance is being used only once and each time a simple addition is done to get $P$ and $C$. There are a logarithmic number ($O(d)$) of levels $l_i \in G$, and each is computed in polynomial time. $D$ is made up of $\text{I}_\text{S}(T^c(c_i))$ for each $c_i \in C$, which has a polynomial number of elements, and computing this for each element can be done in polynomial time\footnote{We are reducing from $\Pi_p$, which ensures that the elements in $C$ are polynomially bounded by $\Max{I}\; \forall I \in \Pi_p$ and thus the subtree sizes sequences of complete trees on these numbers of nodes can be computed polynomially.}. Thus, every component can be computed in polynomial time and so the whole $\text{E}(\text{I}_{\text{S}})${} instance can be constructed in polynomial time. Now we show that a YES instance of the \textsc{K-PwT}{} problem occurs iff a YES instance of $\text{E}(\text{I}_{\text{S}})${} occurs. By construction, all elements in $G$ and $P$, together, will form a k-ary tree with the elements in $P$ as the leaves. Also $C$ and $D$ will make a forest of k-ary trees with each element from $C$ as a root of one of the trees in the forest. Since $C'$ is in one-to-one correspondence with $X$ and $P'$ is in one-to-one correspondence with $B$, if there is a partition, $C'$ gets partitioned accordingly and these become the children of elements of $P'$. $C''$ can be arbitrarily partitioned and made the children of elements of $P''$. This will provide the remaining edges to make a tree. Now we need to prove that if there is a tree, then there is also a partition. For this, we only need to prove that the set of children of $P'$ is equal to $C'$. To prove this, it is sufficient to show that the elements of $P'$ and the elements of $C'$ occur in consecutive levels in any tree. We use equations \ref{eq:sssk-C} to \ref{eq:sssk-D} to prove this. The element in $l_0$ is the largest element and will necessarily have to be the root. This will be followed by the elements from $l_1$ since no other elements are large enough to reach the element in $l_0$.
Continuing this argument, it is clear that the $l_i \in G$ will always appear in consecutive levels in any tree and that $P$ will follow immediately below these levels. Now, since no element $p \in P$ will be a child of any $p' \in P$, elements from either $C$ or $D$ will be needed to make child nodes of elements in $P$. But we note that elements in $D$ will all be less than $M/K$ in value, which will not be enough to reach elements in $P$, thus necessitating that all children of elements of $P$ come from $C$. We note that elements from $C'$ cannot be children of elements from $P''$ and so the set of children of $P''$ will be equal to $C''$. Since a value of the order of $KM$ has to be reached for elements in $P'$ and all elements in $C'$ are of the order of $M$, all the elements from $C'$ will be used. Thus, if there is a tree, then it will have the elements from $P'$ and $C'$ in consecutive levels and therefore have a partition. This transformation is sufficient to prove that $\text{E}(\text{I}_{\text{S}})${} is NP-Complete{}. For it to be Strongly \npc{}{}, a subproblem of $\text{E}(\text{I}_{\text{S}})${} in which the maximum integer is polynomially bounded by the length of the instance has to be proven NP-Complete{}. We note that the maximum integer in this reduction is the element in $l_0$. We have already argued how this integer is polynomially bounded by the length of the problem. This in turn proves that this reduction is also sufficient to prove that $\text{E}(\text{I}_{\text{S}})${} is Strongly \npc{}{}. \end{proof}
\begin{corollary}[$\text{E}(\text{I}_{\text{S}})${} for full k-ary trees]\label{coro:full-sssk} The $\text{E}(\text{I}_{\text{S}})${} problem for full k-ary trees (existence of full k-ary trees given $\text{I}_\text{S}$) is also Strongly \npc{}. \end{corollary} In the proof (see Section \ref{sect:prove-sssk}) of the $\text{E}(\text{I}_{\text{S}})${} problem for k-ary trees, we are reducing (when it is a YES instance) the \textsc{K-PwT}{} instance to a full k-ary tree, except possibly in $D$. $D$ is being made of complete trees on elements of $C$. A simple inductive proof is enough to show that changing every element $c \in C$ to the form $Kc + 1$ would be enough to make all of the complete trees in $D$ full trees as well. This (along with similar changes to $P$) would allow the same reduction to be used to prove $\text{E}(\text{I}_{\text{S}})${} to be Strongly \npc{}{} for full k-ary trees as well.
\subsection{Sub-classes realizable in polynomial time}\label{sect:sssk-positive} While the existence problems for a k-ary tree and for the full k-ary tree are Strongly \npc{}{}, some sub-classes can be realized in polynomial time. \begin{enumerate} \item Complete Trees: The complete tree $\text{T}^{c}(n)$ on a given number of nodes $n$ has a unique structure. Given a sequence $\text{I}_\text{S}$, we construct $\text{T}^{c}(|\text{I}_\text{S}|)$ and simply check if $\text{I}_\text{S} = \text{I}_\text{S}(\text{T}^{c})$. \item Degenerate Trees (A tree which is just a path): $\text{I}_\text{S}$ must contain exactly one instance of every number from 1 to $|\text{I}_\text{S}|$. \end{enumerate} These, along with Corollary \ref{coro:full-sssk}, show that the structure of the tree we are trying to realize plays an important role in deciding the complexity of the problem. The full k-ary tree case, which allows flexibility in the structure of the tree, is Strongly \npc{}, while for the more rigidly structured complete and degenerate trees, the problem is trivial.
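To make these two easy cases concrete, the following minimal Python sketch (our own illustration, assuming the sequence is given as a list of integers) decides both sub-classes: the complete tree $T^c(n)$ is unique, so its subtree size sequence can be generated directly and compared as a multiset, while a degenerate tree forces the multiset $\{1,2,\dots,n\}$.
\begin{verbatim}
from collections import Counter

def complete_tree_subtree_sizes(n, K):
    # Subtree sizes of the complete K-ary tree T^c(n), using the usual
    # array (heap-style) layout: the children of node i are K*i+1, ..., K*i+K.
    size = [1] * n
    for i in range(n - 1, 0, -1):
        size[(i - 1) // K] += size[i]
    return size

def is_complete_realizable(I_S, K):
    # T^c(|I_S|) is the only candidate, so compare multisets.
    return Counter(I_S) == Counter(complete_tree_subtree_sizes(len(I_S), K))

def is_degenerate_realizable(I_S):
    # A path on n nodes has subtree sizes exactly 1, 2, ..., n.
    return sorted(I_S) == list(range(1, len(I_S) + 1))
\end{verbatim}
Both checks run in $O(n\log(n))$ time.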
\section{Height and Depth}\label{sect:HDstuff} Depths, heights and subtree sizes can be recursively defined on the basis of the attribute values of neighboring nodes, and, given a tree, each of the sequences $\text{I}_\text{S}$, $\text{I}_\text{H}$, $\text{I}_\text{D}$ can be computed in linear time. We have seen that realizing trees given the subtree sizes sequence is NP-Complete, but when we consider sequences of the other attributes, the problems turn out to be much easier to solve. We now provide $O(n\log(n))$ algorithms for determining the existence of trees given height or depth sequences.
\subsection{Depth}\label{sect:depth} We know that in $K$-ary trees, every node can have at most $K$ children, and, by definition, depth values give the level at which a node is present. Let us define $C_d$ to represent the number of times the value $d$ occurs in the sequence $\text{I}_\text{D}$. If we have built a k-ary tree down to $d$ levels then the next level can accommodate at most $KC_d$ nodes. Hence, if for every $d$ from $0$ to $d_{max}-1$, the condition $C_{d+1} \le KC_d$ holds, then a tree can be constructed. If the condition fails at any point, then we get a proof that no tree exists. \begin{gather}\label{eq:depth-condition} C_{d+1} \leq KC_d, \text{ for } 0 \leq d \leq d_{max}-1 \end{gather} \subparagraph{Sub-classes} \begin{itemize} \item Complete tree: Eq.~\ref{eq:depth-condition} is modified to be $C_{d+1}=KC_d$ for all $d \neq d_{max}-1$. \item Full $K$-ary tree: Along with the condition in Eq.~\ref{eq:depth-condition} we add the extra condition that $K$ divides $C_d$ for all $d \geq 1$. That is, $K|C_d \quad \forall d \geq 1$. \item Degenerate Tree (A path): Exactly one count of each value must be present. \end{itemize} It is easy to see that these will all result in single-pass algorithms once we have sorted the sequence, and thus their running time is of the order of $O(n\log(n))$ where $n$ is the number of elements in the input sequence.
\subsection{Height}\label{sect:height} Given a height sequence $\text{I}_\text{H}$, to solve the existence problem, we use the fact that, from the definition of height of a node, if a particular value $h$ exists in the sequence then so must at least one instance of each value less than it ($h-1, h-2, \dots, 1, 0$). \begin{definition}[Strand] The strand of a node is a maximal path from the node to a leaf. \end{definition} We divide the given sequence $\text{I}_\text{H}$ into maximal strands by choosing greedily from the given sequence. We ensure that the maximal length strands are made before we make any of the smaller strands. So, first the root (the largest $h\in\text{I}_\text{H}$) will get a strand of its own. Then, the next biggest remaining height value will get a strand and so on until no elements remain in the sequence. This division is going to be unique because for a node to get a height value of $h$, it has to have at least one child whose height is $h-1$, which in turn will need $h-2$ and so on. Thus, it gives us a necessary condition for the existence of a tree: if, while making these strands, we get stuck at some point, then no tree can be constructed. Once these strands are constructed, we connect them together to get a tree, if possible. Any strand with its root's height value as $h$ can only be attached as a child of a node whose height value is at least $h+1$. Thus, if we just check whether there are enough places where strands can be joined, we can answer whether a tree exists.
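Both tests of this section can be illustrated by the minimal Python sketch below (our own illustration, not the paper's implementation). It implements the depth condition of Eq.~\ref{eq:depth-condition} and the strand-based capacity count for heights; in addition it checks that exactly one node has depth $0$ and that the maximum height occurs exactly once, conditions that any rooted tree satisfies.
\begin{verbatim}
from collections import Counter

def depths_realizable(I_D, K):
    # Eq. (depth-condition): level d+1 can hold at most K times the number of
    # nodes at level d; a rooted tree also has exactly one node at depth 0.
    C = Counter(I_D)
    d_max = max(I_D)
    if C[0] != 1 or d_max >= len(I_D):
        return False  # every depth 0..d_max needs at least one node
    return all(C[d + 1] <= K * C[d] for d in range(d_max))

def heights_realizable(I_H, K):
    # Greedy strand decomposition plus the 'places' capacity count described
    # above; a rooted tree has a unique node (the root) of maximum height.
    count = Counter(I_H)
    h_max = max(I_H)
    if count[h_max] != 1:
        return False
    if h_max >= len(I_H):
        return False  # the root's strand alone already needs h_max + 1 nodes
    # every strand through height h also passes through height h - 1
    if any(count[h - 1] < count[h] for h in range(1, h_max + 1)):
        return False
    places = 0
    for h in range(h_max, -1, -1):
        places += K * count[h] - count[h - 1]   # count[-1] is 0
        if places < 0:
            return False
    return True
\end{verbatim}
Counting the values and scanning the levels once keeps both checks within the $O(n\log(n))$ bound claimed for these problems.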
This argument translates into a simple polynomial time algorithm, stated in the following pseudo-code:
\begin{verbatim}
places := 0
count[i] := number of nodes with height i   (count[-1] is taken to be 0)
for i from h_max to 0:
    places := places + K*count[i] - count[i-1]
    if places < 0:
        No tree exists
Tree exists
\end{verbatim}
The \verb|places| variable tells us how many places are left at higher levels after each strand is added to the tree. If at some point \verb|places| becomes less than zero, then the last strand we added was an invalid addition. Hence no tree would be possible. If it is non-negative throughout, then a tree would be possible. \subparagraph{Sub-classes} \begin{itemize} \item Complete $K$-ary: Compare if $\text{I}_\text{H} = \text{I}_\text{H}(T^{c})$, where $T^{c}$ is the complete tree on $|\text{I}_\text{H}|$ nodes. \item Full $K$-ary tree: If \verb|places| is zero at the end of the loop, then a full $K$-ary tree is possible. \item Degenerate Tree: Exactly one count of each height value must be present. \end{itemize}
\section{Combining sequences}\label{sect:combined} In this section we look at the problem of existence given sequences in combination. \subsection{Synchronized Height and Depth: $\text{E}(\text{I}_{\text{H}\text{D}})$} The algorithm for solving the $\text{E}(\text{I}_{\text{H}\text{D}})$ problem combines ideas from the methods to solve the depth and height problems (Sections \ref{sect:depth} and \ref{sect:height} respectively) and is easy to verify. \begin{enumerate} \item While finding maximal strands, ensure that depth values are also assigned in order. \item Before computing \verb|places|, put the roots of the strands at the correct depth. \item At every level, check if a dedicated possible parent exists for each strand rooted at that level. Do this in descending order of the root's height values. \item If one gets stuck at any point in the algorithm, then no tree exists, otherwise, a tree exists. \end{enumerate} \subparagraph{Asynchronized Height and Depth: $\text{E}(\text{I}_\text{H},\text{I}_\text{D})$} We do not know of any solution to the asynchronized version of the height and depth sequences, nor do we have a proof of NP-Complete{}ness. We have seen that realization given height and depth sequences can be accomplished in polynomial time. We now attempt to solve the existence problem given subtree sizes by additionally providing depth and/or height sequences. Note that the existence of complete and degenerate trees can still be polynomially answered given any combination of depth and height sequences along with the subtree sizes sequence (Sections \ref{sect:sssk-positive}, \ref{sect:depth} and \ref{sect:height}). \begin{corollary}\label{coro:SD} $\text{E}(\text{I}_\text{S},\text{I}_\text{D})$ and $\text{E}(\text{I}_{\text{S}\text{D}})$ for k-ary as well as full k-ary trees are Strongly \npc{}{}. \end{corollary}\begin{proof} We show that the $\text{E}(\text{I}_\text{S})$ instance created during the reduction in Section~\ref{sect:prove-sssk} implicitly constructs instances of the $\text{E}(\text{I}_\text{S},\text{I}_\text{D})$ and $\text{E}(\text{I}_{\text{S}\text{D}})$ problems. We note that the depths of the levels in $G$ are equal to the subscripts ($0$ to $d-2$) that are used. For $P$ and $C$ the depths are $d-1$ and $d$, respectively. The depths for the complete trees created in $D$ can be computed along with the subtree sizes.
Adding this information along with the subtree sizes information during reduction allows one to prove the $\text{E}(\text{I}_\text{S},\text{I}_\text{D})$ and $\text{E}(\text{I}_{\text{S}\text{D}})$ problems to be Strongly \npc{}{}. Modifications similar to those in Corollary~\ref{coro:full-sssk} prove the full k-ary versions as well. \end{proof}
\begin{corollary}\label{coro:HS} $\text{E}(\text{I}_\text{S},\text{I}_\text{H})$ and $\text{E}(\text{I}_{\text{S}\text{H}})$ for k-ary as well as full k-ary trees are Strongly \npc{}{}. \end{corollary} \begin{proof} We show that the $\text{E}(\text{I}_\text{S})$ instance created during the reduction in Section~\ref{sect:prove-sssk} implicitly constructs instances of the $\text{E}(\text{I}_\text{S},\text{I}_\text{H})$ and $\text{E}(\text{I}_{\text{S}\text{H}})$ problems. To know the height of a node, the heights of the children need to be known. Due to the way in which the child component is constructed (Eq.~\ref{eq:sssk-C}), a child component element $c$ will always satisfy the following inequality $K^h \leq c < K^{h+1}$ where $h=\lceil\log_K(M_2)\rceil$. This, along with the fact that the descendant component attaches a complete subtree to $c_i$, ensures that the heights of all $c_i$ will be fixed at $h$. The parent and grandparent components' heights are decided based on the heights of the $c_i$; hence the heights of all the nodes in $S$ can be computed during the transformation. As in Corollary~\ref{coro:full-sssk}, the same holds for the full k-ary versions. \end{proof}
\begin{corollary}\label{coro:HDS} The existence problems given both synchronized and asynchronized sequences of heights, depths and subtree sizes are NP-Complete. (Obvious from Corollaries \ref{coro:SD} and \ref{coro:HS}.) \end{corollary}
Since the height and depth sequences are not enough to provide a solution, we look at providing the inorder traversal rank (the position of a node during inorder traversal) synchronized with subtree sizes; we denote this as $\text{I}_{\text{S},\text{ITR}}$ and the problem as $\text{E}(\text{I}_{\text{S},\text{ITR}})$. While this can be extended to k-ary trees, it is most easily illustrated in binary trees where the inorder traversal rank is the rank of the node as it would have been in a binary search tree. Since we know the root, this allows for trivial partitioning of the remaining elements into either the left or the right subtree. After that step, one is left with two smaller problems. This would allow one to use a divide-and-conquer method to get a polynomial time solution, giving us the following corollary: \begin{corollary}\label{coro:itr} $\text{E}(\text{I}_{\text{S},\text{ITR}})$ can be solved in polynomial time. \end{corollary} The $\text{E}(\text{I}_{\text{S},\text{ITR}})$ problem throws light on the structural difficulty of the $\text{E}(\text{I}_{\text{S}})${} problem. In the $\text{E}(\text{I}_{\text{S}})${} problem, say for binary trees, deciding the root node's subtree size is obvious (the largest value in the sequence, say $r$). After that, the second largest number, say $l$, is necessarily one of the children (let it be the left child) and $r-l-1$ will be the subtree size of the remaining child of the root node. So, the root and its two children can be easily decided (if they exist) but after that, partitioning the remaining nodes into the left or right subtree of the root is a difficult task.
This is like the NP-Complete \textsc{partition} problem, but it is more complex: not only does the partition have to sum to a particular value, but the partition must also be realizable as a tree. Adding the inorder traversal ranks allows a definitive partitioning of the nodes into left/right subtrees. The problem is then no longer similar to the \textsc{partition} problem, leading to a polynomial-time solution. This shows that the \textsc{partition}-like nature of the $\text{E}(\text{I}_{\text{S}})${} problem is an important part of what makes it Strongly \npc{}{}. We also note that whenever we are able to solve a variant of the $\text{E}(\text{I}_\text{S})$ problem, we are always constructing a unique tree. As soon as there is structural flexibility in the construction of the tree, the problem becomes difficult.
\section{Conclusion}\label{sect:conclusion}
In this paper we look at the problem of the existence of rooted k-ary trees given some combination of sequences of node attributes such as subtree sizes, heights and depths. We prove the problem of the existence of a tree given the subtree sizes sequence to be Strongly \npc{}{}; problems that additionally provide height and/or depth sequences, in either a synchronized or an asynchronized manner, still have the same complexity. We also prove that in each of these cases, when asked about the existence of full k-ary trees, the problem remains Strongly \npc{}{}. The existence of trees given $\text{I}_\text{H}$, $\text{I}_\text{D}$ and $\text{I}_{\text{H},\text{D}}$ can be decided in polynomial time, as can the existence of full k-ary trees. For all of these problems, the existence of complete and degenerate trees can be decided in polynomial time. In addition, when the inorder traversal rank is given synchronized with the subtree sizes sequence, the existence of a tree can be answered in polynomial time. We argued that this is evidence that the difficulty of the problem lies in its partitioning-like nature. We also argued, by comparing the complexity of the complete and degenerate tree variants with the full tree variant, that the uncertainty or freedom in the structure of a tree plays a role in the intractability of the problem.

There are many areas for future work. For problems related to subtree size sequences, one could search for exact exponential-time algorithms, approximation algorithms or other such strategies that solve them more efficiently. One could also look for minimal super-sequences or maximal sub-sequences which realize a tree. The contrasting nature of the subtree size, height and depth attributes can be studied to gain a better understanding of the relation between them in terms of realizability. Along the same lines, we have not been able to find either an algorithm or a proof of NP-Complete{}ness for the asynchronized height and depth sequence problem. Studying this problem might give insight into how the heights and depths of k-ary trees are related. Our focus throughout the paper has been on rooted k-ary trees, but all of these problems can also be asked for general rooted trees.
\subparagraph*{Acknowledgments.}
The author would like to thank Professor Rahul Muthu and Professor Srikrishnan Divakaran for their continued guidance and for their helpful suggestions. The author also thanks Professor Jayanth Varma for his review of the draft.
\end{document}
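As a brief editorial illustration (not part of the paper above), the height-sequence feasibility check given in the pseudo-code for $\text{E}(\text{I}_\text{H})$ can be transcribed almost line for line into Python. The sketch below makes two assumptions that the pseudo-code leaves implicit: the sequence lists the height of every node, and count[-1] is treated as 0; the strand construction described in the paper is not reproduced.
\begin{verbatim}
# Editorial sketch: a direct transcription of the feasibility check for E(I_H).
# `heights` lists the height of every node; K is the arity.
from collections import Counter

def height_sequence_feasible(heights, K):
    count = Counter(heights)          # count[i] = number of nodes with height i
    places = 0
    for i in range(max(heights), -1, -1):
        places += K * count[i] - count[i - 1]   # count[-1] is 0 for a Counter
        if places < 0:                # nodes of height i-1 cannot all be attached
            return False              # no tree exists
    return True                       # a tree exists

# A path on three nodes has heights 2, 1, 0 and is a valid 2-ary tree:
print(height_sequence_feasible([2, 1, 0], K=2))      # True
# A single node of height 1 cannot have three leaf children in a 2-ary tree:
print(height_sequence_feasible([1, 0, 0, 0], K=2))   # False
\end{verbatim}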
Active Calculus (Matthew Boelkins)
Section 8.2 Geometric Series

Motivating Questions
What is a geometric series?
What is a partial sum of a geometric series?
What is a simplified form of the \(n\)th partial sum of a geometric series?
Under what conditions does a geometric series converge? What is the sum of a convergent geometric series?

Many important sequences are generated by addition. In Preview Activity 8.2.1, we see an example of a sequence that is connected to a sum.

Preview Activity 8.2.1.
Warfarin is an anticoagulant that prevents blood clotting; often it is prescribed to stroke victims in order to help ensure blood flow. The level of warfarin has to reach a certain concentration in the blood in order to be effective. Suppose warfarin is taken by a particular patient in a 5 mg dose each day. The drug is absorbed by the body and some is excreted from the system between doses. Assume that at the end of a 24 hour period, 8% of the drug remains in the body. Let \(Q(n)\) be the amount (in mg) of warfarin in the body before the \((n+1)\)st dose of the drug is administered.
Explain why \(Q(1) = 5 \times 0.08\) mg.
Explain why \(Q(2) = (5+Q(1)) \times 0.08\) mg. Then show that
\begin{equation*} Q(2) = (5 \times 0.08)\left(1+0.08\right) \text{mg}\text{.} \end{equation*}
\begin{equation*} Q(3) = (5 \times 0.08)\left(1+0.08+0.08^2\right) \text{mg}\text{.} \end{equation*}
\begin{equation*} Q(4) = (5 \times 0.08)\left(1+0.08+0.08^2+0.08^3\right) \text{mg}\text{.} \end{equation*}
There is a pattern that you should see emerging. Use this pattern to find a formula for \(Q(n)\text{,}\) where \(n\) is an arbitrary positive integer.
Complete Table 8.2.1 with values of \(Q(n)\) for the provided \(n\)-values (reporting \(Q(n)\) to 10 decimal places). What appears to be happening to the sequence \(Q(n)\) as \(n\) increases?

Table 8.2.1. Values of \(Q(n)\) for selected values of \(n\)
\(n\)      1      2      3      4      5      6      7      8      9      10
\(Q(n)\)   0.40

Subsection 8.2.1 Geometric Series
In Preview Activity 8.2.1 we encountered the sum
\begin{equation*} (5 \times 0.08)\left(1+0.08+0.08^2+0.08^3+ \cdots + 0.08^{n-1}\right) \end{equation*}
for the long-term level of Warfarin in the patient's system.
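For readers who want to check their entries in Table 8.2.1, the short Python snippet below (an illustration added here, not part of the original activity) tabulates \(Q(n)\) directly from the pattern above.

    # Tabulate Q(n) = (5*0.08)*(1 + 0.08 + ... + 0.08**(n-1)) for n = 1, ..., 10.
    dose, retained = 5.0, 0.08

    def Q(n):
        return dose * retained * sum(retained**k for k in range(n))

    for n in range(1, 11):
        print(n, round(Q(n), 10))

The printed values increase from \(Q(1) = 0.4\) and level off quickly near \(0.4348\text{,}\) which anticipates the limiting value computed later in this section.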
This sum has the form \begin{equation} a+ar+ar^2+ \cdots + ar^{n-1}\tag{8.2.1} \end{equation} where \(a=5 \times 0.08\) and \(r=0.08\text{.}\) Such a sum is called a finite geometric series with ratio \(r\text{.}\) Activity 8.2.2. Let \(a\) and \(r\) be real numbers (with \(r \ne 1\)) and let \begin{equation*} S_n = a+ar+ar^2 + \cdots + ar^{n-1}\text{.} \end{equation*} In this activity we will find a shortcut formula for \(S_n\) that does not involve a sum of \(n\) terms. Multiply \(S_n\) by \(r\text{.}\) What does the resulting sum look like? Subtract \(rS_n\) from \(S_n\) and explain why \begin{equation} S_n - rS_n = a - ar^n\text{.}\tag{8.2.2} \end{equation} Solve equation (8.2.2) for \(S_n\) to find a simple formula for \(S_n\) that does not involve adding \(n\) terms. Hint. Distribute the factor of \(r\text{.}\) Look for common terms in the two expressions being subtracted. Observe that you can remove a factor of \(S_n\) from \(S_n - rS_n\text{.}\) The sum of the terms of a sequence is called a series. We summarize the result of Activity 8.2.2 in the following way. A finite geometric series \(S_n\) is a sum of the form \begin{equation} S_n = a + ar + ar^2 + \cdots + ar^{n-1}\text{,}\tag{8.2.3} \end{equation} where \(a\) and \(r\) are real numbers such that \(r \ne 1\text{.}\) The finite geometric series \(S_n\) can be written more simply as \begin{equation} S_n = a+ar+ar^2+ \cdots + ar^{n-1} = \frac{a(1-r^n)}{1-r}\text{.}\tag{8.2.4} \end{equation} We now apply Equation (8.2.4) to the example involving Warfarin from Preview Activity 8.2.1. Recall that \begin{equation*} Q(n)=(5 \times 0.08)\left(1+0.08+0.08^2+0.08^3+ \cdots + 0.08^{n-1}\right) \text{mg}\text{,} \end{equation*} so \(Q(n)\) is a geometric series with \(a=5 \times 0.08 = 0.4\) and \(r = 0.08\text{.}\) Thus, \begin{equation*} Q(n) = 0.4\left(\frac{1-0.08^n}{1-0.08}\right) = \frac{1}{2.3} \left(1-0.08^n\right)\text{.} \end{equation*} Notice that as \(n\) goes to infinity, the value of \(0.08^n\) goes to 0. So, \begin{equation*} \lim_{n \to \infty} Q(n) = \lim_{n \to \infty} \frac{1}{2.3} \left(1-0.08^n\right) = \frac{1}{2.3} \approx 0.435\text{.} \end{equation*} Therefore, the long-term level of Warfarin in the blood under these conditions is \(\frac{1}{2.3}\text{,}\) which is approximately 0.435 mg. To determine the long-term effect of Warfarin, we considered a finite geometric series of \(n\) terms, and then considered what happened as \(n\) was allowed to grow without bound. In this sense, we were actually interested in an infinite geometric series (the result of letting \(n\) go to infinity in the finite sum). Definition 8.2.2. An infinite geometric series is an infinite sum of the form \begin{equation} a + ar + ar^2 + \cdots = \sum_{n=0}^{\infty} ar^n\text{.}\tag{8.2.5} \end{equation} The value of \(r\) in the geometric series (8.2.5) is called the common ratio of the series because the ratio of the (\(n+1\))st term, \(ar^n\text{,}\) to the \(n\)th term, \(ar^{n-1}\text{,}\) is always \(r\text{:}\) \begin{equation*} \frac{ar^n}{ar^{n-1}} = r\text{.} \end{equation*} Geometric series are common in mathematics and arise naturally in many different situations. As a familiar example, suppose we want to write the number with repeating decimal expansion \begin{equation*} N=0.1212\overline{12} \end{equation*} as a rational number. 
Observe that \begin{align*} N \amp = 0.12 + 0.0012 + 0.000012 + \cdots\\ \amp = \left(\frac{12}{100}\right) + \left(\frac{12}{100}\right)\left(\frac{1}{100}\right) + \left(\frac{12}{100}\right)\left(\frac{1}{100}\right)^2 + \cdots\text{.} \end{align*} This is an infinite geometric series with \(a=\frac{12}{100}\) and \(r = \frac{1}{100}\text{.}\) By using the formula for the value of a finite geometric sum, we can also develop a formula for the value of an infinite geometric series. We explore this idea in the following activity. Let \(r \ne 1\) and \(a\) be real numbers and let \begin{equation*} S = a+ar+ar^2 + \cdots ar^{n-1} + \cdots \end{equation*} be an infinite geometric series. For each positive integer \(n\text{,}\) let Recall that \begin{equation*} S_n = a\frac{1-r^n}{1-r}\text{.} \end{equation*} What should we allow \(n\) to approach in order to have \(S_n\) approach \(S\text{?}\) What is the value of \(\lim_{n \to \infty} r^n\) for \(|r| \gt 1\text{?}\) for \(|r| \lt 1\text{?}\) Explain. If \(|r| \lt 1\text{,}\) use the formula for \(S_n\) and your observations in (a) and (b) to explain why \(S\) is finite and find a resulting formula for \(S\text{.}\) Let \(n\) increase without bound. Think about what happens to powers of numbers that are less than or greater than 1. Consider \(\frac{1-r^n}{1-r}\) and how the numerator tends to 1 as \(n \to \infty\) for certain values of \(r\text{.}\) We can now find the value of the geometric series \begin{equation*} N = \left(\frac{12}{100}\right) + \left(\frac{12}{100}\right)\left(\frac{1}{100}\right) + \left(\frac{12}{100}\right)\left(\frac{1}{100}\right)^2 + \cdots\text{.} \end{equation*} Using \(a = \frac{12}{100}\) and \(r = \frac{1}{100}\text{,}\) we see that \begin{equation*} N = \frac{12}{100} \left(\frac{1}{1-\frac{1}{100}}\right) = \frac{12}{100} \left(\frac{100}{99}\right) = \frac{4}{33}\text{.} \end{equation*} The sum of a finite number of terms of an infinite geometric series is often called a partial sum of the series. Thus, \begin{equation*} S_n = a+ar+ar^2 + \cdots + ar^{n-1} = \sum_{k=0}^{n-1} ar^k\text{.} \end{equation*} is called the \(n\)th partial sum of the series \(\sum_{k=0}^{\infty} ar^k\text{.}\) We summarize our recent work with geometric series as follows. \begin{equation} a + ar + ar^2 + \cdots = \sum_{n=0}^{\infty} ar^n\text{,}\tag{8.2.6} \end{equation} where \(a\) and \(r\) are real numbers such that \(r \ne 0\text{.}\) The \(n\)th partial sum \(S_n\) of an infinite geometric series is \begin{equation*} S_n = a+ar+ar^2+ \cdots + ar^{n-1}\text{.} \end{equation*} If \(|r| \lt 1\text{,}\) then using the fact that \(S_n = a\frac{1-r^n}{1-r}\text{,}\) it follows that the sum \(S\) of the infinite geometric series (8.2.6) is \begin{equation*} S = \lim_{n \to \infty} S_n = \lim_{n \to \infty} a\frac{1-r^n}{1-r} = \frac{a}{1-r} \end{equation*} The formulas we have derived for an infinite geometric series and its partial sum have assumed we begin indexing the sums at \(n=0\text{.}\) If instead we have a sum that does not begin at \(n=0\text{,}\) we can factor out common terms and use the established formulas. This process is illustrated in the examples in this activity. Consider the sum \begin{equation*} \sum_{k=1}^{\infty} (2)\left(\frac{1}{3}\right)^k = (2)\left(\frac{1}{3}\right) + (2)\left(\frac{1}{3}\right)^2 + (2)\left(\frac{1}{3}\right)^3 + \cdots\text{.} \end{equation*} Remove the common factor of \((2)\left(\frac{1}{3}\right)\) from each term and hence find the sum of the series. 
Next let \(a\) and \(r\) be real numbers with \(-1\lt r\lt 1\text{.}\) Consider the sum \begin{equation*} \sum_{k=3}^{\infty} ar^k = ar^3+ar^4+ar^5 + \cdots\text{.} \end{equation*} Remove the common factor of \(ar^3\) from each term and find the sum of the series. Finally, we consider the most general case. Let \(a\) and \(r\) be real numbers with \(-1\lt r\lt 1\text{,}\) let \(n\) be a positive integer, and consider the sum \begin{equation*} \sum_{k=n}^{\infty} ar^k = ar^n+ar^{n+1}+ar^{n+2} + \cdots\text{.} \end{equation*} Remove the common factor of \(ar^n\) from each term to find the sum of the series. Think about how \(r = \frac{1}{3}\text{.}\) Note that \(ar^3+ar^4+ar^5 + \cdots = ar^3(1 + r + r^2 + \cdots)\text{.}\) Compare your work in (b). Subsection 8.2.2 Summary \begin{equation*} \sum_{k=0}^{\infty} ar^k \end{equation*} where \(a\) and \(r\) are real numbers and \(r \neq 0\text{.}\) The \(n\)th partial sum of the geometric series \(\sum_{k=0}^{\infty} ar^k\) is \begin{equation*} S_n = \sum_{k=0}^{n-1} ar^k\text{.} \end{equation*} A formula for the \(n\)th partial sum of a geometric series is \begin{equation*} S_n = a \frac{1-r^n}{1-r}\text{.} \end{equation*} If \(|r| \lt 1\text{,}\) the infinite geometric series \(\sum_{k=0}^{\infty} ar^k\) has the finite sum \(\frac{a}{1-r}\text{.}\) Exercises 8.2.3 Exercises 1. Fourth term of a geometric sequence. Find the \(4^{th}\) term of the geometric sequence \(-1 , -3.5 , -12.25 , ...\) 2. A geometric series. Find the sum of the series \(\displaystyle 2 + \frac{2}{7} + \frac{2}{49} + ... + \frac{2}{7^{n-1}} + ...\text{.}\) 3. A series that is not geometric. Determine the sum of the following series. \begin{equation*} \sum_{n=1}^\infty \left(\frac{3^n + 8^n}{12 ^n}\right) \end{equation*} 4. Two sums of geometric sequences. Find the sum of each of the geometric series given below. For the value of the sum, enter an expression that gives the exact value, rather than entering an approximation. A. \(-15 + 5 - {5\over 3} + {5\over 9} - {5\over 27} + {5\over 81} - \cdots =\) B. \(\sum\limits_{n=4}^{17} \left({1\over 2}\right)^n =\) There is an old question that is often used to introduce the power of geometric growth. Here is one version. Suppose you are hired for a one month (30 days, working every day) job and are given two options to be paid. Option 1. You can be paid $500 per day or You can be paid 1 cent the first day, 2 cents the second day, 4 cents the third day, 8 cents the fourth day, and so on, doubling the amount you are paid each day. How much will you be paid for the job in total under Option 1? Complete Table 8.2.3 to determine the pay you will receive under Option 2 for the first 10 days. Table 8.2.3. Option 2 payments Day Pay on this day Total amount paid to date \(1\) \(\dollar0.01\) \(\dollar0.01\) \(10\) Find a formula for the amount paid on day \(n\text{,}\) as well as for the total amount paid by day \(n\text{.}\) Use this formula to determine which option (1 or 2) you should take. Suppose you drop a golf ball onto a hard surface from a height \(h\text{.}\) The collision with the ground causes the ball to lose energy and so it will not bounce back to its original height. The ball will then fall again to the ground, bounce back up, and continue. Assume that at each bounce the ball rises back to a height \(\frac{3}{4}\) of the height from which it dropped. 
Let \(h_n\) be the height of the ball on the \(n\)th bounce, with \(h_0 = h\text{.}\) In this exercise we will determine the distance traveled by the ball and the time it takes to travel that distance. Determine a formula for \(h_1\) in terms of \(h\text{.}\) Determine a formula for \(h_n\) in terms of \(h\text{.}\) Write an infinite series that represents the total distance traveled by the ball. Then determine the sum of this series. Next, let's determine the total amount of time the ball is in the air. When the ball is dropped from a height \(H\text{,}\) if we assume the only force acting on it is the acceleration due to gravity, then the height of the ball at time \(t\) is given by \begin{equation*} H - \frac{1}{2}gt^2\text{.} \end{equation*} Use this formula to determine the time it takes for the ball to hit the ground after being dropped from height \(H\text{.}\) Use your work in the preceding item, along with that in (a)-(e) above to determine the total amount of time the ball is in the air. Suppose you play a game with a friend that involves rolling a standard six-sided die. Before a player can participate in the game, he or she must roll a six with the die. Assume that you roll first and that you and your friend take alternate rolls. In this exercise we will determine the probability that you roll the first six. Explain why the probability of rolling a six on any single roll (including your first turn) is \(\frac{1}{6}\text{.}\) If you don't roll a six on your first turn, then in order for you to roll the first six on your second turn, both you and your friend had to fail to roll a six on your first turns, and then you had to succeed in rolling a six on your second turn. Explain why the probability of this event is \begin{equation*} \left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{1}{6}\right) = \left(\frac{5}{6}\right)^2\left(\frac{1}{6}\right)\text{.} \end{equation*} Now suppose you fail to roll the first six on your second turn. Explain why the probability is \begin{equation*} \left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{1}{6}\right) = \left(\frac{5}{6}\right)^4\left(\frac{1}{6}\right) \end{equation*} that you to roll the first six on your third turn. The probability of you rolling the first six is the probability that you roll the first six on your first turn plus the probability that you roll the first six on your second turn plus the probability that your roll the first six on your third turn, and so on. Explain why this probability is \begin{equation*} \frac{1}{6} + \left(\frac{5}{6}\right)^2\left(\frac{1}{6}\right) + \left(\frac{5}{6}\right)^4\left(\frac{1}{6}\right) + \cdots\text{.} \end{equation*} Find the sum of this series and determine the probability that you roll the first six. The goal of a federal government stimulus package is to positively affect the economy. Economists and politicians quote numbers like "\(k\) million jobs and a net stimulus to the economy of \(n\) billion of dollars." Where do they get these numbers? Let's consider one aspect of a stimulus package: tax cuts. Economists understand that tax cuts or rebates can result in long-term spending that is many times the amount of the rebate. For example, assume that for a typical person, 75% of her entire income is spent (that is, put back into the economy). Further, assume the government provides a tax cut or rebate that totals \(P\) dollars for each person. The tax cut of \(P\) dollars is income for its recipient. 
How much of this tax cut will be spent? In this simple model, we will say that the spent portion of the tax cut/rebate from part (a) then becomes income for another person who, in turn, spends 75% of this income. After this ``second round" of spent income, how many total dollars have been added to the economy as a result of the original tax cut/rebate? This second round of spending becomes income for another group who spend 75% of this income, and so on. In economics this is called the multiplier effect. Explain why an original tax cut/rebate of \(P\) dollars will result in multiplied spending of \begin{equation*} 0.75P(1+0.75+0.75^2+ \cdots )\text{.} \end{equation*} Based on these assumptions, how much stimulus will a 200 billion dollar tax cut/rebate to consumers add to the economy, assuming consumer spending remains consistent forever. Like stimulus packages, home mortgages and foreclosures also impact the economy. A problem for many borrowers is the adjustable rate mortgage, in which the interest rate can change (and usually increases) over the duration of the loan, causing the monthly payments to increase beyond the ability of the borrower to pay. Most financial analysts recommend fixed rate loans, ones for which the monthly payments remain constant throughout the term of the loan. In this exercise we will analyze fixed rate loans. When most people buy a large ticket item like car or a house, they have to take out a loan to make the purchase. The loan is paid back in monthly installments until the entire amount of the loan, plus interest, is paid. With a loan, we borrow money, say \(P\) dollars (called the principal), and pay off the loan at an interest rate of \(r\)%. To pay back the loan we make regular monthly payments, some of which goes to pay off the principal and some of which is charged as interest. In most cases, the interest is computed based on the amount of principal that remains at the beginning of the month. We assume a fixed rate loan, that is one in which we make a constant monthly payment \(M\) on our loan, beginning in the original month of the loan. Suppose you want to buy a house. You have a certain amount of money saved to make a down payment, and you will borrow the rest to pay for the house. Of course, for the privilege of loaning you the money, the bank will charge you interest on this loan, so the amount you pay back to the bank is more than the amount you borrow. In fact, the amount you ultimately pay depends on three things: the amount you borrow (called the principal), the interest rate, and the length of time you have to pay off the loan plus interest (called the duration of the loan). For this example, we assume that the interest rate is fixed at \(r\)%. To pay off the loan, each month you make a payment of the same amount (called installments). Suppose we borrow \(P\) dollars (our principal) and pay off the loan at an interest rate of \(r\)% with regular monthly installment payments of \(M\) dollars. So in month 1 of the loan, before we make any payments, our principal is \(P\) dollars. Our goal in this exercise is to find a formula that relates these three parameters to the time duration of the loan. We are charged interest every month at an annual rate of \(r\)%, so each month we pay \(\frac{r}{12}\)% interest on the principal that remains. Given that the original principal is \(P\) dollars, we will pay \(\left(\frac{0.0r}{12}\right)P\) dollars in interest on our first payment. 
Since we paid \(M\) dollars in total for our first payment, the remainder of the payment (\(M-\left(\frac{r}{12}\right)P\)) goes to pay down the principal. So the principal remaining after the first payment (let's call it \(P_1\)) is the original principal minus what we paid on the principal, or \begin{equation*} P_1 = P - \left( M - \left(\frac{r}{12}\right)P\right) = \left(1 + \frac{r}{12}\right)P - M\text{.} \end{equation*} As long as \(P_1\) is positive, we still have to keep making payments to pay off the loan. Recall that the amount of interest we pay each time depends on the principal that remains. How much interest, in terms of \(P_1\) and \(r\text{,}\) do we pay in the second installment? How much of our second monthly installment goes to pay off the principal? What is the principal \(P_2\text{,}\) or the balance of the loan, that we still have to pay off after making the second installment of the loan? Write your response in the form \(P_2 = ( \ )P_1 - ( \ )M\text{,}\) where you fill in the parentheses. Show that \(P_2 = \left(1 + \frac{r}{12}\right)^2P - \left[1 + \left(1+\frac{r}{12}\right)\right] M\text{.}\) Let \(P_3\) be the amount of principal that remains after the third installment. Show that \begin{equation*} P_3 = \left(1 + \frac{r}{12}\right)^3P - \left[1 + \left(1+\frac{r}{12}\right) + \left(1+\frac{r}{12}\right)^2 \right] M\text{.} \end{equation*} If we continue in the manner described in the problems above, then the remaining principal of our loan after \(n\) installments is \begin{equation} P_n = \left(1 + \frac{r}{12}\right)^nP - \left[\displaystyle \sum_{k=0}^{n-1} \left(1+\frac{r}{12}\right)^k \right] M\text{.}\tag{8.2.7} \end{equation} This is a rather complicated formula and one that is difficult to use. However, we can simplify the sum if we recognize part of it as a partial sum of a geometric series. Find a formula for the sum \begin{equation} \displaystyle \sum_{k=0}^{n-1} \left(1+\frac{r}{12}\right)^k\text{.}\tag{8.2.8} \end{equation} and then a general formula for \(P_n\) that does not involve a sum. It is usually more convenient to write our formula for \(P_n\) in terms of years rather than months. Show that \(P(t)\text{,}\) the principal remaining after \(t\) years, can be written as \begin{equation} P(t) = \left(P - \frac{12M}{r}\right)\left(1+\frac{r}{12}\right)^{12t} + \frac{12M}{r}\text{.}\tag{8.2.9} \end{equation} Now that we have analyzed the general loan situation, we apply formula (8.2.9) to an actual loan. Suppose we charge $1,000 on a credit card for holiday expenses. If our credit card charges 20% interest and we pay only the minimum payment of $25 each month, how long will it take us to pay off the $1,000 charge? How much in total will we have paid on this $1,000 charge? How much total interest will we pay on this loan? Now we consider larger loans, e.g., automobile loans or mortgages, in which we borrow a specified amount of money over a specified period of time. In this situation, we need to determine the amount of the monthly payment we need to make to pay off the loan in the specified amount of time. In this situation, we need to find the monthly payment \(M\) that will take our outstanding principal to \(0\) in the specified amount of time. To do so, we want to know the value of \(M\) that makes \(P(t) = 0\) in formula (8.2.9). 
If we set \(P(t) = 0\) and solve for \(M\text{,}\) it follows that
\begin{equation*} M = \frac{rP \left(1+\frac{r}{12}\right)^{12t}}{12\left(\left(1+\frac{r}{12}\right)^{12t} - 1 \right)}\text{.} \end{equation*}
Suppose we want to borrow $15,000 to buy a car. We take out a 5 year loan at 6.25%. What will our monthly payments be? How much in total will we have paid for this $15,000 car? How much total interest will we pay on this loan?

Suppose you charge your books for winter semester on your credit card. The total charge comes to $525. If your credit card has an interest rate of 18% and you pay $20 per month on the card, how long will it take before you pay off this debt? How much total interest will you pay?

Say you need to borrow $100,000 to buy a house. You have several options on the loan:
30 years at 6.5%
15 years at 8.25%.
What are the monthly payments for each loan? Which mortgage is ultimately the best deal (assuming you can afford the monthly payments)? In other words, for which loan do you pay the least amount of total interest?
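To experiment with formula (8.2.9) and the payment formula above, here is a small Python sketch (added for illustration; it is not part of the text, and the numbers used are made-up examples rather than answers to the exercises). In this sketch the annual rate \(r\) is entered as a decimal, for example 0.05 for 5%.

    # Illustrative sketch of the loan formulas above (not part of the text).
    # r is the annual interest rate as a decimal (e.g., 0.05 for 5%).

    def remaining_principal(P, r, M, t):
        """Formula (8.2.9): principal left after t years of monthly payments M."""
        g = (1 + r / 12) ** (12 * t)
        return (P - 12 * M / r) * g + 12 * M / r

    def monthly_payment(P, r, t):
        """Payment M that makes the remaining principal zero after t years."""
        g = (1 + r / 12) ** (12 * t)
        return r * P * g / (12 * (g - 1))

    # Example with made-up numbers: a $10,000 loan at 5% over 4 years.
    M = monthly_payment(10_000, 0.05, 4)
    print(round(M, 2))                                        # about 230.29
    print(round(remaining_principal(10_000, 0.05, M, 4), 2))  # essentially 0.00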
\begin{document} \title{New and Improved Algorithms for Unordered Tree Inclusion} \author[1]{Tatsuya Akutsu \thanks{Correspoding author. e-mail: [email protected]. Partially supported by JSPS KAKENHI \#18H04113.}} \author[2,3]{Jesper Jansson} \author[1]{Ruiming Li} \author[4]{Atsuhiro Takasu} \author[1]{Takeyuki Tamura\thanks{Partially supported by JSPS KAKENHI \#25730005.}} \affil[1]{Bioinformatics Center, Institute for Chemical Research, Kyoto University, Uji, Kyoto 611-0011, Japan} \affil[2]{Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China} \affil[3]{Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan} \affil[4]{National Institute of Informatics, Chiyoda-ku, Tokyo, 101-8430, Japan} \maketitle \begin{abstract} The \emph{tree inclusion problem} is, given two node-labeled trees~$P$ and~$T$ (the ``pattern tree'' and the ``target tree''), to locate every minimal subtree in~$T$ (if any) that can be obtained by applying a sequence of node insertion operations to~$P$. Although the \emph{ordered} tree inclusion problem is solvable in polynomial time, the \emph{unordered} tree inclusion problem is NP-hard. The currently fastest algorithm for the latter is a classic algorithm by Kilpel\"{a}inen and Mannila from 1995 that runs in $O(2^{2d} mn)$ time, where $m$ and $n$ are the sizes of the pattern and target trees, respectively, and $d$ is the degree of the pattern tree. Here, we develop a new algorithm that runs in $O(2^{d} mn^2)$ time, improving the exponential factor from $2^{2d}$ to~$2^d$ by considering a particular type of ancestor-descendant relationships that is suitable for dynamic programming. We also study restricted variants of the unordered tree inclusion problem. {\bf Keywords:} tree inclusion, unordered trees, parameterized algorithms, dynamic programming. \end{abstract} \section{Introduction} Tree pattern matching and measuring the similarity of trees are fundamental problem areas in theoretical computer science. One intuitive and previously well-studied measure of the similarity between two rooted, node-labeled trees~$T_1$ and~$T_2$ is the \emph{tree edit distance}, defined as the length of a shortest sequence of node insertion, node deletion, and node relabeling operations that transforms~$T_1$ into~$T_2$~\cite{bille2005survey}. An important special variant of the problem of computing the tree edit distance known as the \emph{tree inclusion problem} is obtained when the only allowed operations are node insertion operations on~$T_1$. Here, we assume the following formulation of the problem: given a ``pattern tree''~$P$ and a ``target tree''~$T$, locate every minimal subtree in~$T$ (if any) that can be obtained by applying a sequence of node insertion operations to~$P$. (Equivalently, one may define the tree inclusion problem so that only node deletion operations on~$T$ are allowed.) In~1995, Kilpel\"{a}inen and Mannila~\cite{kilpelainen1995ordered} proved that the tree inclusion problem for unordered trees is NP-hard, but solvable in polynomial time when the degree of the pattern tree is bounded from above by a constant. The running time of their algorithm is $O(d \cdot 2^{2d} \cdot mn) = O^{\ast}(2^{2d}) = O^{\ast}(4^{d})$, where $m = |P|$, $n = |T|$, and $d$ is the degree of~$P$. Throughout this article, the notation ``$O^{\ast}(\dots)$'' means ``$O(\dots)$'' multiplied by some function that is a polynomial in~$m$ and~$n$. 
E.g., ``$O^{\ast}(2^{2d})$'' means ``$O(2^{2d} \cdot poly(m,n))$''. Our main contribution is a new algorithm for solving the unordered tree inclusion problem more efficiently. More precisely, its time complexity is $O(d \cdot 2^{d} \cdot mn^2) = O^{\ast}(2^{d})$, which yields the first improvement in over twenty years. Our bound is obtained by introducing the simple yet useful concept of \emph{minimal inclusion} and considering a particular type of ancestor-descendant relationships that turns out to be suitable for dynamic programming. Next, we analyze the computational complexity of unordered tree inclusion for some restricted cases; see Table~\ref{table:complexity} for a summary of the new findings. We give a polynomial-time algorithm for the case where the leaves of~$P$ are distinctly labeled and every label appears at most twice in~$T$, and an $O^{\ast}(1.619^d)$-time algorithm for the NP-hard case where the leaves in~$P$ are distinctly labeled and each label appears at most three times in~$T$. Both of these algorithms effectively utilize some techniques from a polynomial-time algorithm for 2-SAT~\cite{aspvall1979}. (Note that the preliminary version of this paper \cite{akutsu2018unordered} contained a slower algorithm for the latter case running in $O^{\ast}(1.8^d)$ time.) Finally, we derive a randomized $O^{\ast}(1.883^d)$-time algorithm for the case where the heights of~$P$ and~$T$ are one and two, respectively, via a non-trivial combination of our $O^{\ast}(2^d)$-time algorithm, Yamamoto's algorithm for SAT~\cite{yamamoto2005sat}, and color-coding~\cite{alon1995color}. \begin{table}[h!] \caption{The computational complexity of some special cases of the unordered tree inclusion problem. For any tree~$T$, $h(T)$~denotes the height of~$T$ and $occ(T)$ the maximum number of times that any leaf label occurs in~$T$. As indicated in the table, either all nodes or only the leaves are labeled (the former is harder since it generalizes the latter). Note that the fourth case is NP-hard as it generalizes the first two. The algorithm referred to in the last case is randomized. } \begin{center} \begin{tabular}{l|c|c|c} \hline Restriction & Labels on & Complexity & Result \\ \hline $h(T) = 2$, $h(P) = 1$, & all nodes & NP-hard & Corollary~\ref{corollary:KM_hardness} \\ $occ(T) = 3$, $occ(P) = 1$ & & & \\[2mm] $h(T) = 2$, $h(P) = 2$, & leaves & NP-hard & Theorem~\ref{thm:unique_leaves_NP-complete} \\ $occ(T) = 3$, $occ(P) = 1$ & & & \\[2mm] $occ(T) = 2$, $occ(P) = 1$ & all nodes & P & Theorem~\ref{thm:poly} \\[2mm] $occ(T) = 3$, $occ(P) = 1$ & all nodes & $O^{\ast}(1.619^d)$ time & Theorem~\ref{thm:occ-3} \\[2mm] $h(T) = 2$, $h(P) = 1$ & all nodes & $O^{\ast}(1.883^d)$ time & Theorem~\ref{thm:low-height} \\ \hline \end{tabular} \end{center} \label{table:complexity} \end{table} \subsection{Related results} In general, tree edit distance-related problems are computationally harder for unordered trees than for ordered trees. A comprehensive summary of the many results that were already known in~2005 can be found in the survey by Bille~\cite{bille2005survey}. Below, we briefly mention a few of these historical results along with some more recent ones. When $T_1$ and~$T_2$ are \emph{ordered} trees, the tree edit distance can be computed in polynomial time. The first algorithm to do so, invented by Tai~\cite{tai1979tree} in~1979, ran in $O(n^{6})$ time, where $n$ is the total number of nodes in~$T_1$ and~$T_2$. 
The time complexity was gradually improved upon until Demaine et al.~\cite{demaine2009optimal} thirty years later presented an $O(n^{3})$-time algorithm, which was proved to be worst-case optimal under the conjecture that there is no truly subcubic time algorithm for the all-pairs-shortest-paths problem~\cite{bringmann2018editlb}. Pawlik and Augsten~\cite{pawlik2011rted} developed a robust algorithm whose asymptotic complexity is less than or equal to the complexity of the best competitors for any input instance. In another line of research, since even $O(n^3)$ time is too slow for similarity search and so-called join operations in XML databases, the focus has been on approximate methods. Garofalakis and Kumar~\cite{garofalakis2005xml} gave an algorithm for embedding the tree edit distance in a high-dimensional $L_1$-norm space with a guaranteed distortion, and recently, Boroujen et al. provided an $O(n^2)$-time $(1+\varepsilon)$-approximation algorithm~\cite{boroujeni2019treeedit}. In contrast, the tree edit distance problem is NP-hard for \emph{unordered} trees~\cite{zhang1992editing}. It is MAX SNP-hard even for binary trees in the unordered case~\cite{zhang1994some}, which implies that it is unlikely to admit a polynomial-time approximation scheme. Some exponential-time algorithms for this problem variant were developed by Akutsu et al.~\cite{akutsu2013approximation,akutsu2014efficient}. As for parameterized algorithms, Shasha et al.~\cite{shasha1994exact} gave an $O(4^{\ell_1 + \ell_2} \cdot \min(\ell_1,\ell_2) \cdot mn)$-time algorithm for the problem, where $\ell_1$ and $\ell_2$ are the numbers of leaves in~$T_1$ and~$T_2$, respectively. Taking the tree edit distance (denoted by~$k$) to be the parameter instead, an $O^{\ast}(2.62^k)$-time algorithm for the unit-cost edit operation model was developed by Akutsu et al.~\cite{akutsu2011exact}. As mentioned above, Kilpel\"{a}inen and Mannila~\cite{kilpelainen1995ordered} proved that the unordered tree inclusion problem is NP-hard and gave an algorithm that runs in $O(d \cdot 2^{2d} \cdot mn)$ time, where $m = |P|$, $n = |T|$, and $d$ is the degree of~$P$. Bille and G{\o}rtz~\cite{bille2011tree} presented a fast algorithm for the case of ordered trees, and Valiente~\cite{valiente2005constrained} gave a polynomial-time algorithm for a constrained version of the unordered case. Piernik and Morzy~\cite{piernik2013partial} introduced a similar problem for ordered trees and developed an efficient algorithm. Finally, we remark that the special case of the tree inclusion problem in which node insertion operations are only allowed to insert new \emph{leaves} corresponds to a subtree isomorphism problem, which can be solved in polynomial time even for unordered trees~\cite{matouvsek1992complexity}. \subsection{Applications} \label{sec:applications} Research in tree pattern matching has led to algorithms used in numerous practical applications over the years. Some examples include fast methods for querying structured text databases, document similarity search, natural language processing, compiler optimization, automated theorem proving, comparison of RNA secondary structures, assessing the accuracy of phylogenetic tree reconstruction methods, and medical image analysis~\cite{bille2011tree,icde14,kilpelainen1995ordered,valiente2005constrained}. Recently, due to the rapid advance of AI technology, matching methods for knowledge bases have become increasingly important. 
In particular, researchers in the database community have enhanced the basic \emph{subtree similarity search} technique to search a knowledge base of hierarchically structured information under various definitions of similarity; e.g., Cohen and Or~\cite{icde14} presented a general subtree similarity search algorithm that is compatible with a wide range of tree distance functions, and Chang et al.~\cite{vldb15} proposed a top-$k$ tree matching algorithm. In the Natural Language Processing (NLP) field, researchers are applying deep learning techniques to NLP problems and developing algorithms for processing parsing/dependency trees~\cite{emnlp15}. As an example of the versatility of tree comparison algorithms, three different tree pattern matching applications involving glycan data from the KEGG database~\cite{kanehisa2013data}, weblogs data~\cite{zaki2005efficiently}, and bibliographical data from ACM, DBLP, and Google Scholar~\cite{kopcke2010evaluation} were all expressed in terms of an optimization version of the unordered tree inclusion problem named the \emph{extended tree inclusion problem} and studied experimentally by Mori et al.~\cite{MTJHTA_15}. Note that for bibliographic matching, a single article usually has at most two or three versions (e.g., preprint, conference version, and journal version), and it is very rare that a single article includes two co-authors with exactly the same family and given names. Therefore, two reasonable assumptions when modeling bibliographic matching as the tree inclusion problem are that the leaves of the pattern tree~$P$ are distinctly labeled and that each label occurs at most~$c$ times in the target tree~$T$ for some bounded value of~$c$. Another important restriction is on the height of trees. In entity resolution, some authors have applied tree matching where entities are usually represented by a shallow tree. Mori et al. \cite{MTJHTA_15} represented a bibliographic record by a tree of height 2 and linked identical records in two different bibliographic databases. Konda et al. \cite{konda2016er} evaluated their entity resolution system by using various datasets. Movie records from IMDb\footnote{IMDb: Ratings, Reviews, and Where to Watch the Best Movies: https://www.imdb.com} used in their experiment, for example, were extracted from IMDb web pages in HTML format. The fields of the movie record are included in a subtree of height 7 in the web page. Since the HTML code contains many tags for rendering the page, the height of trees required for the movie record is much lower. Apart from these practical viewpoints, it is of theoretical interests to study restricted cases because the unordered tree inclusion problem remains NP-hard even in considerably restricted cases, as shown in Table~\ref{table:complexity}. \section{Definitions and notation} \label{sec:definitions} From here on, all trees are assumed to be rooted, unordered, and node-labeled. Let~$T$ be a tree. We use $r(T)$, $h(T)$, and~$V(T)$ to denote the root of~$T$, the height of~$T$, and the set of nodes in~$T$, respectively. For any $v \in V(T)$, $\ell(v)$~is the node label of~$v$ and $Chd(v)$~is the set of children of~$v$. Furthermore, $Anc(v)$~and $Des(v)$ are the sets of strict ancestors and strict descendants of~$v$, respectively (i.e., $v$~itself is excluded from these sets), whereas $AncDes(v) = Anc(v) \cup Des(v) \cup \{v\}$~is the set of all ancestors of~$v$, all descendants of~$v$, and~$v$. Also, $T(v)$~denotes the subtree of~$T$ induced by $Des(v) \cup \{v\}$. 
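To fix ideas, this notation can be mirrored in a few lines of Python; the sketch below is an editorial illustration only, and the class and function names are not part of the paper.
\begin{verbatim}
# A minimal sketch of the notation above (editorial illustration only).
class Node:
    def __init__(self, label, children=None):
        self.label = label               # l(v), the node label
        self.children = children or []   # Chd(v), the children of v

def descendants(v):
    """Des(v): all strict descendants of v."""
    result = []
    for c in v.children:
        result.append(c)
        result.extend(descendants(c))
    return result

def height(v):
    """h(T(v)): 0 for a leaf, otherwise 1 + the maximum height of a child."""
    return 0 if not v.children else 1 + max(height(c) for c in v.children)
\end{verbatim}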
A \emph{node insertion operation} on a tree~$T$ is an operation that creates a new node~$v$ having any label and then: (i)~attaches~$v$ as a child of some node~$u$ currently in~$T$ and makes $v$ become the parent of a (possibly empty) subset of the children of~$u$ instead of~$u$, so that $u$ is no longer their parent; or (ii)~makes the current root of~$T$ become a child of~$v$ and lets $v$ become the root of $T$ instead. For any two trees~$T_1$ and~$T_2$, we say that \emph{$T_1$~is included in~$T_2$} if there exists a sequence of node insertion operations that, when applied to~$T_1$, yields~$T_2$. Equivalently, $T_1$~is included in~$T_2$ if $T_1$ can be obtained by applying a sequence of \emph{node deletion operations} (defined as the inverse of a node insertion operation) to~$T_2$. A \emph{mapping} between two trees~$T_1$ and~$T_2$ is a subset $M \subseteq V(T_1) \times V(T_2)$ such that for every $(u_1,v_1), (u_2,v_2) \in M$, it holds that: (i)~$u_1 = u_2$ if and only if $v_1 = v_2$; and (ii)~$u_1$ is an ancestor of~$u_2$ if and only if $v_1$ is an ancestor of~$v_2$. Condition (i) states that each node appears at most once in $M$, and condition (ii) states that ancestor-descendant relations must be preserved. A mapping~$M$ between~$T_1$ and~$T_2$ such that $|M| = |V(T_1)|$ and $u$ and~$v$ have the same node label for every $(u,v) \in M$ is called an \emph{inclusion mapping} (see Fig.~\ref{fig:inclusion} for an example). It is known that $T_1$~is included in~$T_2$ if and only if there exists an inclusion mapping between~$T_1$ and~$T_2$~\cite{tai1979tree}. We write $T_1(u) \subset T_2(v)$ if $T_1(u)$ is included in $T_2(v)$ under the additional condition that there exists an inclusion mapping that maps~$u$ to~$v$. For any two trees~$T_1$ and~$T_2$, $T_1 \sim T_2$ means that $T_1$ is isomorphic to~$T_2$, in the sense that node labels have to be preserved. In the \emph{tree inclusion problem}, the input is two trees~$P$ and~$T$, also referred to as the ``pattern tree'' and the ``target tree'', and the objective is to locate every minimal subtree of~$T$ that includes~$P$, where $T(v)$ is called a \emph{minimal subtree} if it minimally includes $P$, the definition of which is given below. For any instance of the tree inclusion problem, we define $m = |V(P)|$ and $n=|V(T)|$, and let $d$ denote the degree of~$P$, i.e., the maximum number of children of any node in~$P$. We assume w.l.o.g. (without loss of generality) that $m \leq n$ because otherwise $P$ cannot be included in $T$. The following concept plays a key role in our algorithm (see Fig.~\ref{fig:inclusion} for an illustration). \begin{definition} For any instance of the tree inclusion problem and any $u \in V(P)$ and $v \in V(T)$, $T(v)$~is said to \emph{minimally include~$P(u)$} (written as $P(u) \prec T(v)$) if $P(u) \subset T(v)$ holds and there is no $v' \in Des(v)$ such that $P(u) \subset T(v')$. \end{definition} We may simply use $P$ and $T$ in place of $P(u)$ and $T(v)$ if $u$ and $v$ are the roots of $P$ and $T$, respectively. Locating every minimal subtree is reasonable because $P(u) \subset T(v')$ holds for all ancestors $v'$ of $v$ if $P(u) \prec T(v)$ holds. \begin{figure} \caption{An example of unordered tree inclusion. Here, $P \subset T$ holds by an inclusion mapping $M = \{(u_1,v_1),(u_2,v_2), (u_3,v_5), (u_4,v_3),(u_5,v_8)\}$. $P(u_2) \subset T(v_2)$, $P(u_2) \subset T(v_6)$, and~$P(u_2) \subset T(v_7)$ hold as well. 
Furthermore, $P \prec T$, $P(u_2) \prec T(v_2)$, and $P(u_2) \prec T(v_7)$ hold, but $P(u_2) \prec T(v_6)$~does not hold.} \label{fig:inclusion} \end{figure} \begin{proposition} Given any instance of the tree inclusion problem and any $u \in V(P)$ and $v \in V(T)$ with $Chd(u) = \{u_1,\ldots,u_d\}$, it holds that $P(u) \subset T(v)$ if and only if the following conditions are satisfied: \begin{itemize} \item [(1)] $\ell(u) = \ell(v)$; \item [(2)] $v$ has a set of descendants $D(v) = \{v_1,\ldots,v_d\}$ such that $v_i \notin Des(v_j)$ for every $i \neq j$; and \item [(3)] there exists a bijection $\phi$ from $Chd(u)$ to $D(v)$ such that $P(u_i) \prec T(v_i)$ holds for every $i \in \{1,2,\ldots,d\}$. \end{itemize} \label{prop:central} \end{proposition} \begin{proof} Suppose that Conditions (1)-(3) are satisfied. Condition (3) implies that there exists an injection mapping $\phi'$ between the forest induced by $u_1,\ldots,u_d$ and their descendants and the forest induced by $v_1,\ldots,v_d$ and their descendants such that $\phi'(u_i)=v_i$. Let $\phi'' = \phi' \cup \{(u,v)\}$. Since $u_1,\ldots,u_d$ are the children of $u$ and $v_1,\ldots,v_d$ are descendants of $v$, $\phi''$ is an inclusion mapping and thus $P(u) \subset T(v)$ holds. Conversely, suppose that $P(u) \subset T(v)$ holds, which means that there exists an inclusion mapping $\phi$ from $P(u)$ to $T(v)$ with $\phi(u)=v$. Let $w_i=\phi(u_i)$ for $i=1,\ldots,d$. Then, $w_i \notin Des(w_j)$ holds for every $i \neq j$ because $\phi$ is an inclusion mapping. Furthermore, for each $w_i$, there must exist $v_i \in \{w_i\} \cup Des(w_i)$ such that $P(u_i) \prec T(v_i)$ holds with an inclusion mapping $\phi_i$ from $P(u_i)$ to $T(v_i)$ satisfying $\phi_i(u_i)=v_i$. Note that $v_i \notin Des(v_j)$ holds for every $i \neq j$ because $w_i \notin Des(w_j)$ holds for every $i \neq j$. Let $\phi' = \{(u,v)\} \cup \phi_1 \cup \ldots \phi_d$. Condition (1) is satisfied because $(u,v) \in \phi'$. Here we let $D(v)=\{v_1,\ldots,v_d\}$. Then, Condition (2) is satisfied as stated above. Condition (3) is also satisfied because $\phi'(u_i) = v_i$ holds for all $i=1,\ldots,d$. \end{proof} Proposition~\ref{prop:central} essentially states that the children of~$u$ must be mapped to descendants of~$v$ that do not have ancestor-descendant relationships. Since $P$ is included in~$T$ if and only if there exists a $v \in V(T)$ with $P \prec T(v)$, we need to determine if $P(u) \prec T(v)$, assuming that whether $P(u_j) \prec T(v_i)$ holds is known for all $(u_j,v_i)$ with $u_j \in Des(u) \cup \{ u \}$, $v_i \in Des(v) \cup \{ v \}$, and $(u_j,v_i) \neq (u,v)$. This assumption is satisfied if we apply a dynamic programming procedure to determine if $P \prec T(v)$, using an $O(mn)$ size table and following any partial ordering on $(u,v)$s in $V(P) \times V(T)$ such that $(u,v)$ precedes $(u',v')$ if and only if $u' \in Des(u) \cup \{u\}$, $v' \in Des(v) \cup \{v\}$, and $(u',v') \neq (u,v)$. \begin{proposition} Suppose that $P(u) \prec T(v)$ can be determined in $O(f(d,m,n))$ time, assuming that whether $P(u_j) \prec T(v_i)$ holds is known for all pairs $(u_j,v_i)$ such that $(u_j,v_i) \in V(P(u)) \times V(T(v)) \setminus \{(u,v)\}$. Then the unordered tree inclusion problem can be solved in $O(f(d,m,n) \cdot mn)$ time by using a bottom-up dynamic programming procedure. 
\label{prop:bottom-up} \end{proposition} \section{An $O(d \cdot 2^d \cdot mn^2)$-time algorithm} \label{sec:improved} The core of Kilpel\"{a}inen and Mannila's algorithm~\cite{kilpelainen1995ordered} for unordered tree inclusion is the computation of a set~$S(v)$ for each node $v \in V(T)$, also called the \emph{match system for target node~$v$}. In their paper, $S(v)$~was originally defined as a set of subsets of nodes from~$P$, where each such subset consists of the root nodes in a subforest of~$P$ that is included in~$T(v)$. However, $S(v)$ was restricted to subsets of $Chd(u)$ for a single node $u$ in $P$ when the bounded outdegree case was considered. We employ this restricted definition in this paper and define~$S(v)$ for any fixed $u \in V(P)$ by: \begin{eqnarray*} S(v) & = & \{ U \subseteq Chd(u) \,|\, P(U) \subset T(v) \}, \end{eqnarray*} where $P(U)$ is the forest induced by the nodes in~$U$ and their descendants and $P(U) \subset T(v)$ means that every tree from~$P(U)$ is included in~$T(v)$ without overlap (i.e., $T(v)$ can be obtained from~$P(U)$ by node insertion operations). For details, see~\cite{kilpelainen1995ordered}. Kilpel\"{a}inen and Mannila's algorithm~\cite{kilpelainen1995ordered} computes the $S(v)$-sets in a bottom-up order. It fixes an arbitrary left-to-right ordering of the nodes of~$T$ (the ordering will not affect the correctness). Precisely, the left-to-right ordering is determined as follows. We assume that for each node having two or more children, a left-to-right ordering is given to the children. For any two nodes $v_i, v_j \in V(T)$ (resp., $v_i,v_j \in V(P)$) that do not have any ancestor-descendant relationship, let $v$ be the lowest common ancestor, which is uniquely determined. For any descendant $v_k$ of $v$, let $v_k'$ be the child of $v$ such that $v_k$ is a descendant of $v_k'$ or $v_k=v_k'$. Then, $v_i$ is left (resp., right) of $v_j$ if and only if $v_i'$ is a left (resp., right) sibling of $v_j'$. Note that left-right relationships are defined for nodes only if they do not have any ancestor-descendant relationship. Below, we denote ``$v_i$~is left of~$v_j$'' by $v_i \triangleleft v_j$. To compute~$S(v)$, their algorithm performs the following operation from left to right for the children $v_1,\ldots,v_l$ of~$v$: \begin{eqnarray*} S & := & \{A \cup B \,|\, A \in S,\, B \in S(v_i) \}, \end{eqnarray*} starting with $S = \emptyset$, and then $S(v)$ is assigned the resulting~$S$. Clearly, the size of~$S(v)$ is no greater than~$2^d$. However, this way of updating~$S$ causes an $\Omega(2^{2d})$-factor in the running time because it examines $\Omega(2^d) \times \Omega(2^d)$ set pairs. To avoid this bottleneck, we need a new approach for computing~$S(v)$, explained next. We shall focus on how to determine if $P(u) \prec T(v)$ holds for a fixed $(u,v)$ because this part is crucial for reducing the time complexity. Assume w.l.o.g. that $u$ has $d$~children and write $Chd(u) = \{u_1,\ldots,u_d\}$. To simplify the presentation, we will assume until the end of this section that $P(u_i) \sim P(u_j)$ does not hold for any $u_i, u_j \in Chd(u)$ with $u_i \neq u_j$. For any $v_i \in V(T(v))$, define $M(v_i)$ by: \begin{eqnarray*} M(v_i) & = & \{ u_j \in Chd(u) \,|\, P(u_j) \prec T(v_i) \}. \end{eqnarray*} For example, $M(v_0) = \emptyset$, $M(v_2)=\{u_C\}$, and $M(v_3)=\{u_D,u_E\}$ in Fig.~\ref{fig:key-idea}. Note that $M(v_i)$ is known for all descendants $v_i$ of $v$ before testing $P(u) \prec T(v)$ and does not change during the course of this testing. 
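Schematically, and purely as an editorial illustration (this is not the authors' code), the classic update and the sets $M(v_i)$ can be written as follows, where \verb|prec[(u_j, v_i)]| is assumed to hold the already-computed answer to whether $P(u_j) \prec T(v_i)$ and subsets of $Chd(u)$ are represented as frozensets.
\begin{verbatim}
# Editorial sketch only.
def M(v_i, Chd_u, prec):
    # M(v_i) = { u_j in Chd(u) | P(u_j) is minimally included in T(v_i) }
    return {u_j for u_j in Chd_u if prec[(u_j, v_i)]}

def classic_update(S, S_child):
    # The update S := { A union B | A in S, B in S(v_i) } used by
    # Kilpelainen and Mannila; it can examine up to 2^d * 2^d subset pairs.
    return {A | B for A in S for B in S_child}
\end{verbatim}
The procedure {\bf ComputeSet} below instead extends previously computed sets by at most one element of $M(v_i)$ at a time.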
For any $v_i \in V(T(v))$, $LF(v,v_i)$~denotes the set of nodes in~$V(T(v))$ each of which is left of~$v_i$ (see Fig.~\ref{fig:key-idea} for an example). Next, define $S(v,v_i)$ by: \begin{eqnarray*} S(v,v_i) & = & \{ U \subseteq Chd(u) \,|\, P(U) \subset T(LF(v,v_i)) \} \cup \\ & & \{ U \subseteq Chd(u) \,|\, (U=U' \cup \{u_j\}) \land (P(U') \subset T(LF(v,v_i))) \land\\ & & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(u_j \in M(v_i)) \} \end{eqnarray*} where $T(LF(v,v_i))$ is the forest induced by the nodes in $LF(v,v_i)$ and their descendants. Note that $P(\emptyset) \subset T(...)$ always holds. Note also that each element of $S(v,v_i)$ is a subset of the children of~$u$ that can be included in the forest induced by the nodes left of~$v_i$ and the nodes in $V(T(v_i))$, under the condition that at most one child $u_j$ with $P(u_j)$ included in $T(v_i)$ is used in the corresponding inclusion mapping. The motivation for introducing $S(v,v_i)$ is that Lemma~\ref{lemma:S_v} below will allow us to recover $S(v)$ from a collection of $S(v,v_i)$-sets, and the $S(v,v_i)$-sets can be computed efficiently with dynamic programming. We explain $S(v,v_i)$ using an example based on Fig.~\ref{fig:key-idea}. Suppose we have the relations $P(u_A) \prec T(v_1)$, $P(u_B) \prec T(v_1)$, $P(u_C) \prec T(v_2)$, $P(u_D) \prec T(v_3)$, $P(u_E) \prec T(v_3)$, $P(u_D) \prec T(v_4)$, and $P(u_F) \prec T(v_4)$. Then, the following holds: \begin{tabbing} $S(v,v_0) = \{~\emptyset ~\}$, \\ $S(v,v_1) = \{~\emptyset,~\{u_A\},~\{u_B\} ~\}$, \\ $S(v,v_2) = \{~\emptyset,~\{u_C\}~\}$, \\ $S(v,v_3) = \{~\emptyset,~\{u_D\},~\{u_E\} ~\}$, \\ $S(v,v_4) = \{~\emptyset,~\{u_D\},~\{u_E\},~\{u_F\},~\{u_D,u_E\},~\{u_D,u_F\}, ~\{u_E,u_F\} ~\}$. \end{tabbing} \begin{figure} \caption{An example. A triangle~$X$ attached to~$v_i$ means that $P(u_X) \subset T(v_i)$. Note that the triangle~$D$ appears at~$v_2$, $v_3$, and~$v_4$. However, $P(u_D) \prec T(v_2)$ does not hold since it does not satisfy the minimality condition. Therefore, $u_D$~may be matched to~$v_3$ or~$v_4$, but $u_D$~will never be matched to~$v_2$ in {\bf TreeIncl1}.} \label{fig:key-idea} \end{figure} Next, we present a dynamic programming-based algorithm named {\bf TreeIncl1} for determining if $P(u) \prec T(v)$. To compute all the $S(v,v_i)$-sets, we construct a DAG (directed acyclic graph) $G(V,E)$ from~$T(v)$, as illustrated in Fig.~\ref{fig:dag}. Here, $V$~is defined by $V = V(T(v)) - \{ v \}$, and $E$ is defined by $E = \{ (v_i,v_j) \,|\, v_i \triangleleft v_j \}$. We define $Pred(v_i)$ by $Pred(v_i) = \{v_j \,|\, (v_j,v_i) \in E \}$, meaning the set of the ``predecessors'' of~$v_i$, which is equivalent to $LF(v,v_i)$. {\bf TreeIncl1}~traverses $G(V,E)$ so that node~$v_i$ is visited only after all of its predecessors have been visited, at which point it runs the procedure {\bf ComputeSet}$(v,v_i)$~below to compute and store~$S(v,v_i)$ for this~$v_i$. Recall that $M(v_i) = \{ u_j \in Chd(u) \,|\, P(u_j) \prec T(v_i) \}$.
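As an illustration of the left-to-right ordering and of the DAG $G(V,E)$, the following Python sketch (ours; \texttt{parent}, \texttt{children} and the symmetric ancestor/descendant predicate \texttt{anc\_des} are assumed placeholders) decides $v_i \triangleleft v_j$ via the lowest common ancestor and lists the edges of $G(V,E)$ by brute force.
\begin{verbatim}
def path_to_root(x, parent):
    """Path from x up to the root (inclusive); parent[root] is None."""
    path = []
    while x is not None:
        path.append(x)
        x = parent[x]
    return path

def left_of(vi, vj, parent, children):
    """True iff vi is left of vj; vi and vj must not be in an
    ancestor-descendant relationship."""
    pi, pj = path_to_root(vi, parent), path_to_root(vj, parent)
    on_pj = set(pj)
    lca = next(x for x in pi if x in on_pj)   # lowest common ancestor
    ci = pi[pi.index(lca) - 1]                # child of lca towards vi
    cj = pj[pj.index(lca) - 1]                # child of lca towards vj
    sibs = children[lca]                      # fixed left-to-right order
    return sibs.index(ci) < sibs.index(cj)

def dag_edges(V, parent, children, anc_des):
    """E = { (vi, vj) | vi is left of vj } over V = V(T(v)) - {v};
    anc_des(x, y) means x is an ancestor or a descendant of y."""
    return [(vi, vj) for vi in V for vj in V
            if vi != vj and not anc_des(vi, vj)
            and left_of(vi, vj, parent, children)]
\end{verbatim}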
\begin{figure} \caption{Example of the DAG $G(V,E)$ constructed from~$T(v)$, where $v \notin V$, $E$~is shown by dotted arrows, and $T(v)$~is shown by bold lines.} \label{fig:dag} \end{figure} \noindent Procedure {\bf ComputeSet}$(v,v_i)$: \begin{itemize} \item [(1)\,\,\,] If $Pred(v_i) = \emptyset$ then $S(v,v_i) := \{ \emptyset \} \cup \{ \{u_h\} \,|\, u_h \in M(v_i) \}$ \item [(2)\,\,\,] Else: \item [(2a)] ~ ~ $S_0(v_i) := \bigcup_{v_j \in Pred(v_i)} S(v,v_{j})$ \item [(2b)] ~ ~ $S(v,v_i) := S_0(v_i) \cup \{ S \cup \{ u_h \} \,|\, u_h \in M(v_i), S \in S_0(v_i) \}$ \end{itemize} Finally, after $G(V,E)$ has been completely traversed, {\bf TreeIncl1}~assigns $S(v) := \bigcup_{v_i \in Des(v)} S(v,v_i)$. Then $P(u)$ is included in $T(v)$ with $u$ corresponding to~$v$ if and only if $u$~and $v$ have the same label and $Chd(u) \in S(v)$. Note that $S(v)=\emptyset$ holds for each $v$ if $Chd(u)=\emptyset$. \begin{lemma} Procedure {\bf ComputeSet}$(v,v_i)$ correctly computes $S(v,v_i)$s, and $S(v) = \cup_{v_i \in Des(v)} S(v,v_i)$. \label{lemma:S_v} \end{lemma} \begin{proof} First we show that {\bf ComputeSet}$(v,v_i)$ correctly computes $S(v,v_i)$s. It is seen from Proposition~\ref{prop:central} that $U \in S(v,v_i)$ holds for $U=\{u_{i_1},\ldots,u_{i_k}\} \subseteq Chd(u)$ ($U = \emptyset$ if $k=0$) if and only if there exists a sequence of nodes $(v_{j_1},v_{j_2},\ldots,v_{j_k})$ such that $P(u_{i_p}) \prec T(v_{j_p})$ holds for all $p=1,\ldots,k$ and $v_{j_1} \triangleleft v_{j_2} \triangleleft \cdots \triangleleft v_{j_k}$ (by appropriately renumbering indices of $u_{i_1},\ldots,u_{i_k}$), where $v_{j_k} = v_i$ or $v_{j_k} \triangleleft v_i$. On the other hand, it is seen from {\bf ComputeSet}$(v,v_i)$ that this procedure examines all possible sequences such that $v_{j_1} \triangleleft v_{j_2} \triangleleft \cdots \triangleleft v_{j_{k'}}$ with $v_{j_{k'}} = v_i$ or $v_{j_{k'}} \triangleleft v_i$, and adds at most one $u_h \in M(v_{j_p})$ to each set in $S_0(v_{j_p})$. It is also seen from the procedure and the above discussion that $S_0(v_{j_p})$ consists of $U$s such that $U \subseteq Chd(u)$ and $P(U) \subset T(LF(v,v_{j_p}))$. Therefore, we can see from the definition of $M(\ldots)$ that {\bf ComputeSet}$(v,v_i)$ correctly computes $S(v,v_i)$s. Next, we show the second statement of the lemma. Let $U \in S(v)$ and $d_U=|U|$. Let $\phi$ be an injection from $U$ to $Des(v)$ giving an inclusion mapping for $P(U) \subset T(v)$, which is the one guaranteed by Proposition~\ref{prop:central}. Let $\{v_1',\ldots,v_{d_U}'\} = \{ \phi(u_j) | u_j \in U \}$, where $v_1' \triangleleft v_2' \triangleleft \cdots \triangleleft v_{d_U}'$. Then, $v_i' \in LF(v,v_{i+1}')$ and $v_i' \in LF(v,v_{d_U}')$ hold for all $i=1,\ldots,d_U-1$. Furthermore, $P(u_j) \prec T(v_i')$ holds for $v_i' = \phi(u_j)$. Therefore, $U \in S(v,v_{d_U}')$. It is straightforward to see that $S(v,v_i)$ does not contain any element not in~$S(v)$. \end{proof} The overall procedure of {\bf TreeIncl1} is given by the pseudocode of {\bf Algorithm}~\ref{alg:treeincl1}. In this procedure, we traverse nodes in both $P$ and $T$ from left to right in postorder (i.e., from the leaves to the root). We maintain $Min(u)$ for $u \in V(P)$ (resp., $Min(v)$ for $v \in V(T)$) that consists of the currently available nodes $v'$ (resp., $u'$) such that $P(u) \prec T(v')$ (resp., $P(u') \prec T(v)$).
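For illustration only, the following Python sketch (ours, not part of the formal description) transcribes the two cases of {\bf ComputeSet} and the final assembly of $S(v)$; \texttt{topo}, \texttt{pred} and \texttt{M} are assumed to be precomputed as described above, and subsets of $Chd(u)$ are represented as \texttt{frozenset}s rather than bit vectors.
\begin{verbatim}
def compute_sets(topo, pred, M):
    """topo: nodes of G(V,E) listed so that every node comes after all of
    its predecessors; pred[v_i]: predecessor list (= LF(v, v_i));
    M[v_i]: the match set defined above."""
    S = {}                                    # S[v_i] plays the role of S(v, v_i)
    for v_i in topo:
        if not pred[v_i]:                     # case (1)
            S[v_i] = {frozenset()} | {frozenset([u_h]) for u_h in M[v_i]}
        else:                                 # case (2)
            S0 = set().union(*(S[v_j] for v_j in pred[v_i]))        # (2a)
            S[v_i] = S0 | {A | frozenset([u_h])                     # (2b)
                           for u_h in M[v_i] for A in S0}
    return S

def include_test(S, chd_u, label_u, label_v):
    """P(u) is included in T(v) with u mapped to v iff the labels agree
    and Chd(u), as a set, occurs in S(v) = union of the S(v, v_i)."""
    S_v = set().union(*S.values()) if S else set()
    return label_u == label_v and frozenset(chd_u) in S_v
\end{verbatim}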
\begin{algorithm} \caption{{\bf TreeIncl1}$(P,T)$} \label{alg:treeincl1} \begin{algorithmic} \FOR{all $v \in V(P) \cup V(T)$} \STATE{$Min(v):=\emptyset$} \ENDFOR \FOR{all $u \in V(P)$ in postorder} \FOR{all $v \in V(T)$ in postorder} \STATE{$M(v):=\{ u_j \in Chd(u)| u_j \in Min(v)\}$} \FOR{all $v_i \in Des(v)$ in postorder} \STATE{{\bf ComputeSet$(v,v_i)$}} \ENDFOR \STATE{$S(v) := \cup_{v_i \in Des(v)} S(v,v_i)$} \IF{$Chd(u) \in S(v)$ \AND $u \notin Min(v_i)$ for all $v_i \in Des(v)$ \AND $\ell(u)=\ell(v)$} \STATE{$Min(v):=Min(v)\cup\{u\}$} \STATE{$Min(u):=Min(u)\cup\{v\}$} \ENDIF \ENDFOR \ENDFOR \RETURN $Min(r(P))$ \end{algorithmic} \end{algorithm} \begin{lemma} {\bf TreeIncl1}~outputs the set of all nodes $v$ such that $P \prec T(v)$ in $O(d \cdot 2^d \cdot m n^3)$ time using $O(d \cdot 2^d \cdot n + mn)$ space. \label{lemma:alga} \end{lemma} \begin{proof} Since the correctness follows from Lemma~\ref{lemma:S_v}, we analyze the time complexity. The sizes of the $S(v)$, $S(v,v_{i_j})$s, and $S_0(v_i)$s are $O(d \cdot 2^d)$, where we can use a simple bit vector of size $O(d)$ to represent each subset of $U$. The computation of each of these sets takes $O(d \cdot 2^d \cdot n)$ time. Since the number of $S(v,v_{i_j})$s and $S_0(v_i)$s per $(u,v) \in V(P) \times V(T)$ is $O(n)$, the total computation time for $S(v,v_{i_j})$s per $(u,v)$ is $O(d \cdot 2^d \cdot n^2)$. Hence, the total computation time for computing $S(v)$s for all $(u,v)$s is $O(d \cdot 2^d \cdot m n^3)$. Since the size of each $S(v,v_i)$ is $O(d \cdot 2^d)$ and we need to maintain $S(v,v_i)$ for $v_i \in Des(v)$ per $(u,v)$, $O(d \cdot 2^d \cdot n)$ space is enough to maintain $S(v,v_i)$s. Note that we can re-use the same space for different $(u,v)$s. The time needed for other operations can be analyzed as follows. We can use simple bit vectors to maintain $Min(u)$s and $Min(v)$s, which need $O(mn)$ space in total and $O(1)$ time per addition of an element or checking of the membership. Therefore, the total computation time required to maintain $Min(u)$s and $Min(v)$s is $O(mn)$. Furthermore, $M(v)$ can be computed in $O(d)$ time per $(u,v)$ and thus the total time to compute $M(v)$s is $O(dmn)$, and ``$u \notin Min(v_i)$ for all $v_i \in Des(v)$'' can be checked in $O(|Des(v)|) \leq O(n)$ time per $(u,v)$ and thus the total computation time needed for this checking is $O(mn^2)$. Therefore, the time and space complexities of {\bf TreeIncl1}~are $O(d \cdot 2^d \cdot m n^3)$ and $O(d \cdot 2^d \cdot n + mn)$, respectively. \end{proof} \textbf{Remark:} If there exist $u_i, u_j \in Chd(u), u_i \neq u_j$ such that $P(u_i) \sim P(u_j)$, we treat each element in $S(v)$, $S(v,v_{i_j})$s, and $S_0(v_i)$s as a multiset where any $u_i$ and $u_j$ such that $P(u_i) \sim P(u_j)$ are identified and the multiplicity of $u_i$ is bounded by the number of $P(u_j)$s isomorphic to $P(u_i)$. Then, since $|Chd(u)| \leq d$ for all $u$ in $P$, the size of each multiset is at most $d$ and the number of different multisets is not greater than $2^d$. Therefore, the same time complexity result holds. (The same arguments can be applied to the following sections.) Note that by treating~$u_i$ and~$u_j$ separately, we do not need to modify the algorithm. \begin{figure} \caption{Example of $T'(v)$ and $G'(V',E')$. $E'$~is shown by dashed arrows.} \label{fig:new-tree} \end{figure} Next, we discuss how to improve the efficiency of {\bf TreeIncl1}. In fact, to compute $S_0(v_i)$, it is not necessary to consider all of the $v_{i_j}$s that are left of~$v_i$.
Instead, we can construct a tree $T'(v)$ from a given~$T(v)$ according to the following rule (see Fig.~\ref{fig:new-tree} for an illustration): \begin{itemize} \item For each pair of consecutive siblings $(v_i,v_j)$ in $T(v)$, add a new sibling (leaf) $v_{(i,j)}$ between $v_i$ and~$v_j$. \end{itemize} Newly added nodes are called \emph{virtual nodes}. All virtual nodes have the same label that does not appear in $P$, to ensure that no $u \in V(P)$ is in $M(v_{(i,j)})$. We then construct a DAG $G'(V',E')$ on $V'=V(T'(v))$ where $(v_i,v_j) \in E'$ if and only if one of the following holds: \begin{itemize} \item $v_j$ is a virtual node, and $v_i$ is in the rightmost path of~$T'(v_{j_1})$, where $v_j = v_{(j_1,j_2)}$; or \item $v_i$ is a virtual node, and $v_j$ is in the leftmost path of~$T'(v_{i_2})$, where $v_i = v_{(i_1,i_2)}$. \end{itemize} By replacing $G(V,E)$ by~$G'(V',E')$ in {\bf TreeIncl1}~(and keeping all other steps intact), we obtain what we call {\bf TreeIncl2}. Note that in {\bf TreeIncl2}, $v_{(i,j)}$s are treated in the same ways as for $v_i$s and thus we need not introduce the definitions for such terms as $S(v.v_{(i,j)})$ and $LF(v,v_{(i,j)})$ nor change the definition of $S(v.v_i)$. \begin{figure} \caption{Illustration of a path from $v_i$ to $v_j$ in the proof of Lemma~\ref{lemma:algb}.} \label{fig:path} \end{figure} \begin{lemma} {\bf TreeIncl2}~computes $S(v,v_i)$s for all $v_i \in Des(v)$ in $O(d \cdot 2^d \cdot n)$ time per $(u,v) \in V(P) \times V(T)$. \label{lemma:algb} \end{lemma} \begin{proof} First we prove that there exists a path in $G'(V',E')$ from $v_i \in V$ to $v_j \in V$ if and only if $v_i \triangleleft v_j$ (see also Fig.~\ref{fig:path}). It can be seen from the definition of the left-right relationship that if $(v_i,v_k) \in E'$ and $(v_k,v_j) \in E'$ where $v_i,v_j \in V$ and $v_k$ is a virtual node, then $v_i \triangleleft v_j$. Since virtual nodes and non-virtual nodes appear alternatively in every path in $G'(V',E')$, the ``only if'' part holds. Suppose that $v_i \triangleleft v_j$ holds for $v_i,v_j \in V$. Let $v_k$ be the lowest common ancestor of $v_i$ and $v_j$. We assume w.l.o.g. that $v_i$ or $v_j$ is not a child of $v_k$ because the other cases can be proved in the same way. Let $v_{k_1},v_{k_2},v_{k_3},v_{k_4}$ be children of $v_k$ such that $v_{k_1} \in Anc(v_i)$, $(v_{k_1},v_{k_2}) \in E'$, $(v_{k_3},v_{k_4}) \in E'$, and $v_{k_4} \in Anc(v_j)$, where $v_{k_2}$ and $v_{k_3}$ are virtual nodes and can be the same node. We show that there exists a path in $G'(V',E')$ from $v_i$ to $v_{k_2}$. Let $v_h$ be the lowest ancestor of $v_i$ that has children $v_{h_1},v_{h_2},v_{h_3},v_{h_4}$ such that $v_{h_1} \in Ans(v_i)$, $(v_{h_1},v_{h_2}) \in E'$, $(v_{h_3},v_{h_4}) \in E'$, and $v_{h_4}$ ($\neq v_{h_1}$) is the rightmost child of $v_h$, where $v_{h_2}$ and $v_{h_3}$ are virtual nodes and can be the same node. Then, $(v_i,v_{h_2}) \in E'$ holds from the construction of $G'(V',E')$ and thus there exists a path from $v_i$ to $v_{h_4}$. We can repeat this procedure by regarding $v_{h_4}$ as $v_i$, and so on, from which it follows that there exists a path in $G'(V',E')$ from $v_i$ to $v_{k_2}$. It is also seen from the symmetry on the left-right relationship that there exists a path in $G'(V',E')$ from $v_{k_3}$ to $v_j$. Furthermore, there clearly exists a path in $G'(V',E')$ from $v_{k_2}$ to $v_{k_3}$, which completes the proof of the ``if'' part. 
Moreover, from the above discussion, it can be seen that {\bf TreeIncl2}~examines the same set of sequences $v_{j_1} \triangleleft v_{j_2} \triangleleft \cdots \triangleleft v_{j_{k'}}$ as {\bf TreeIncl1}~examines when ignoring virtual nodes. Furthermore, addition of an element is not performed at any virtual node, and no element is deleted at any virtual or non-virtual node $v$ in constructing $S(v)$. Therefore, {\bf TreeIncl2}~correctly computes $S(v,v_i)$s. Next we analyze the time complexity. We can see that $|E'| = O(n)$ since: \begin{itemize} \item $|V(T'(v))| = O(n)$; \item each non-virtual node in $G'(V',E')$ has at most one incoming edge and at most one outgoing edge; and \item each new edge connects a non-virtual node and virtual node. \end{itemize} Therefore, the total number of set operations is $O(d \cdot 2^d \cdot n)$, and the lemma follows. \end{proof} From Lemmas \ref{lemma:alga} and \ref{lemma:algb}, we have the following main theorem. \begin{theorem} Unordered tree inclusion can be solved in $O(d \cdot 2^d \cdot mn^2)$ time and $O(d \cdot 2^d \cdot n + mn)$ space. \label{thm:main} \end{theorem} If we use the height~$h(T)$ of a tree~$T$ as an additional parameter, we can express the time complexity as $O(d \cdot 2^d \cdot h(T) \cdot mn)$ because the time complexity is represented in this case as $O(m \sum_{v \in V(T)} d \cdot 2^d \cdot |T(v)|)$ and $\sum_{v \in V(T)} |T(v)| \leq (h(T)+1)n$ hold. This bound is better than the one by Kilpel\"{a}inen and Mannila~\cite{kilpelainen1995ordered} when $d$ is large (to be precise, when $d > c \log(h(T))$ for some constant~$c$). \section{NP-hardness of the case of pattern trees with unique leaf labels} \label{sec:hardness} For any node-labeled tree~$T$, let $L(T)$ be the set of all leaf labels in~$T$. For any $c \in L(T)$, let $occ(T,c)$ be the number of times that~$c$ occurs in~$T$, and define $occ(T) = \max_{c \in L(T)} occ(T,c)$. The decision version of the tree inclusion problem is the problem of determining whether~$T$ can be obtained from~$P$ by applying a sequence of node insertion operations. Kilpel\"{a}inen and Mannila~\cite{kilpelainen1995ordered} proved that the decision version of unordered tree inclusion is NP-complete by a reduction from Satisfiability. In their reduction, the clauses in a given instance of Satisfiability are used to label the non-root nodes in the constructed trees~$P$ and~$T$; in particular, for every clause~$C$, each literal in~$C$ introduces one node in~$T$ whose node label represents~$C$. (See the proofs of Lemma~7.2 and Theorem~7.3 in~\cite{kilpelainen1995ordered} by Kilpel\"{a}inen and Mannila for details.) By using 3-SAT instead of Satisfiability in their reduction, every clause will determine the label of at most three nodes in~$T$, so we immediately have: \begin{corollary} \label{corollary:KM_hardness} The decision version of the unordered tree inclusion problem is NP-complete even if restricted to instances where $h(T) = 2$, $h(P) = 1$, $occ(T) = 3$, and $occ(P) = 1$. \end{corollary} In Kilpel\"{a}inen and Mannila's reduction, the labels assigned to the internal nodes of~$T$ are significant. Here, we consider the computational complexity of the special case of the problem where all internal nodes in~$P$ and~$T$ have the same label, or equivalently, where only the leaves are labeled. The next theorem is the main result of this section. 
\begin{theorem} \label{thm:unique_leaves_NP-complete} The decision version of the unordered tree inclusion problem is NP-complete even if restricted to instances where $h(T) = 2$, $h(P) = 2$, $occ(T) = 3$, $occ(P) = 1$, and all internal nodes have the same label. \end{theorem} \begin{proof} Membership in NP was shown in the proof of Theorem~7.3 by Kilpel\"{a}inen and Mannila~\cite{kilpelainen1995ordered}. Next, to prove the NP-completeness, we present a reduction from \textsc{Exact Cover by 3-Sets (X3C)}, which is known to be NP-complete~\cite{book:GarJoh79}. \textsc{X3C} is defined as follows. \noindent \fbox{ \parbox{\boxwidth}{ \textsc{Exact Cover by 3-Sets (X3C)}: Given a set $U = \{u_1, u_2, \dots, u_n\}$ and a collection $\mathcal{S} = \{S_1, S_2, \dots, S_m\}$ of subsets of~$U$ where $|S_i| = 3$ for every $S_i \in \mathcal{S}$ and every $u_i \in U$ belongs to at most three subsets in~$\mathcal{S}$, does $(U,\mathcal{S})$ admit an exact cover, i.e., is there an $\mathcal{S}' \subseteq \mathcal{S}$ such that $|\mathcal{S}'| = n/3$ and $\bigcup_{S_i \in \mathcal{S}'} S_i = U$? } } We assume w.l.o.g. that in any given instance of \textsc{X3C}, $n/3$~is an integer and each $u_i \in U$ belongs to at least one subset in~$\mathcal{S}$. Given an instance $(U,\mathcal{S})$ of \textsc{X3C}, construct two node-labeled, unordered trees~$T$ and~$P$ as described next. (Refer to Fig.~\ref{fig:unique_leaves_example} for an example of the reduction.) Let $W = \{s_i^j \,:\, 1 \leq i \leq m,\, 0 \leq j \leq n/3\}$ be a set of elements different from~$U$ (i.e., $U \cap W = \emptyset$), define $L = U \cup W$, and let $\alpha$ be an element not in~$L$. For any $L' \subseteq L$, let $t(L')$ denote the height-$1$ unordered tree consisting of a root node labeled by~$\alpha$ whose children are bijectively labeled by~$L'$. Construct $T$ by creating a node~$r$ labeled by~$\alpha$ and attaching the roots of the following trees as children of~$r$: \begin{itemize} \item[(i)] $t(\{s_i^0\} \cup S_i)$ for each $i \in \{1,2,\dots,m\}$ \item[(ii)] $t(\{s_i^{j-1},s_i^{j}\})$ for each $i \in \{1,2,\dots,m\}$, $j \in \{1,2,\dots,n/3\}$ \item[(iii)] $t(\{s_1^j,s_2^j,\dots,s_m^j\})$ for each $j \in \{1,2,\dots,n/3\}$ \end{itemize} Construct $P$ by taking a copy of~$t(U)$ and then, for each $w \in W$, attaching the root of~$t(\{w\})$ as a child of the root of~$P$. Note that by construction, $L(T) = L(P) = L$, $h(T) = 2$, $h(P) = 2$, $occ(T) = 3$, and $occ(P) = 1$ hold. \begin{figure}\label{fig:unique_leaves_example} \end{figure} We will now show that $P$ is included in~$T$ if and only if $(U,\mathcal{S})$ admits an exact cover. \noindent $(\leftarrow)$ First, suppose that $(U,\mathcal{S})$ admits an exact cover $\{S_{\sigma_1}, S_{\sigma_2}, \dots, S_{\sigma_{n/3}}\} \subseteq \mathcal{S}$. Then $P$ is included in~$T$ because: \begin{itemize} \item[{\raise0.9pt\hbox{$\bullet$}}] For each $S_i \in \mathcal{S}$ in the exact cover, the three leaves in~$P$ that are labeled by~$S_i$ can be mapped to the $t(\{s_i^0\} \cup S_i)$-subtree in~$T$. \item[{\raise0.9pt\hbox{$\bullet$}}] For each $S_i \in \mathcal{S}$ in the exact cover, the leaf in~$P$ labeled by~$s_i^j$ can be mapped to the $t(\{s_i^{j},s_i^{j+1}\})$-subtree in~$T$ for $j \in \{0,1,\dots,k-1\}$, to the $t(\{s_1^j,s_2^j,\dots,s_m^j\})$-subtree for $j = k$, and to the $t(\{s_i^{j-1},s_i^{j}\})$-subtree for $j \in \{k+1,k+2,\dots,n/3\}$, where $k$ is defined by $S_i = S_{\sigma_k}$. 
\item[{\raise0.9pt\hbox{$\bullet$}}] For each $S_i \in \mathcal{S}$ that is not in the exact cover, the leaf in~$P$ labeled by~$s_i^0$ can be mapped to the $t(\{s_i^0\} \cup S_i)$-subtree in~$T$. \item[{\raise0.9pt\hbox{$\bullet$}}] For each $S_i \in \mathcal{S}$ that is not in the exact cover, the leaf in~$P$ labeled by~$s_i^j$ can be mapped to the $t(\{s_i^{j-1},s_i^{j}\})$-subtree in~$T$ for $j \in \{1,2,\dots,n/3\}$. \end{itemize} \noindent $(\rightarrow)$ Next, suppose that $P$ is included in~$T$. By the definitions of~$T$ and~$P$, each subtree rooted at a child of the root of~$T$ can have at most one leaf with a label in~$W$ or at most three leaves with labels in~$U$ mapped to it from~$P$. Since $|W| = m \cdot (n/3 + 1)$ but there are only $(m+1) \cdot n/3$ subtrees in~$T$ of the form $t(\{s_i^{j-1},s_i^{j}\})$ and $t(\{s_1^j,s_2^j,\dots,s_m^j\})$, at least $m - n/3$ subtrees of the form $t(\{s_i^0\} \cup S_i)$ must have a leaf with a label from $\{s_i^0 \,:\, 1 \leq i \leq m\}$ mapped to them. This means that at most $n/3$ subtrees of the form $t(\{s_i^0\} \cup S_i)$ remain for the $n$~leaves in~$P$ labeled by~$U$ to be mapped to, and hence, exactly $n/3$ such subtrees have to be used. Denote these $n/3$ subtrees by $t(\{s_{\sigma_1}^0\} \cup S_{\sigma_1})$, $t(\{s_{\sigma_2}^0\} \cup S_{\sigma_2})$, $\dots$, $t(\{s_{\sigma_{n/3}}^0\} \cup S_{\sigma_{n/3}})$. Then $\{S_{\sigma_1}, S_{\sigma_2}, \dots, S_{\sigma_{n/3}}\}$ is an exact cover of~$(U,\mathcal{S})$. \end{proof} \section{A polynomial-time algorithm for the case of $occ(P,T)=2$} \label{sec:poly} This section and the following ones consider the decision version of unordered tree inclusion. By repeatedly applying each procedure $O(n)$ times, we can solve the locating problem version and thus the theorems hold as they are. In this section, we require that each leaf of $P$ has a unique label and that it appears at no more than $k$ leaves in $T$. We denote this number $k$ by $occ(P,T)$ (see Fig.~\ref{fig:d2d3}). Note that the case of $occ(P)=1$ and $occ(T)=k$ is included in the case of $occ(P,T)=k$. From the unique leaf label assumption, we have the following observation. \begin{figure} \caption{For these trees, $Occ(u_1,M)=Occ(u_2,M)=3$, $Occ(u_3,M)=Occ(u_4,M)=Occ(u_5,M)=2$, $d_2=3$, $d_3=2$, and $occ(P,T)=3$,} \label{fig:d2d3} \end{figure} \begin{proposition} Suppose that $P(u)$ has a leaf labeled with $b$. If $P(u) \subset T(v)$, then $v$ is an ancestor of a leaf (or leaf itself) with label $b$. \end{proposition} We say that $v_j$ is a \emph{minimal node for $u_i$} if $P(u_i) \prec T(v_j)$ holds. It follows from the proposition above that the number of minimal nodes is at most $k$ for each $u_i$ if $occ(P,T)=k$. The preliminary version of this paper \cite{akutsu2018unordered} showed that the case $k=2$ can be solved in polynomial time by using a reduction to 2-SAT. Here, we give a more direct solution that effectively utilizes some techniques from a classic polynomial-time algorithm for 2-SAT~\cite{aspvall1979}. This algorithm will be extended for the case of $k=3$ in the next subsection. From Proposition~\ref{prop:bottom-up}, it is enough to consider the decision of whether $P(u) \subset T(v)$ with $u$ corresponding to $v$. Let $Chd(u) = \{u_1,\ldots,u_d\}$. We present a simple algorithm to decide whether or not $P(u) \subset T(v)$. We can assume by induction that $P(u_i) \prec T(v_j)$ is known for all $u_i \in Chd(u)$ and for all $v_j \in V(T(v)) - \{ v \}$. 
Let $M = \{ (u_i,v_j) \,|\, P(u_i) \prec T(v_j) ~\land~ v_j \in V(T(v)) \}$. We define $OCC(u_i,M)$ and $Occ(u_i,M)$ by \begin{eqnarray*} OCC(u_i,M) & = & \{(u_i,v_j) \,|\, (u_i,v_j) \in M\}.\\ Occ(u_i,M) & = & |OCC(u_i,M)|. \end{eqnarray*} See Fig.~\ref{fig:d2d3} for an illustration. A node $u_i$ with $Occ(u_i,M)=h$ is called a node of \emph{rank} $h$. Note that $u_i$, $v_j$, and $M$ appearing above depend on $(u,v)$. The crucial task is to find an injective mapping $\psi$ (called a \emph{valid mapping}) from $P(u)$ to $V(T(v))-\{v\}$ such that $P(u_i) \prec T(\psi(u_i))$ holds for all $u_i$ ($i=1,\ldots,d$) and there is no ancestor/descendant relationship between any $\psi(u_i)$ and $\psi(u_j)$ ($u_i \neq u_j$). If this task can be performed in $O(f(d,m,n))$ time, from Proposition~\ref{prop:bottom-up}, the total time complexity will be $O^{\ast}(f(d,m,n))$. We assume w.l.o.g. that $\psi$ is given as a set of mapping pairs. Hereafter, we let $Chd(u)=\{u_{i_1},\ldots,u_{i_d}\}$. Since we consider the case of $occ(P,T)$ $=2$, we assume w.l.o.g. that all $u_{i_k}$s have rank 2 (i.e., $Occ(u_{i_k},M)=2$ for $k=1,\ldots,d$). Accordingly, we let $OCC(u_{i_k},M)=\{(u_{i_k},v_{j_{k,0}}),(u_{i_k},v_{j_{k,1}})\}$ for $k=1,\ldots,$ $d$. As in \cite{aspvall1979}, we construct a directed graph $G_2(V_2,E_2)$ by \begin{eqnarray*} V_2 & = & \{u_{i_{k,0}},u_{i_{k,1}} \mid u_{i_k} \in Chd(u)\},\\ E_2 & = & \{(u_{i_{k,p}},u_{i_{h,q}}) \mid v_{j_{k,p}} \in AncDes(v_{j_{h,1-q}}),~h \neq k \}, \end{eqnarray*} where $u_{i_{k,p}}$s are newly introduced symbols. See also Fig.~\ref{fig:occ-2}. Intuitively, an arc $(u_{i_{k,p}},u_{i_{h,q}})$ implies that if $(u_{i_k},v_{j_{k,p}})$ is in the inclusion mapping then it is possible for $(u_{i_h},v_{j_{h,q}})$, but not $(u_{i_h},v_{j_{h,1-q}})$, to be in the mapping, too. \begin{figure}\label{fig:occ-2} \end{figure} \begin{proposition} There exists a path (resp., an edge) from $u_{i_{k,p}}$ to $u_{i_{h,q}}$ if and only if there exists a path (resp., an edge) from $u_{i_{h,1-q}}$ to $u_{i_{k,1-p}}$ \end{proposition} \begin{proof} It is shown in \cite{aspvall1979} that $G_2(V_2,E_2)$ has a duality property: $G_2$ is isomorphic to the graph obtained from $G_2$ by reversing the direction of all the edges and complementing the names of all vertices. Since $u_{i_{h,q}}$ and $u_{i_{h,1-q}}$ (resp., $u_{i_{k,p}}$ and $u_{i_{k,1-p}}$) correspond to complementary variables, the proposition holds. \end{proof} Consider a 0-1 assignment to $V_2$, where 0 and 1 correspond to {\bf false} and {\bf true}, respectively. An assignment is called \emph{consistent} if the following conditions are satisfied. \begin{itemize} \item $u_{i_{k,0}} + u_{i_{k,1}} = 1$ holds for all $k=1,\ldots,d$, \item if $u_{i_{k,p}}=1$, all vertices reachable from $u_{i_{k,p}}$ have value 1. \end{itemize} Note that the first condition implies that $u_{i_{k,1-p}}$ corresponds to the negation of $u_{i_{k,p}}$, which further means that $u_{i_k}$ must be mapped to exactly one of $v_{j_{k,0}}$ and $v_{j_{k,1}}$. Note also that the second condition implies that if $u_{i_{k,p}}=0$, all vertices reachable to $u_{i_{k,p}}$ have value 0. \begin{proposition} $P(u) \subset T(v)$ holds if and only if there exists a consistent assignment. Furthermore, $\psi$ can be obtained from the vertices to which 1 is assigned. \end{proposition} \begin{proof} Suppose that there exists a consistent assignment. 
Then, we can construct an inclusion mapping $\psi$ for $Chd(u)$ by letting $\psi(u_{i_k})=v_{j_{k,p}}$ for $p$ such that $u_{i_{k,p}}=1$, for all $u_{i_k} \in Chd(u)$, where the validity follows from the above two conditions and the meaning of an arc. Conversely, suppose that there exists an inclusion mapping $\psi$. Then, we let $u_{i_{k,p}}=1$ if and only if $\psi(u_{i_k})=v_{j_{k,p}}$ for all $u_{i_k} \in Chd(u)$, which clearly satisfies the above two conditions. \end{proof} As in \cite{aspvall1979}, we have the following proposition. \begin{proposition} There exists a consistent assignment to $V_2$ if and only if there is no $k$ such that $u_{i_{k,0}}$ and $u_{i_{k,1}}$ belong to the same strongly connected component in $G_2(V_2,E_2)$. \end{proposition} The strongly connected components can be computed in linear time \cite{tarjan1972}. Furthermore, a consistent assignment can be obtained by greedily assigning 1 to vertices from deeper to shallower SCCs under the DFS (depth first search) ordering as in \cite{aspvall1979}. Since this procedure can clearly be done in polynomial time, the following theorem holds. \begin{theorem} Unordered tree inclusion can be solved in polynomial time if $occ(P,T)=2$. \label{thm:poly} \end{theorem} \section{An $O^{\ast}(1.619^d)$-time algorithm for the case of $occ(P,T)=3$} \label{sec:occ3} In this section, we present an $O^{\ast}(1.619^d)$-time algorithm for the case of $occ(P,T)=3$, where $d$ is the maximum degree of $P$, $m=|V(P)|$, and $n=|V(T)|$. Note that this case remains NP-hard from Theorem~\ref{thm:unique_leaves_NP-complete}. The basic strategy is to combine bottom-up dynamic programming and detection of a consistent assignment as in Section~\ref{sec:poly} to determine whether $P(u) \subset T(v)$ holds, where a recursive procedure is employed for finding a consistent assignment. Let $Chd(u)=\{u_{i_1},\ldots,u_{i_d}\}$. As in Section~\ref{sec:poly}, we can assume that $P(u_{i_k}) \prec T(v_{j_h})$ is known for all $u_{i_k}$ and for all $v_{j_h} \in V(T(v)) - \{ v \}$, and we let $M = \{ (u_{i_k},v_{j_h}) \,|\, P(u_{i_k}) \prec T(v_{j_h}) ~\land~ v_{j_h} \in V(T(v)) \}$. Let $d_3$ (resp., $d_2$) be the number of $u_{i_k}$s of rank 3 (resp., rank 2) (see also Fig.~\ref{fig:d2d3}). We assume w.l.o.g. that $d_2 + d_3 = d$ because $Occ(u_{i_k},M)=1$ means that $\psi(u_{i_k})$ is uniquely determined and thus we can ignore $u_{i_k}$s with $Occ(u_{i_k},M)=1$. \begin{figure}\label{fig:occ-3} \end{figure} We construct $G_2(V_2,E_2)$ as in Section~\ref{sec:poly}, using only $u_{i_k}$s with rank 2 and the corresponding $v_{j_h}$s, considering ancestor-descendant relations only among them. Then, for each $u_{i_k} \in Chd(u)$ such that $OCC(u_{i_k},M) = \{(u_{i_k},v_{j_{k,0}}),(u_{i_k},v_{j_{k,1}}),$ $(u_{i_k},v_{j_{k,2}})\}$, we let $V_{OCC3}(u_{i_k}) = \{u_{i_{k,0}},u_{i_{k,1}},u_{i_{k,2}} \}$, where $u_{i_{k,p}}$s are newly introduced symbols. Let $V_{OCC3} = \bigcup_{Occ(u_{i_k},M)=3} V_{OCC3}(u_{i_k})$. Then, we construct $G_3(V_3,E_3)$ from $G_2(V_2,E_2)$ by \begin{eqnarray*} V_3 & = & V_2 \cup V_{OCC3},\\ E_3 & = & E_2 \cup \{ (u_{i_{k,p}},u_{i_{h,q}}) \mid u_{i_{k,p}} \in V_{OCC3}, u_{i_{h,q}} \in V_2, v_{i_{h,1-q}} \in AncDes(v_{i_{k,p}}) \}. \end{eqnarray*} See Fig.~\ref{fig:occ-3} for an example of $G_3(V_3,E_3)$.
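The following Python sketch (ours) illustrates the construction of $G_2(V_2,E_2)$ and the SCC-based consistency test of the previous section, which the present algorithm reuses as a subroutine; it relies on the \texttt{networkx} library only for strongly connected components, and \texttt{occ2} and \texttt{anc\_des} are assumed placeholders. Extending the graph with the rank-3 vertices of $V_{OCC3}$ follows the definition of $E_3$ above in the same way.
\begin{verbatim}
import networkx as nx   # used only for its SCC routine

def build_G2(occ2, anc_des):
    """occ2[k] = (v_{j_{k,0}}, v_{j_{k,1}}) for each rank-2 child u_{i_k};
    anc_des(x, y) means x is an ancestor or a descendant of y in T(v).
    Vertex (k, p) stands for u_{i_{k,p}}."""
    G = nx.DiGraph()
    G.add_nodes_from((k, p) for k in occ2 for p in (0, 1))
    for k, vk in occ2.items():
        for h, vh in occ2.items():
            if h == k:
                continue
            for p in (0, 1):
                for q in (0, 1):
                    # edge (k,p) -> (h,q): picking v_{j_{k,p}} rules out
                    # v_{j_{h,1-q}} as the image of u_{i_h}
                    if anc_des(vk[p], vh[1 - q]):
                        G.add_edge((k, p), (h, q))
    return G

def consistent(G, occ2):
    """Consistency holds iff no k has (k,0) and (k,1) in the same SCC."""
    comp = {x: i
            for i, scc in enumerate(nx.strongly_connected_components(G))
            for x in scc}
    return all(comp[(k, 0)] != comp[(k, 1)] for k in occ2)
\end{verbatim}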
\begin{definition} \label{def:inad} We say that $u_{i_{k,p}} \in V_{OCC3}$ is an \emph{inadmissible vertex} if there exist paths from $u_{i_{k,p}}$ to $u_{i_{l,0}}$ and $u_{i_{l,1}}$ in $G_3(V_3,E_3)$ for some $u_{i_l} \in Chd(u)$ of rank 2. We also say that $(u_{i_{k,p}},u_{i_{h,q}}) \in V_{OCC3} \times V_{OCC3}$ ($k \neq h$) is an \emph{inadmissible pair} if $v_{i_{h,q}} \in AncDes(v_{i_{k,p}})$ holds, or there exist a path reachable from $u_{i_{k,p}}$ to $u_{i_{l,0}}$ in $G_3(V_3,E_3)$ and a path reachable from $u_{i_{h,q}}$ to $u_{i_{l,1}}$ in $G_3(V_3,E_3)$ for some $u_{i_l} \in Chd(u)$ of rank 2. \end{definition} It is to be noted that an inadmissible vertex or an inadmissible pair $(u_{i_{k,p}},u_{i_{h,q}})$ cannot appear in any injective mapping $\psi$ for $P(u) \subset T(v)$ because the use of an inadmissible vertex or an inadmissible pair would make a consistent assignment impossible. Accordingly, we can assume w.l.o.g. that there does not exist an inadmissible vertex $u_{i_{k,p}}$ in $V_{OCC3}$. \begin{proposition} Suppose that there exists a consistent assignment on vertices in $G_2(V_2,E_2)$ in the sense defined in Section~\ref{sec:poly}. If there does not exist an inadmissible pair, there exists a valid mapping $\psi$. Furthermore, such a mapping can be found in polynomial time. \label{prop:admissible} \end{proposition} \begin{proof} We present a greedy algorithm for finding a consistent assignment, from which a valid mapping can be obtained. Beginning with an empty assignment on all vertices in $V_2$, we repeat the following procedure in any order: for each $u_{i_{k}}$ of rank 3, assign 1 to $u_{i_{k,0}}$, assign 0 to $u_{i_{k,1}}$ and $u_{i_{k,2}}$, and assign 1 to all vertices in $G_3(V_3,E_3)$ reachable from $u_{i_{k,0}}$. Finally, we extend the resulting assignment to a consistent assignment by assigning 1 to remaining vertices from deeper to shallower strongly connected components under the DFS ordering. Clearly, this algorithm works in polynomial time. It is also seen from the definition of the inadmissible pair that this algorithm always finds a consistent assignment. \end{proof} We denote the procedure in the proof of Proposition~\ref{prop:admissible} by $FindMappingAD(M)$. This procedure returns {\bf true} or {\bf false}. {\bf true} corresponds to the case where a consistent assignment and a valid mapping $\psi$ exist. It is straightforward to modify the procedure so that it outputs $\psi$ when it exists. In order to handle inadmissible pairs, we employ a simple recursive procedure. Suppose that $(u_{i_{k,p}},u_{i_{h,q}})$ is an inadmissible pair. If we include $(u_{i_k},v_{j_{k,p}})$ in $\psi$, we cannot include $(u_{i_h},v_{j_{h,q}})$ in $\psi$. In this case, $d_3$ is decreased by 2. If we do not include $(u_{i_k},v_{j_{k,p}})$, we can delete this pair from $M$, which decreases $d_3$ by 1. Based on this idea, we obtain the following main procedure for the case of $occ(P,T)=3$. Note that if we include $(u_{i_k},v_{j_{k,p}})$ in $\psi$, all pairs $(u_{i_k},v_{j_{k,r}})$ with $r=0,1,2$ are removed from $M$. Furthermore, all pairs $(u_{i_h},v_{j_{h,q}})$ such that $v_{j_{h,q}} \in AncDes(v_{j_{k,p}})$ are removed from $M$, which may cause further removal. $Update(M,(u_{i_k},v_{j_{k,p}}))$ executes this updating procedure while making the corresponding 0-1 assignments on $G_3(V_3,E_3)$. 
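As a sketch (ours, under the same assumptions and placeholders as above) of how the admissibility tests of Definition~\ref{def:inad} can be implemented by plain reachability queries on $G_3(V_3,E_3)$:
\begin{verbatim}
import networkx as nx

def reachable(G3, x):
    """Vertices reachable from x in G_3 (not including x itself)."""
    return nx.descendants(G3, x)

def inadmissible_vertex(G3, x, rank2):
    """x = (k, p) with u_{i_k} of rank 3; rank2 is the index set of the
    rank-2 children.  x is inadmissible if it reaches both (l,0) and
    (l,1) for some rank-2 child l."""
    R = reachable(G3, x)
    return any((l, 0) in R and (l, 1) in R for l in rank2)

def inadmissible_pair(G3, x, y, target, rank2, anc_des):
    """x = (k, p), y = (h, q), k != h, both of rank 3; target[(k, p)] is
    the tree node v_{j_{k,p}}."""
    if anc_des(target[y], target[x]):
        return True
    Rx, Ry = reachable(G3, x), reachable(G3, y)
    return any((l, 0) in Rx and (l, 1) in Ry for l in rank2)
\end{verbatim}
Both orderings of a candidate pair are tested by the caller, so the asymmetry in the last check is immaterial.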
\begin{algorithm} \caption{$FindMapping(M)$} \label{alg:findmap} \begin{algorithmic} \IF{there exists an inadmissible pair $((u_{i_k},v_{j_{k,p}}),(u_{i_h},v_{j_{h,q}}))$} \STATE{$M_1 := Update(M,(u_{i_k},v_{j_{k,p}}))$} \STATE{$M_2 := M - \{(u_{i_k},v_{j_{k,p}}) \}$} \IF{$FindMapping(M_1) = \TRUE$} \RETURN{\TRUE} \ELSE \RETURN{$FindMapping(M_2)$} \ENDIF \ENDIF \RETURN{$FindMappingAD(M)$} \end{algorithmic} \end{algorithm} \begin{theorem} Unordered tree inclusion can be solved in $O^{\ast}(1.619^d)$ time if $occ(P,$ $T)=3$. \label{thm:occ-3} \end{theorem} \begin{proof} It follows from the discussions above that $FindMapping(M)$ correctly decides whether $P(u) \subset T(v)$ (when $u$ and $v$ have the same label). Therefore, we analyze the exponential factor (depending on $d$) of the time complexity of $FindMapping(M)$. Let $f(k)$ denote the number of times that $FindMapping(M)$ is called when $k = |\{ u_i \,|\, Occ(u_i,M) = 3 \}|$. Clearly, if $k \leq 1$, $f(k) \leq 1$. Otherwise (i.e., $k \geq 2$), it may invoke two recursive calls: one with at most $k-2$ nodes of rank 3 and the other with at most $k-1$ nodes of rank 3. Therefore, we have \begin{eqnarray*} f(k) & \leq & f(k-1) + f(k-2), \end{eqnarray*} from which $f(k) = O(1.619^k)$ follows (c.f., Fibonacci number). Since $d_3 \leq d$ holds and both $FindMappingAD(M)$ and $Update(M,(u_{i_k},u_{j_{k,p}}))$ work in polynomial time per execution, the total time complexity is $O^{\ast}(1.619^{d})$. \end{proof} \section{A randomized algorithm for the case of $h(P)=1$ and $h(T)=2$} \label{low-height} Finally, we consider the case of $h(P)=1$ and $h(T)=2$, denoted by {\bf IncH2}. This problem variant is NP-hard according to Corollary~\ref{corollary:KM_hardness}. We assume w.l.o.g. that the roots of $P$ and $T$ have the same unique label and thus they must match in any inclusion mapping. Let $U=\{u_1,\ldots,u_d\}$ be the set of children of $r(P)$. Let $v_1,\ldots,v_g$ be the children of $r(T)$, and let $v_{i,1},\ldots,v_{i,n_i}$ be the children of each $v_i$. First, we assume that $\ell(u_i) \neq \ell(u_j)$ holds for all $i \neq j$, where $\ell(v)$ denotes the label of $v$. This special case is denoted by {\bf IncH2U}. Recall that {\bf IncH2U} remains NP-hard from the condition of $occ(P)=1$ of Corollary~\ref{corollary:KM_hardness}. {\bf IncH2U} can be solved by a reduction to CNF SAT, different from the one mentioned in Section~\ref{sec:poly}. (In fact, it can be considered as an inverse reduction of the one originally used to prove the NP-hardness of unordered tree inclusion by Kilpel\"{a}inen and Mannila~\cite{kilpelainen1995ordered}.) For each $u_i$, we define $X^{POS}_i$ and $X^{NEG}_i$ by \begin{eqnarray*} X^{POS}_i & = & \{ x_j \,|\, \ell(u_i)=\ell(v_j) \},\\ X^{NEG}_i & = & \{ x_j \,|\, (\exists v_{j,k} \in Chd(v_j))(\ell(u_i)=\ell(v_{j,k})) \}. \end{eqnarray*} For each $u_i$, we construct a clause $C_i$ by $$ C_i = \left( \bigvee_{x_j \in X^{POS}_i} x_j \right) \lor \left( \bigvee_{x_j \in X^{NEG}_i} \overline{x_j} \right). $$ Then, the resulting SAT instance is $\{C_1,\ldots,C_d\}$. Intuitively, $x_j=1$ corresponds to the case where $u_i$ is mapped to $v_j$, where $\ell(u_i)=\ell(v_j)$. Of course, multiple $v_j$s may correspond to $u_i$. However, it is enough to consider an arbitrary one. \begin{proposition} {\bf IncH2U} can be solved in $O^{\ast}(1.234^d)$ time. \end{proposition} \begin{proof} First we prove the correctness of the reduction, where we assume w.l.o.g. that $r(P)$ is mapped to $r(T)$. 
Suppose that there exists an inclusion mapping $\phi$ from $V(P)$ to $V(T)$. Then, we let $x_j=1$ if $\phi(u_i)=v_j$, and $x_j=0$ if $\phi(u_i)=v_{j,k}$. An arbitrary assignment can be done on each of the other variables. Then, we can see that there is no inconsistency on the resulting assignment and all $C_i$s are satisfied. Conversely, suppose that there exists a satisfying assignment on $C_i$s. We let $\phi(u_i)=v_j$ if $x_j=1$ and $\ell(u_i)=\ell(v_j)$. Otherwise, we can let $\phi(u_i)=v_{j,k}$ for some $v_j$ such that $x_j=0$ and $\ell(u_i)=\ell(v_{j,k})$. This $\phi$ gives an inclusion mapping. Next we consider the time complexity. In order to solve the satisfiability instance, we use Yamamoto's $O^{\ast}(1.234^{d})$-time algorithm for SAT with $d$~clauses~\cite{yamamoto2005sat}. Since the other parts can be done in polynomial time, we have the proposition. \end{proof} In order to solve {\bf IncH2}, we combine two algorithms: (A1)~a random sampling-based algorithm; and (A2)~a modified version of the $O(d 2^d mn^2)$-time algorithm in Section~\ref{sec:improved}. For (A1), we employ the \emph{color-coding} technique \cite{alon1995color}. Let $d_0$ be the number of $u_i$s having unique labels, and let $d_1 \leq d_2 \leq \cdots \leq d_h$ be the multiplicities of the other labels in $U$. Define $\alpha = 1 - \frac{d_0}{d}$. Note that $d_0 + d_1 + \cdots + d_h = d$ and $d - d_0 = \alpha d$ hold. For each label $a_i$ with $d_i \geq 2$ (i.e., $i>0$), we relabel the nodes in $P$ having label $a_i$ by $a^1_i,a^2_i,\ldots,a^{d_i}_i$ in an arbitrary order. For each node $v$ in $T$ having label $a_i$, we assign $a^j_i$ ($j=1,\ldots,d_i$) to $v$ uniformly at random, and then apply the SAT-based algorithm for {\bf IncH2U}. Let $M$ be the set of pairs in an inclusion mapping from $P$ to $T$. If all nodes of $T$ appearing in $M$ have different labels, a valid inclusion mapping can be obtained. This success probability is given by \begin{eqnarray*} {\frac {d_1 !}{d_1^{d_1}}} \cdot {\frac {d_2 !}{d_2^{d_2}}} \cdots {\frac {d_h !}{d_h^{d_h}}} ~\geq~ {\frac {(\alpha d)!}{(\alpha d)^{(\alpha d)}}} . \end{eqnarray*} This inequality can be proved by repeatedly applying $$ {\frac {d_1 !}{d_1^{d_1}}} \cdot {\frac {d_2 !}{d_2^{d_2}}} ~\geq~ {\frac {(d_1+d_2)!}{(d_1+d_2)^{d_1+d_2}}}, $$ which is seen from $$ {\frac {(d_1+d_2)^{d_1+d_2}}{d_1^{d_1} d_2^{d_2}}} ~\geq~ \left( \begin{array}{c} d_1+d_2\\ d_1 \end{array} \right) ~=~ {\frac {(d_1+d_2)!}{d_1! d_2!}}. $$ Since ${\frac {k!}{k^k}} \geq e^{-k}$ holds for sufficiently large $k$, the success probability is at least $e^{-\alpha d}$. Therefore, if we repeat the random sampling procedure $e^{\alpha d}$ times, the failure probability is at most $(1-e^{-{\alpha d}})^{e^{\alpha d}} \leq e^{-1} < {\frac 1 2}$ because $\ln\left[ (1-{\frac 1 x})^x \right] = x\ln(1-{\frac 1 x}) \leq x (-{\frac 1 x}) = -1 = \ln(e^{-1})$ holds for any $x > 1$. If we repeat the procedure $k (\log n) e^{\alpha d}$ times where $k$ is any positive constant (i.e., the total time complexity is $O^{\ast}(1.234^d \cdot e^{\alpha d})$), the failure probability is at most ${\frac 1 {n^k}}$. For (A2), we modify the $O(d 2^d mn^2)$-time algorithm as follows. Recall that if there exist labels with multiplicity more than one, $S(v,v_i)$ is a multi-set. In order to represent a multi-set, we memorize the multiplicity of each label. Then, the number of distinct multi-sets is given by \begin{eqnarray*} N(d_0,\ldots,d_h) & = & 2^{d_0} \cdot \prod_{l=1}^h (d_l+1) . 
\end{eqnarray*} Since $d_i+1 \leq 3^{\lceil d_i/2 \rceil}$ holds for any $d_i \geq 2$, this number is bounded as follows: \begin{eqnarray*} N(d_0,\ldots,d_h) & \leq & 2^{d_0} \cdot 3^{\lceil (d-d_0)/2 \rceil}. \end{eqnarray*} Then, the time complexity of (A2) is $O^{\ast}(2^{(1-\alpha)d} \cdot 3^{(\alpha/2) d})$. Since we can select the minimum of the time complexities of (A1) and (A2), the resulting time complexity is given by \begin{eqnarray*} \max_{\alpha} \min(O^{\ast}(1.234^d \cdot e^{\alpha d}),O^{\ast}(2^{(1-\alpha)d} \cdot 3^{(\alpha/2) d})) . \end{eqnarray*} Since $1.234^d \cdot e^{\alpha d}$ and $2^{(1-\alpha)d} \cdot 3^{(\alpha/2)d}$ are increasing and decreasing functions of $\alpha$, respectively, this maximum is attained when $1.234 \cdot e^{\alpha} = 2^{(1-\alpha)} \cdot 3^{(\alpha/2)}$. By numerical calculation, we have $\alpha \approx 0.42217$, from which the following theorem follows. \begin{theorem} {\bf IncH2} can be solved in randomized $O^{\ast}(1.883^d)$ time with probability at least $1-{\frac 1 {n^k}}$, where $k$ is any positive constant. \label{thm:low-height} \end{theorem} The above algorithm may be derandomized by using $k$-perfect hash families as in~\cite{alon1995color}. However, since the construction of a $k$-perfect hash family has a high complexity, the resulting algorithm would have a time complexity much worse than $O^{\ast}(2^d)$. \section{Concluding remarks} We have presented a new algorithm for unordered tree inclusion running in $O^{\ast}(2^{d})$ time, thus reducing the exponent~$2d$ in the previously best known bound on the time complexity~\cite{kilpelainen1995ordered} to~$d$. However, the $2^{d}$-factor may not be optimal. For example, our randomized algorithm for the special case of $h(P) = 1$ and $h(T) = 2$ runs in $O^{\ast}(1.883^d)$ time, which suggests that further improvements could be possible. However, we were unable to obtain an $O^{\ast}((2-\varepsilon)^d)$-time algorithm for any constant $\varepsilon > 0$, even when $h(P) = h(T) = 2$. Similarly, we could not obtain an $O^{\ast}((2-\varepsilon)^d)$-time algorithm for any constant $\varepsilon > 0$ when $occ(P,T) = 4$. Therefore, to develop an $O^{\ast}((2-\varepsilon)^d)$-time algorithm for unordered tree inclusion or to prove an $\Omega(2^d)$ lower bound (e.g., using recent techniques from~\cite{abbound2016subtree,abbound2014alignment,bringmann2018editlb} for proving lower bounds on various tree and sequence matching problems) is left as an open problem. Future work includes generalizing our techniques and applying them to the \emph{extended tree inclusion problem} mentioned in Section~\ref{sec:applications}. This problem variant was introduced by Mori et al.~\cite{MTJHTA_15} as a way to make unordered tree inclusion more useful for practical pattern matching applications. It asks for an optimal connected subgraph of~$T$ (if any) that can be obtained by applying node insertion operations as well as node relabeling operations to~$P$ while allowing non-uniform costs to be assigned to the different node operations. It was shown in \cite{MTJHTA_15} that the unrooted case can be solved in $O^{\ast}(2^{2d})$ time, and a further extension of the problem that also allows at most~$K$ node deletion operations can be solved in $O^{\ast}((ed)^K K^{1/2} 2^{2(dK+d-K)})$ time, where $e$ is the base of the natural logarithm. \end{document}
In the diagram, rectangle $PQRS$ is divided into three identical squares. If $PQRS$ has perimeter 120 cm, what is its area, in square centimeters? [asy] size(4cm); pair p = (0, 1); pair q = (3, 1); pair r = (3, 0); pair s = (0, 0); draw(p--q--r--s--cycle); draw(shift(1) * (p--s)); draw(shift(2) * (p--s)); label("$P$", p, NW); label("$Q$", q, NE); label("$R$", r, SE); label("$S$", s, SW); [/asy] Let the side length of each of the squares be $x$. [asy] size(4cm); pair p = (0, 1); pair q = (3, 1); pair r = (3, 0); pair s = (0, 0); draw(p--q--r--s--cycle); draw(shift(1) * (p--s)); draw(shift(2) * (p--s)); label("$P$", p, NW); label("$Q$", q, NE); label("$R$", r, SE); label("$S$", s, SW); // x labels pair v = (0, 0.5); pair h = (0.5, 0); int i; for(i = 0; i < 3; ++i) {label("$x$", shift(i) * h, S); label("$x$", shift(i, 1) * h, N);} label("$x$", v, W); label("$x$", shift(3) * v, E); [/asy] Then the perimeter of $PQRS$ equals $8x$, so $8x = 120$ cm or $x = 15$ cm. Since $PQRS$ is made up of three squares of side length 15 cm, then its area is $3(15)^2 = 3(225) = \boxed{675}$ square centimeters.
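As a quick check of the arithmetic (an illustrative snippet, not part of the original solution):

x = 120 / 8        # the perimeter of PQRS is 8x, so 8x = 120
area = 3 * x ** 2  # three squares of side length x
print(x, area)     # 15.0 675.0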
Hexagram A hexagram (Greek) or sexagram (Latin) is a six-pointed geometric star figure with the Schläfli symbol {6/2}, 2{3}, or {{3}}. Since there are no true regular continuous hexagrams, the term is instead used to refer to a compound figure of two equilateral triangles. The intersection is a regular hexagon. The regular hexagram is a regular polygonal figure with six edges and six vertices, Schläfli symbol a{6}, {6/2}, 2{3} or {{3}}, dihedral (D6) symmetry and an internal angle of 60°; it is a star, compound, cyclic, equilateral, isogonal and isotoxal figure, and is its own dual. The hexagram is part of an infinite series of shapes which are compounds of two n-dimensional simplices. In three dimensions, the analogous compound is the stellated octahedron, and in four dimensions the compound of two 5-cells is obtained. It has been historically used in various religious and cultural contexts and as a decorative motif. The symbol was used as a decorative motif in medieval Christian churches and Jewish synagogues.[1] The hexagram is thought to have originated in Buddhism and was also used by Hindus. It was used by Muslims as a mystic symbol in the medieval period, known as the Seal of Solomon, depicted as either a hexagram or pentagram.[2][3] Group theory In mathematics, the root system for the simple Lie group G2 is in the form of a hexagram, with six long roots and six short roots. Construction by compass and a straight edge A six-pointed star, like a regular hexagon, can be created using a compass and a straight edge: • Make a circle of any size with the compass. • Without changing the radius of the compass, set its pivot on the circle's circumference, and find one of the two points where a new circle would intersect the first circle. • With the pivot on the last point found, similarly find a third point on the circumference, and repeat until six such points have been marked. • With a straight edge, join alternate points on the circumference to form two overlapping equilateral triangles. Construction by linear algebra A regular hexagram can be constructed by orthographically projecting any cube onto a plane through three vertices that are all adjacent to the same vertex. The twelve midpoints of the edges of the cube form a hexagram. For example, consider the projection of the unit cube with vertices at the eight possible binary vectors in three dimensions $(0,0,0),(1,0,0),(0,1,0),(0,0,1),(1,1,0),(1,0,1),(0,1,1),(1,1,1)$ onto the plane $x+y+z=1$. The edge midpoints are $(0,0,1/2),(0,1,1/2),(1,1,1/2)$, and all points resulting from these by applying a permutation to their entries. These 12 points project to a hexagram: six vertices around the outer hexagon and six on the inner. Origins and Shape As a derivative of two overlapping triangles, the hexagram may have developed independently among different peoples with no direct correlation to one another. The oldest known depiction of a six-pointed star (dating back to the 3rd millennium BC) was excavated in the Ashtarak burial mound in "Nerkin Naver" (in Historic Armenia). The mandala symbol called yantra, found on ancient South Indian Hindu temples, is a geometric toolset that incorporates hexagrams into its framework.
It symbolizes the nara-narayana, or perfect meditative state of balance achieved between Man and God, and if maintained, results in "moksha," or "nirvana" (release from the bounds of the earthly world and its material trappings). Some researchers have theorized that the hexagram represents the astrological chart at the time of David's birth or anointment as king. The hexagram is also known as the "King's Star" in astrological circles. In antique papyri, pentagrams, together with stars and other signs, are frequently found on amulets bearing the Jewish names of God, and used to guard against fever and other diseases. Curiously the hexagram is not found among these signs. In the Greek Magical Papyri (Wessely, l.c. pp. 31, 112) at Paris and London there are 22 signs side by side, and a circle with twelve signs, but neither a pentagram nor a hexagram. Religious usage Indian religions Six-pointed stars have also been found in cosmological diagrams in Hinduism, Buddhism, and Jainism. The reasons behind this symbol's common appearance in Indic religions and the West are unknown. One possibility is that they have a common origin. The other possibility is that artists and religious people from several cultures independently created the hexagram shape, which is a relatively simple geometric design. Within Indic lore, the shape is generally understood to consist of two triangles—one pointed up and the other down—locked in harmonious embrace. The two components are called "Om" and the "Hrim" in Sanskrit, and symbolize man's position between earth and sky. The downward triangle symbolizes Shakti, the sacred embodiment of femininity, and the upward triangle symbolizes Shiva, or Agni Tattva, representing the focused aspects of masculinity. The mystical union of the two triangles represents Creation, occurring through the divine union of male and female. The two locked triangles are also known as 'Shanmukha'—the six-faced, representing the six faces of Shiva & Shakti's progeny Kartikeya. This symbol is also a part of several yantras and has deep significance in Hindu ritual worship and history. In Buddhism, some old versions of the Bardo Thodol, also known as The "Tibetan Book of the Dead", contain a hexagram with a swastika inside. It was made up by the publishers for this particular publication. In Tibetan, it is called the "origin of phenomenon" (chos-kyi 'byung-gnas). It is especially connected with Vajrayogini, and forms the center part of her mandala. In reality, it is in three dimensions, not two, although it may be portrayed either way. The Shatkona is a symbol used in Hindu yantra that represents the union of both the male and feminine form. More specifically it is supposed to represent Purusha (the supreme being), and Prakriti (mother nature, or causal matter). Often this is represented as Shiva - Shakti.[4] Anahata or heart chakra is the fourth primary chakra, according to Hindu Yogic, Shakta and Buddhist Tantric traditions. In Sanskrit, anahata means "unhurt, unstruck, and unbeaten". Anahata Nad refers to the Vedic concept of unstruck sound (the sound of the celestial realm). Anahata is associated with balance, calmness, and serenity. Judaism The Magen David is a generally recognized symbol of Judaism and Jewish identity and is also known colloquially as the Jewish Star or "Star of David." 
Its usage as a sign of Jewish identity began in the Middle Ages, though its religious usage began earlier, with the current earliest archeological evidence being a stone bearing the shield from the arch of a 3–4th century synagogue in the Galilee.[5] Christianity The first and the most important Armenian Cathedral of Etchmiadzin (303 AD, built by the founder of Christianity in Armenia) is decorated with many types of ornamented hexagrams and so is the tomb of an Armenian prince of the Hasan-Jalalyan dynasty of Khachen (1214 AD) in the Gandzasar Church of Artsakh. The hexagram may be found in some Churches and stained-glass windows. In Christianity, it is sometimes called the star of creation. A very early example, noted by Nikolaus Pevsner, can be found in Winchester Cathedral, England in one of the canopies of the choir stalls, circa 1308.[6] Latter-day Saints (Mormons) The Star of David is also used less prominently by the Church of Jesus Christ of Latter-day Saints, in the temples and in architecture. It symbolizes God reaching down to man and man reaching up to God, the union of Heaven and earth. It may also symbolize the Tribes of Israel and friendship and their affinity towards the Jewish people. Additionally, it is sometimes used to symbolize the quorum of the twelve apostles, as in Revelation 12, wherein the Church of God is symbolized by a woman wearing a crown of twelve stars. It is also sometimes used to symbolize the Big Dipper, which points to the North Star, a symbol of Jesus Christ. Islam The symbol is known in Arabic as Khātem Sulaymān (Seal of Solomon; خاتم سليمان) or Najmat Dāwūd (Star of David; نجمة داوود). The "Seal of Solomon" may also be represented by a five-pointed star or pentagram. In the Qur'an, it is written that David and King Solomon (Arabic, Suliman or Sulayman) were prophets and kings, and are figures revered by Muslims. The Medieval pre-Ottoman Hanafi Anatolian beyliks of the Karamanids and Jandarids used the star on their flag.[7] The symbol also used on Hayreddin Barbarossa flag. Today the six-pointed star can be found in mosques and on other Arabic and Islamic artifacts. • Coin minted in the Emirate of Sicily during the reign of Al-Mustansir Billah (11th century CE) • 1204 coin minted in Aleppo by Az-Zahir Ghazi • Hexagram at Humayun's Tomb, Delhi, India (late 16th century) • Hexagram on obverse of Moroccan 4 Falus coin (1873) • Hexagram on the Minaret of Arasta Mosque, Prizren, Kosovo • Morocco Fez Embroidery Horse Cover • Hexagram on the flag of Hayreddin Barbarossa • Hexagram on the flag of Karamanid beylik • The Gates from the tomb of Mahmud of Ghazni, taken to the Somnath temple • A Cirebonese cotton banner with a Chinese influenced lion with Arabic calligraphy with hexagrams; (dated to the late 18th or the 19th century) Usage in heraldry In heraldry and vexillology, a hexagram is a fairly common charge employed, though it is rarely called by this name. In Germanic regions it is known simply as a "star." In English and French heraldry, however, the hexagram is known as a "mullet of six points," where mullet is a French term for a spur rowel which is shown with five pointed arms by default unless otherwise specified. In Albanian heraldry and vexillology, hexagram has been used since classical antiquity and it is commonly referred to as sixagram. The coat of arms of the House of Kastrioti depicts the hexagram on a pile argent over the double headed eagle. 
Usage in theosophy The Star of David is used in the seal and the emblem of the Theosophical Society (founded in 1875). Although it is more pronounced, it is used along with other religious symbols. These include the Swastika, the Ankh, the Aum, and the Ouroboros. The star of David is also known as the Seal of Solomon that was its original name until around 50 years ago. Usage in occultism The hexagram, like the pentagram, was and is used in practices of the occult and ceremonial magic and is attributed to the 7 "old" planets outlined in astrology. The six-pointed star is commonly used both as a talisman[8] and for conjuring spirits and spiritual forces in diverse forms of occult magic. In the book The History and Practice of Magic, Vol. 2, the six-pointed star is called the talisman of Saturn and it is also referred to as the Seal of Solomon.[9] Details are given in this book on how to make these symbols and the materials to use. Traditionally, the Hexagram can be seen as the combination of the four elements. Fire is symbolized as an upwards pointing triangle, while Air (its elemental opposite) is also an upwards pointing triangle, but with a horizontal line through its center. Water is symbolized as a downwards pointing triangle, while Earth (its elemental opposite) is also a downwards pointing triangle, but with a horizontal line through its center. When you combine the symbols of Fire and Water, a hexagram (six-pointed star) is created. The same follows for when you combine the symbols of Air and Earth. When you combine both hexagrams, you get the double-hexagram. Thus, a combination of the elements is created.[10] In Rosicrucian and Hermetic Magic, the seven Traditional Planets correspond with the angles and the center of the Hexagram as follows, in the same patterns as they appear on the Sephiroth and on the Tree of Life. Saturn, although formally attributed to the Sephira of Binah, within this frame work nonetheless occupies the position of Daath.[11] In alchemy, the two triangles represent the reconciliation of the opposites of fire and water.[12] The hexagram is used as a sign for quintessence, the fifth element. Usage in Freemasonry "The interlacing triangles or deltas symbolize the union of the two principles or forces, the active and passive, male and female, pervading the universe ... The two triangles, one white and the other black, interlacing, typify the mingling of apparent opposites in nature, darkness and light, error and truth, ignorance and wisdom, evil and good, throughout human life." – Albert G. Mackey: Encyclopedia of Freemasonry The hexagram is featured within and on the outside of many Masonic temples as a decoration. It may have been found within the structures of King Solomon's temple, from which Freemasons are inspired in their philosophies and studies. Like many other symbols in Freemasonry, the deciphering of the hexagram is non-dogmatic and left to the interpretation of the individual. Other uses The Raëlian symbol with the swastika (left) and the alternative spiral version (right) Flags • The flag of Australia had a six pointed star to represent the six federal states from 1901 to 1908. • The Ulster Banner flag of Northern Ireland, used from 1953 to 1972. The six pointed star, representing the six counties that make up Northern Ireland. The star of the Ulster Banner is not the compound of two equilateral triangles. The intersection is not a regular hexagon. • A flag used by rebels during the Whiskey Insurrection in South-Western Pennsylvania, 1794. 
• A hexagram appears on the Dardania Flag, proposed for Kosovo by the Democratic League of Kosovo. • From 1914 to 1960, the flag of Nigeria depicted a green hexagram surrounding a crown, with the white word "Nigeria" under it, on a red disc. • The flag of Israel has a blue hexagram in the middle. Other symbolic uses • Six-point interlocking triangles have been used for thousands of years as an indication that a sword was made, and "proved", in the Damascus area of the Middle East. Still today, it is a required "proved" mark on all official UK and United States military swords, though the blades themselves no longer come from the Middle East. • In southern Germany the hexagram can be found as part of tavern anchors. It is a symbol for the tapping of beer and a sign of the brewers' guild. In German this is called "Bierstern" (beer star) or "Brauerstern" (brewer's star). • A six-point star is used as an identifying mark of the Folk Nation alliance of US street gangs. • The Indian sage and seer Sri Aurobindo used it—e.g. on the cover of his books—as a symbol of the aspiration of humanity calling to the Divine to descend into life (the triangle with the point at the top), and the descent of the Divine into the earth's atmosphere and all individuals in response to that calling (the triangle with the point at the bottom). (This was explained by the Mother, his spiritual partner, in her 14-volume Agenda, and elsewhere by Sri Aurobindo in his writings.) Man-made and natural occurrences • The main runways and taxiways of Heathrow Airport were arranged roughly in the shape of a hexagram.[13] • A hexagram in a circle is incorporated prominently in the supports of Worthing railway station's platform 2 canopy (UK). • An extremely large, free-standing wood hexagram stands in the central park of the Municipality of El Tejar, Guatemala. Additionally, every year at Christmastime the residents of El Tejar erect a giant fake Christmas tree in front of their municipal building, with a hexagram sitting at its peak. Unicode • In Unicode, the "Star of David" symbol ✡ is encoded at U+2721. Other hexagrams The figure {6/3} can be shown as a compound of three digons. Other hexagrams can be constructed as a continuous path. Regular compounds include {6/2}=2{3} and {6/3}=3{2}; other forms include a unicursal hexagram with D2 symmetry and isogonal and isotoxal hexagrams with D3 symmetry. See also • Pentagram • Star of Bethlehem • Star of David • Seal of Solomon • Heptagram • The Thelemic Unicursal hexagram • Pascal's mystic hexagram • Hexagram (I Ching) • Sacred Geometry Footnotes 1. Scholem 1949, p. 244: "It is not to be found at all in medieval synagogues or on medieval ceremonial objects, although it has been found in quite a number of medieval Christian churches, again not as a Christian symbol but only as a decorative motif. The appearance of the symbol in Christian churches long before its appearance in our synagogues should warn the overzealous interpreters." 2. Scholem 1949, p. 246: "In the beginning these designs had no special names or terms, and it is only in the Middle Ages that definite names began to be given to some of those most widely used. There is very little doubt that terms like these first became popular among the Muslims, who showed a tremendous interest in all the occult sciences, arranging and ordering them systematically long before the Practical Cabalists thought of doing so.
It is not to be wondered at, therefore, that for a long time both the five-pointed and the six-pointed stars were called by one name, the "Seal of Solomon," and that no distinction was made between them. This name is obviously related to the Jewish legend of Solomon's dominion over the spirits, and of his ring with the Ineffable Name engraved on it. These legends expanded and proliferated in a marked fashion during the Middle Ages, among Jews and Muslims alike, but the name, "Seal of Solomon," apparently originated with the Muslims." 3. Leonora Leet, "The Hexagram and Hebraic Sacred Science" in The Secret Doctrine of the Kabbalah, 1999, pp. 212-217. 4. sivasakti.com: Introduction to Yantra 5. "King Solomon's Seal", MFA. 6. Buildings of England: Hampshire and the North (now second edition), ISBN 978-0-300-12084-4, p. 604. 7. The Muslim Empires of the Ottomans, Safavids, and Mughals, by Stephen F. Dale, 2009 8. p. 299 (and throughout) of The Complete Golden Dawn by Israel Regardie. ISBN 978-0875426631 9. "The History and Practice of Magic" (Secaucus, NJ: University Books, published by arrangement with Lyle Stewart, 1979), Vol. II, p. 304 10. pp. 315-316 of The Wicca Bible by Ann-Marie Gallagher. ISBN 978-1-84181-250-2. Same information also found in many other books. 11. p. 31 of The Ritual Magic Manual: A Complete Course in Practical Magic by David Griffin. ISBN 978-0965840897 12. Allgemein (February 3, 2020). "Hexagram – The mystical symbol of the hexagram". Hermetic Academy. Retrieved 10 August 2021. 13. bbc.co.uk and see File:Aerial photograph of Heathrow Airport, 1955.jpg References • Scholem, Gershom (September 1949). "The Curious History of the Six-Pointed Star: How the "Magen David" Became the Jewish Symbol". Commentary Magazine. Retrieved 2018-07-10. • Graham, Dr. O.J. The Six-Pointed Star: Its Origin and Usage. 4th ed. Toronto: The Free Press 777, 2001. ISBN 0-9689383-0-2 • Grünbaum, B. and G. C. Shephard; Tilings and Patterns, New York: W. H. Freeman & Co., (1987), ISBN 0-7167-1193-1. • Grünbaum, B.; Polyhedra with Hollow Faces, Proc of NATO-ASI Conference on Polytopes ... etc. (Toronto 1993), ed T. Bisztriczky et al., Kluwer Academic (1994) pp. 43–70. • Wessely, l.c. pp. 31, 112 External links Wikimedia Commons has media related to Hexagrams.
• Hexagram (MathWorld) • The Archetypal Mandala of India • Thesis from Munich University on hexagram as brewing symbol
Wikipedia
Dirac comb In mathematics, a Dirac comb (also known as sha function, impulse train or sampling function) is a periodic function with the formula $\operatorname {\text{Ш}} _{\ T}(t)\ :=\sum _{k=-\infty }^{\infty }\delta (t-kT)$ for some given period $T$.[1] Here t is a real variable and the sum extends over all integers k. The Dirac delta function $\delta $ and the Dirac comb are tempered distributions.[2][3] The graph of the function resembles a comb (with the $\delta $s as the comb's teeth), hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function. The symbol $\operatorname {\text{Ш}} \,\,(t)$, where the period is omitted, represents a Dirac comb of unit period. This implies[1] $\operatorname {\text{Ш}} _{\ T}(t)\ ={\frac {1}{T}}\operatorname {\text{Ш}} \ \!\!\!\left({\frac {t}{T}}\right).$ Because the Dirac comb function is periodic, it can be represented as a Fourier series based on the Dirichlet kernel:[1] $\operatorname {\text{Ш}} _{\ T}(t)={\frac {1}{T}}\sum _{n=-\infty }^{\infty }e^{i2\pi n{\frac {t}{T}}}.$ The Dirac comb function allows one to represent both continuous and discrete phenomena, such as sampling and aliasing, in a single framework of continuous Fourier analysis on tempered distributions, without any reference to Fourier series. The Fourier transform of a Dirac comb is another Dirac comb. Owing to the Convolution Theorem on tempered distributions, which turns out to be the Poisson summation formula, the Dirac comb allows, in signal processing, modelling sampling by multiplication with it, and modelling periodization by convolution with it.[4] Dirac-comb identity The Dirac comb can be constructed in two ways, either by using the comb operator (performing sampling) applied to the function that is constantly $1$, or, alternatively, by using the rep operator (performing periodization) applied to the Dirac delta $\delta $. Formally, this yields (Woodward 1953; Brandwood 2003) $\operatorname {comb} _{T}\{1\}=\operatorname {\text{Ш}} _{T}=\operatorname {rep} _{T}\{\delta \},$ where $\operatorname {comb} _{T}\{f(t)\}\triangleq \sum _{k=-\infty }^{\infty }\,f(kT)\,\delta (t-kT)$ and $\operatorname {rep} _{T}\{g(t)\}\triangleq \sum _{k=-\infty }^{\infty }\,g(t-kT).$ In signal processing, this property on one hand allows sampling a function $f(t)$ by multiplication with $\operatorname {\text{Ш}} _{\ T}$, and on the other hand it also allows the periodization of $f(t)$ by convolution with $\operatorname {\text{Ш}} _{T}$ (Bracewell 1986). The Dirac comb identity is a particular case of the Convolution Theorem for tempered distributions. Scaling The scaling property of the Dirac comb follows from the properties of the Dirac delta function. Since $\delta (t)={\frac {1}{a}}\ \delta \!\left({\frac {t}{a}}\right)$[5] for positive real numbers $a$, it follows that: $\operatorname {\text{Ш}} _{\ T}\left(t\right)={\frac {1}{T}}\operatorname {\text{Ш}} \,\!\left({\frac {t}{T}}\right),$ $\operatorname {\text{Ш}} _{\ aT}\left(t\right)={\frac {1}{aT}}\operatorname {\text{Ш}} \,\!\left({\frac {t}{aT}}\right)={\frac {1}{a}}\operatorname {\text{Ш}} _{\ T}\!\!\left({\frac {t}{a}}\right).$ Note that requiring positive scaling numbers $a$ instead of negative ones is not a restriction because the negative sign would only reverse the order of the summation within $\operatorname {\text{Ш}} _{\ T}$, which does not affect the result.
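To make the comb and rep operators of the Dirac-comb identity above a little more tangible, the following Python sketch (our own illustration, not part of the article) discretizes both operators on a finite grid, where a Dirac delta is approximated by a single bin of height 1/dx, and checks the identity comb_T{1} = rep_T{δ} numerically. The grid spacing, the observation window and all helper names are assumptions of the sketch.

```python
import numpy as np

# Discretized sketch of the Dirac-comb identity comb_T{1} = Ш_T = rep_T{delta}.
# On a grid of spacing dx, a Dirac delta is modelled as one bin of height 1/dx,
# so that its "integral" (sum times dx) equals 1.

dx = 0.01
T = 1.0
t = np.arange(0.0, 4.0, dx)          # finite observation window [0, 4)
step = int(round(T / dx))            # grid points per period

def comb_T(f_vals):
    """Sampling operator: keep f only at the comb nodes t = kT, as weighted impulses."""
    out = np.zeros_like(f_vals)
    out[::step] = f_vals[::step] / dx        # impulse of weight f(kT), bin height f(kT)/dx
    return out

def rep_T(g_vals):
    """Periodization operator: sum of shifted copies g(t - kT), circular on the window."""
    return sum(np.roll(g_vals, k * step) for k in range(len(t) // step))

delta = np.zeros_like(t)
delta[0] = 1.0 / dx                  # discretized Dirac delta at t = 0
ones = np.ones_like(t)

lhs = comb_T(ones)                   # comb_T{1}
rhs = rep_T(delta)                   # rep_T{delta}
assert np.allclose(lhs, rhs)         # both equal the discretized Dirac comb Ш_T
print("impulse positions:", t[lhs > 0])      # -> 0, 1, 2, 3
```

The same discrete picture also illustrates the sampling use of the comb: multiplying an arbitrary array by `comb_T` keeps only the values at the nodes t = kT, weighted as impulses.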
Fourier series See also: Dirichlet kernel It is clear that $\operatorname {\text{Ш}} _{\ T}(t)$ is periodic with period $T$. That is, $\operatorname {\text{Ш}} _{\ T}(t+T)=\operatorname {\text{Ш}} _{\ T}(t)$ for all t. The complex Fourier series for such a periodic function is $\operatorname {\text{Ш}} _{\ T}(t)=\sum _{n=-\infty }^{+\infty }c_{n}e^{i2\pi n{\frac {t}{T}}},$ where the Fourier coefficients are (symbolically) ${\begin{aligned}c_{n}&={\frac {1}{T}}\int _{t_{0}}^{t_{0}+T}\operatorname {\text{Ш}} _{\ T}(t)e^{-i2\pi n{\frac {t}{T}}}\,dt\quad (-\infty <t_{0}<+\infty )\\&={\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}\operatorname {\text{Ш}} _{\ T}(t)e^{-i2\pi n{\frac {t}{T}}}\,dt\\&={\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}\delta (t)e^{-i2\pi n{\frac {t}{T}}}\,dt\\&={\frac {1}{T}}e^{-i2\pi n{\frac {0}{T}}}\\&={\frac {1}{T}}.\end{aligned}}$ All Fourier coefficients are 1/T resulting in $\operatorname {\text{Ш}} _{\ T}(t)={\frac {1}{T}}\sum _{n=-\infty }^{\infty }\!\!e^{i2\pi n{\frac {t}{T}}}.$ When the period is one unit, this simplifies to $\operatorname {\text{Ш}} \ \!(x)=\sum _{n=-\infty }^{\infty }\!\!e^{i2\pi nx}.$ Remark: Most rigorously, Riemann or Lebesgue integration over any products including a Dirac delta function yields zero. For this reason, the integration above (Fourier series coefficients determination) must be understood "in the generalized functions sense". It means that, instead of using the characteristic function of an interval applied to the Dirac comb, one uses a so-called Lighthill unitary function as cutout function, see Lighthill 1958, p.62, Theorem 22 for details. Fourier transform The Fourier transform of a Dirac comb is also a Dirac comb. For the Fourier transform ${\mathcal {F}}$ expressed in frequency domain (Hz) the Dirac comb $\operatorname {\text{Ш}} _{T}$ of period $T$ transforms into a rescaled Dirac comb of period $1/T,$ i.e. for ${\mathcal {F}}\left[f\right](\xi )=\int _{-\infty }^{\infty }dtf(t)e^{-2\pi i\xi t},$ ${\mathcal {F}}\left[\operatorname {\text{Ш}} _{T}\right](\xi )={\frac {1}{T}}\sum _{k=-\infty }^{\infty }\delta (\xi -k{\frac {1}{T}})={\frac {1}{T}}\operatorname {\text{Ш}} _{\ {\frac {1}{T}}}(\xi )~$ is proportional to another Dirac comb, but with period $1/T$ in frequency domain (radian/s). The Dirac comb $\operatorname {\text{Ш}} $ of unit period $T=1$ is thus an eigenfunction of ${\mathcal {F}}$ to the eigenvalue $1.$ This result can be established (Bracewell 1986) by considering the respective Fourier transforms $S_{\tau }(\xi )={\mathcal {F}}[s_{\tau }](\xi )$ of the family of functions $s_{\tau }(x)$ defined by $s_{\tau }(x)=\tau ^{-1}e^{-\pi \tau ^{2}x^{2}}\sum _{n=-\infty }^{\infty }e^{-\pi \tau ^{-2}(x-n)^{2}}.$ Since $s_{\tau }(x)$ is a convergent series of Gaussian functions, and Gaussians transform into Gaussians, each of their respective Fourier transforms $S_{\tau }(\xi )$ also results in a series of Gaussians, and explicit calculation establishes that $S_{\tau }(\xi )=\tau ^{-1}\sum _{m=-\infty }^{\infty }e^{-\pi \tau ^{2}m^{2}}e^{-\pi \tau ^{-2}(\xi -m)^{2}}.$ The functions $s_{\tau }(x)$ and $S_{\tau }(\xi )$ are thus each resembling a periodic function consisting of a series of equidistant Gaussian spikes $\tau ^{-1}e^{-\pi \tau ^{-2}(x-n)^{2}}$ and $\tau ^{-1}e^{-\pi \tau ^{-2}(\xi -m)^{2}}$ whose respective "heights" (pre-factors) are determined by slowly decreasing Gaussian envelope functions which drop to zero at infinity. 
Note that in the limit $\tau \rightarrow 0$ each Gaussian spike becomes an infinitely sharp Dirac impulse centered respectively at $x=n$ and $\xi =m$ for each respective $n$ and $m$, and hence also all pre-factors $e^{-\pi \tau ^{2}m^{2}}$ in $S_{\tau }(\xi )$ eventually become indistinguishable from $e^{-\pi \tau ^{2}\xi ^{2}}$. Therefore the functions $s_{\tau }(x)$ and their respective Fourier transforms $S_{\tau }(\xi )$ converge to the same function and this limit function is a series of infinite equidistant Gaussian spikes, each spike being multiplied by the same pre-factor of one, i.e. the Dirac comb for unit period: $\lim _{\tau \rightarrow 0}s_{\tau }(x)=\operatorname {\text{Ш}} ({x}),$ and $\lim _{\tau \rightarrow 0}S_{\tau }(\xi )=\operatorname {\text{Ш}} ({\xi }).$ Since $S_{\tau }={\mathcal {F}}[s_{\tau }]$, we obtain in this limit the result to be demonstrated: ${\mathcal {F}}[\operatorname {\text{Ш}} ]=\operatorname {\text{Ш}} .$ The corresponding result for period $T$ can be found by exploiting the scaling property of the Fourier transform, ${\mathcal {F}}[\operatorname {\text{Ш}} _{T}]={\frac {1}{T}}\operatorname {\text{Ш}} _{\frac {1}{T}}.$ Another manner to establish that the Dirac comb transforms into another Dirac comb starts by examining continuous Fourier transforms of periodic functions in general, and then specialises to the case of the Dirac comb. In order to also show that the specific rule depends on the convention for the Fourier transform, this will be shown using angular frequency with $\omega =2\pi \xi :$ for any periodic function $f(t)=f(t+T)$ its Fourier transform ${\mathcal {F}}\left[f\right](\omega )=F(\omega )=\int _{-\infty }^{\infty }dtf(t)e^{-i\omega t}$ obeys: $F(\omega )(1-e^{i\omega T})=0$ because Fourier transforming $f(t)$ and $f(t+T)$ leads to $F(\omega )$ and $F(\omega )e^{i\omega T}.$ This equation implies that $F(\omega )=0$ nearly everywhere with the only possible exceptions lying at $\omega =k\omega _{0},$ with $\omega _{0}=2\pi /T$ and $k\in \mathbb {Z} .$ When evaluating the Fourier transform at $F(k\omega _{0})$ the corresponding Fourier series expression times a corresponding delta function results. For the special case of the Fourier transform of the Dirac comb, the Fourier series integral over a single period covers only the Dirac function at the origin and thus gives $1/T$ for each $k.$ This can be summarised by interpreting the Dirac comb as a limit of the Dirichlet kernel such that, at the positions $\omega =k\omega _{0},$ all exponentials in the sum $\sum \nolimits _{m=-\infty }^{\infty }e^{\pm i\omega mT}$ point into the same direction and add constructively. In other words, the continuous Fourier transform of periodic functions leads to $F(\omega )=2\pi \sum _{k=-\infty }^{\infty }c_{k}\delta (\omega -k\omega _{0})$ with $\omega _{0}=2\pi /T,$ and $c_{k}={\frac {1}{T}}\int _{-T/2}^{+T/2}dtf(t)e^{-i2\pi kt/T}.$ The Fourier series coefficients $c_{k}=1/T$ for all $k$ when $f\rightarrow \operatorname {\text{Ш}} _{T}$, i.e. ${\mathcal {F}}\left[\operatorname {\text{Ш}} _{T}\right](\omega )={\frac {2\pi }{T}}\sum _{k=-\infty }^{\infty }\delta (\omega -k{\frac {2\pi }{T}})$ is another Dirac comb, but with period $2\pi /T$ in angular frequency domain (radian/s). As mentioned, the specific rule depends on the convention for the used Fourier transform.
Indeed, when using the scaling property of the Dirac delta function, the above may be re-expressed in ordinary frequency domain (Hz) and one obtains again: $\operatorname {\text{Ш}} _{\ T}(t){\stackrel {\mathcal {F}}{\longleftrightarrow }}{\frac {1}{T}}\operatorname {\text{Ш}} _{\ {\frac {1}{T}}}(\xi )=\sum _{n=-\infty }^{\infty }\!\!e^{-i2\pi \xi nT},$ such that the unit period Dirac comb transforms to itself: $\operatorname {\text{Ш}} \ \!(t){\stackrel {\mathcal {F}}{\longleftrightarrow }}\operatorname {\text{Ш}} \ \!(\xi ).$ Finally, the Dirac comb is also an eigenfunction of the unitary continuous Fourier transform in angular frequency space to the eigenvalue 1 when $T={\sqrt {2\pi }}$ because for the unitary Fourier transform ${\mathcal {F}}\left[f\right](\omega )=F(\omega )={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }dtf(t)e^{-i\omega t},$ the above may be re-expressed as $\operatorname {\text{Ш}} _{\ T}(t){\stackrel {\mathcal {F}}{\longleftrightarrow }}{\frac {\sqrt {2\pi }}{T}}\operatorname {\text{Ш}} _{\ {\frac {2\pi }{T}}}(\omega )={\frac {1}{\sqrt {2\pi }}}\sum _{n=-\infty }^{\infty }\!\!e^{-i\omega nT}.$ Sampling and aliasing Multiplying any function by a Dirac comb transforms it into a train of impulses with integrals equal to the value of the function at the nodes of the comb. This operation is frequently used to represent sampling. $(\operatorname {\text{Ш}} _{\ T}x)(t)=\sum _{k=-\infty }^{\infty }\!\!x(t)\delta (t-kT)=\sum _{k=-\infty }^{\infty }\!\!x(kT)\delta (t-kT).$ Due to the self-transforming property of the Dirac comb and the convolution theorem, this corresponds to convolution with the Dirac comb in the frequency domain. $\operatorname {\text{Ш}} _{\ T}x\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\frac {1}{T}}\operatorname {\text{Ш}} _{\frac {1}{T}}*X$ Since convolution with a delta function $\delta (t-kT)$ is equivalent to shifting the function by $kT$, convolution with the Dirac comb corresponds to replication or periodic summation: $(\operatorname {\text{Ш}} _{\ {\frac {1}{T}}}\!*X)(f)=\!\sum _{k=-\infty }^{\infty }\!\!X\!\left(f-{\frac {k}{T}}\right)$ This leads to a natural formulation of the Nyquist–Shannon sampling theorem. If the spectrum of the function $x$ contains no frequencies higher than B (i.e., its spectrum is nonzero only in the interval $(-B,B)$) then samples of the original function at intervals ${\tfrac {1}{2B}}$ are sufficient to reconstruct the original signal. It suffices to multiply the spectrum of the sampled function by a suitable rectangle function, which is equivalent to applying a brick-wall lowpass filter. $\operatorname {\text{Ш}} _{\ \!{\frac {1}{2B}}}x\ \ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \ 2B\,\operatorname {\text{Ш}} _{\ 2B}*X$ ${\frac {1}{2B}}\Pi \left({\frac {f}{2B}}\right)(2B\,\operatorname {\text{Ш}} _{\ 2B}*X)=X$ In time domain, this "multiplication with the rect function" is equivalent to "convolution with the sinc function" (Woodward 1953, p.33-34). Hence, it restores the original function from its samples. This is known as the Whittaker–Shannon interpolation formula. Remark: Most rigorously, multiplication of the rect function with a generalized function, such as the Dirac comb, fails. This is due to undetermined outcomes of the multiplication product at the interval boundaries. As a workaround, one uses a Lighthill unitary function instead of the rect function. 
It is smooth at the interval boundaries, hence it yields determined multiplication products everywhere, see Lighthill 1958, p.62, Theorem 22 for details. Use in directional statistics In directional statistics, the Dirac comb of period $2\pi $ is equivalent to a wrapped Dirac delta function and is the analog of the Dirac delta function in linear statistics. In linear statistics, the random variable $(x)$ is usually distributed over the real-number line, or some subset thereof, and the probability density of $x$ is a function whose domain is the set of real numbers, and whose integral from $-\infty $ to $+\infty $ is unity. In directional statistics, the random variable $(\theta )$ is distributed over the unit circle, and the probability density of $\theta $ is a function whose domain is some interval of the real numbers of length $2\pi $ and whose integral over that interval is unity. Just as the integral of the product of a Dirac delta function with an arbitrary function over the real-number line yields the value of that function at zero, so the integral of the product of a Dirac comb of period $2\pi $ with an arbitrary function of period $2\pi $ over the unit circle yields the value of that function at zero. See also • Comb filter • Frequency comb • Poisson summation formula References 1. "The Dirac Comb and its Fourier Transform - DSPIllustrations.com". dspillustrations.com. Retrieved 2022-06-28. 2. Schwartz, L. (1951), Théorie des distributions, vol. Tome I, Tome II, Hermann, Paris 3. Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4 4. Bracewell, R. N. (1986), The Fourier Transform and Its Applications (revised ed.), McGraw-Hill; 1st ed. 1965, 2nd ed. 1978. 5. Rahman, M. (2011), Applications of Fourier Transforms to Generalized Functions, WIT Press Southampton, Boston, ISBN 978-1-84564-564-9. Further reading • Brandwood, D. (2003), Fourier Transforms in Radar and Signal Processing, Artech House, Boston, London. • Córdoba, A (1989), "Dirac combs", Letters in Mathematical Physics, 17 (3): 191–196, Bibcode:1989LMaPh..17..191C, doi:10.1007/BF00401584, S2CID 189883287 • Woodward, P. M. (1953), Probability and Information Theory, with Applications to Radar, Pergamon Press, Oxford, London, Edinburgh, New York, Paris, Frankfurt. • Lighthill, M.J. (1958), An Introduction to Fourier Analysis and Generalized Functions, Cambridge University Press, Cambridge, U.K.. 
Wikipedia
\begin{definition}[Definition:Slice Functor] Let $\mathbf C$ be a metacategory. Let $\mathbf{Cat}$ be the category of categories. The '''slice functor''' is the functor $\mathbf C / \cdot: \mathbf C \to \mathbf{Cat}$ defined by: {{begin-axiom}} {{axiom|lc= Object functor: |m = \mathbf C / C := \mathbf C / C }} {{axiom|lc= Morphism functor: |m = \mathbf C / f := f_* }} {{end-axiom}} where $\mathbf C / C$ is a slice category, and $f_*$ is the composition functor defined by $f$. The effect of $\mathbf C / \cdot$ is captured in the following diagram: ::$\begin{xy} <0em,0em>*+{A} = "a", <4em,0em>*+{B} = "b", <4em,-4em>*+{C}= "c", "a";"b" **@{-} ?>*@{>} ?<>(.5)*!/_1em/{f}, "b";"c" **@{-} ?>*@{>} ?<>(.5)*!/_.6em/{g}, "a";"c" **@{-} ?>*@{>} ?<>(.4)*!/^1em/{g \circ f}, "b"+/r4em/+/_3em/;"b"+/r8em/+/_3em/ **@{~} ?>*@2{>} ?*!/_1em/{\mathbf C / \cdot}, "a"+/r13em/*+{\mathbf C / A}="Fa", "b"+/r14em/*+{\mathbf C / B}="Fb", "c"+/r14em/+/_1em/*+{\mathbf C / C}="Fc", "Fa";"Fb" **@{-} ?>*@{>} ?<>(.5)*!/_1em/{f_*}, "Fb";"Fc" **@{-} ?>*@{>} ?<>(.5)*!/_1em/{g_*}, "Fa";"Fc" **@{-} ?>*@{>} ?<>(.7)*!/r3em/{\left({g \circ f}\right)_* \\ = g_* f_*}, \end{xy}$ where $g_* f_*$ denotes a composite functor. \end{definition}
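As a reading aid (not part of the ProofWiki entry), the following LaTeX fragment spells out the usual action of the composition functor $f_*$ on slice categories, consistent with the diagram above; it assumes the standard definition of $f_*$ as post-composition with $f$.

```latex
% Sketch: explicit action of f_* : \mathbf C / A \to \mathbf C / B for f : A \to B.
\begin{align*}
  f_* (X, a : X \to A) &= (X, f \circ a : X \to B)
    && \text{(on objects of the slice category)}\\
  f_* \bigl(h : (X, a) \to (Y, a')\bigr) &= h
    && \text{(on morphisms: } a' \circ h = a \text{ implies } (f \circ a') \circ h = f \circ a)\\
  (g \circ f)_* &= g_* \circ f_*, \qquad (\operatorname{id}_A)_* = \operatorname{id}_{\mathbf C / A}
    && \text{(functoriality of } \mathbf C / \cdot)
\end{align*}
```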
ProofWiki
February 2016, 36(2): 1005-1021. doi: 10.3934/dcds.2016.36.1005 Existence and stability of a two-parameter family of solitary waves for a 2-coupled nonlinear Schrödinger system Nghiem V. Nguyen 1, and Zhi-Qiang Wang 2, Department of Mathematics and Statistics, Utah State University, Logan, UT 84322, United States Center for Applied Mathematics, Tianjin University, Tianjin, 300072, China Received June 2014 Published August 2015 In this paper, the existence and stability results for a two-parameter family of vector solitary-wave solutions (i.e. both components are nonzero) of the nonlinear Schrödinger system \begin{equation*} \left\{ \begin{matrix} iu_t+ u_{xx} + (a |u|^2 + b |v|^2) u=0,\\ iv_t+ v_{xx} + (b |u|^2 + c |v|^2) v=0,\\ \end{matrix} \right. \end{equation*} where $u,v$ are complex-valued functions of $(x,t)\in \mathbb R^2$, and $a,b,c \in \mathbb R$, are established. The results extend our earlier ones as well as those of Ohta, Cipolatti and Zumpichiatti, and de Figueiredo and Lopes. As opposed to other methods used before to establish existence and stability, where the two constraints of the minimization problems are related to each other, our approach here characterizes solitary-wave solutions as minimizers of an energy functional subject to two independent constraints. The set of minimizers is shown to be stable, and depending on the interplay between the parameters $a,b$ and $c$, further information about the structures of this set is given. Keywords: Nonlinear Schrödinger system, solitary waves, orbital stability, variational problems, ground states. Mathematics Subject Classification: Primary: 35A15, 35B35, 35Q3. Citation: Nghiem V. Nguyen, Zhi-Qiang Wang. Existence and stability of a two-parameter family of solitary waves for a 2-coupled nonlinear Schrödinger system. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 1005-1021. doi: 10.3934/dcds.2016.36.1005 J. Albert and S. Bhattarai, Existence and stability of a two-parameter family of solitary waves for an NLS-KdV system, Adv. Differential Equations, 18 (2013), 1129. A. Ambrosetti and E. Colorado, Bound and ground states of coupled nonlinear Schrödinger equations, C. R. Math. Acad. Sci. Paris, 342 (2006), 453. doi: 10.1016/j.crma.2006.01.024. ________, Standing waves of some coupled nonlinear Schrödinger equations, J. London Math. Soc., 75 (2007), 67. T. Bartsch, N. Dancer and Z.-Q. Wang, A Liouville theorem, a-priori bounds, and bifurcating branches of positive solutions for a nonlinear elliptic system, Cal. of Var. and PDEs, 37 (2010), 345. doi: 10.1007/s00526-009-0265-y. T. Bartsch and Z.-Q. Wang, Note on ground states of nonlinear Schrödinger systems, Journ. Part. Diff. Eqns., 19 (2006), 200. T. Bartsch, Z.-Q. Wang and J. Wei, Bound states for a coupled Schrödinger system, J. Fixed Point Theory Appl., 2 (2007), 353. doi: 10.1007/s11784-007-0033-6. D. J. Benney and A. C. Newell, The propagation of nonlinear wave envelopes, Jour. Math. Phys., 46 (1967), 133. J. Byeon, Effect of symmetry to the structure of positive solutions in nonlinear elliptic problems, Jour. Diff. Eqns., 163 (2000), 429. doi: 10.1006/jdeq.1999.3737. T.
Cazenave, An Introduction to Nonlinear Schrödinger Equations, Textos de Métodos Matemáticos, (1989). _________, Semilinear Schrödinger equations, AMS-Courant Lecture Notes, 10 (2003). R. Cipolatti and W. Zumpichiatti, Orbitally stable standing waves for a system of coupled nonlinear Schrödinger equations, Nonlinear Anal., 42 (2000), 445. doi: 10.1016/S0362-546X(98)00357-5. E. N. Dancer, J. Wei and T. Weth, A priori bounds versus multiple existence of positive solutions for a nonlinear Schrödinger system, Ann. Inst. H. Poincaré Anal. Non Linéaire, 27 (2010), 953. doi: 10.1016/j.anihpc.2010.01.009. D. G. de Figueiredo and O. Lopes, Solitary waves for some nonlinear Schrödinger systems, Ann. Inst. H. Poincaré Anal. Non Linéaire, 25 (2008), 149. doi: 10.1016/j.anihpc.2006.11.006. D. Garrisi, On the orbital stability of standing-waves solutions to a coupled non-linear Klein-Gordon equation, Adv. Nonlinear Stud., 12 (2012), 639. A. Hasegawa and F. Tappert, Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers I. Anomalous dispersion, Appl. Phys. Lett., 23 (1973). doi: 10.1063/1.1654836. ________, Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers II. Normal dispersion, Appl. Phys. Lett., 23 (1973). I. Ianni and S. Le Coz, Multi-speed solitary wave solutions for nonlinear Schrödinger systems, J. London Math. Soc. (2), 89 (2014), 623. doi: 10.1112/jlms/jdt083. E. H. Lieb and M. Loss, Analysis, Second edition, Graduate studies in mathematics, (2001). doi: 10.1090/gsm/014. T.-C. Lin and J. Wei, Ground state of $N$ coupled nonlinear Schrödinger equations in $R^n$, $n\leq 3$, Comm. Math. Phys., 255 (2005), 629. doi: 10.1007/s00220-005-1313-x. P.-L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case. I, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), 109. _________, The concentration-compactness principle in the calculus of variations. The locally compact case. II, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), 223. Z. Liu and Z.-Q. Wang, Multiple bound states of nonlinear Schrödinger systems, Comm. Math. Phys., 282 (2008), 721. doi: 10.1007/s00220-008-0546-x. N. V. Nguyen and Z.-Q. Wang, Orbital stability of solitary waves for a nonlinear Schrödinger system, Adv. Diff. Eqns., 16 (2011), 977. N. V. Nguyen and Z.-Q. Wang, Orbital stability of solitary waves of a $3-$coupled nonlinear Schrödinger system, Non. Anal. A: Theory, 90 (2013), 1. doi: 10.1016/j.na.2013.05.027. M. Ohta, Stability of solitary waves for coupled nonlinear Schrödinger equations, Nonlinear Anal.: Theory, 26 (1996), 933. doi: 10.1016/0362-546X(94)00340-8. G. J. Roskes, Some nonlinear multiphase interactions, Stud. Appl. Math., 55 (1976). B. Sirakov, Least energy solitary waves for a system of nonlinear Schrödinger equations in $\mathbf {R^n}$, Comm. Math. Phys., 271 (2007), 199. doi: 10.1007/s00220-006-0179-x. X. Song, Stability and instability of standing waves to a system of Schrödinger equations with combined power-type nonlinearities, Jour. Math. Anal. Appl., 366 (2010), 345. doi: 10.1016/j.jmaa.2009.12.011. J. Yang, Multiple permanent-wave trains in nonlinear systems, Stud. Appl.
Math., 100 (1998), 127. doi: 10.1111/1467-9590.00073. V. E. Zakharov, Stability of periodic waves of finite amplitude on the surface of a deep fluid, Sov. Phys. Jour. Appl. Mech. Tech. Phys., 9 (1968), 190. doi: 10.1007/BF00913182. V. E. Zakharov, Collapse of Langmuir waves, Sov. Phys. JETP, 35 (1972), 908. A. K. Zvezdin and A. F. Popkov, Contribution to the nonlinear theory of magnetostatic spin waves, Sov. Phys. JETP, 2 (1983).
CommonCrawl
Applied Network Science Research | Open | Published: 27 June 2019 MinerLSD: efficient mining of local patterns on attributed networks Martin Atzmueller ORCID: orcid.org/0000-0002-2480-69011,2, Henry Soldano3,4, Guillaume Santini3 & Dominique Bouthinon3 Applied Network Sciencevolume 4, Article number: 43 (2019) | Download Citation Local pattern mining on attributed networks is an important and interesting research area combining ideas from network analysis and data mining. In particular, local patterns on attributed networks allow both the characterization in terms of their structural (topological) as well as compositional features. In this paper, we present MinerLSD, a method for efficient local pattern mining on attributed networks. In order to prevent the typical pattern explosion in pattern mining, we employ closed patterns for focusing pattern exploration. In addition, we exploit efficient techniques for pruning the pattern space: We adapt a local variant of the standard Modularity metric used in community detection that is extended using optimistic estimates, and furthermore include graph abstractions. Our experiments on several standard datasets demonstrate the efficacy of our proposed novel method MinerLSD as an efficient method for local pattern mining on attributed networks. The analysis of complex networks, e.g., by investigating structural properties and identifying interesting patterns, is an important task to make sense of such networks, in order to ultimately enable an understanding of their phenomena and structures, e.g., (Newman 2003; Kumar et al. 2006; Almendral et al. 2007; Mitzlaff et al. 2011; Silva et al. 2012; Mitzlaff et al. 2013; Atzmueller 2014; Pool et al. 2014; Galbrun et al. 2014; Mitzlaff et al. 2014; Kibanov et al. 2014; Soldano et al. 2015; Atzmueller et al. 2016; Bendimerad et al. 2016; Kaytoue et al. 2017; Atzmueller 2017; 2019). In this context, data mining on such networks represented as attributed graphs has recently emerged as a prominent research topic, e.g., (Moser et al. 2009; Silva et al. 2012; Atzmueller 2014; Galbrun et al. 2014; Soldano et al. 2015; Atzmueller et al. 2016; Bendimerad et al. 2016; Kaytoue et al. 2017). Methods for mining attributed graphs focus on the identification and extraction of patterns using topological information as well as compositional information on nodes and/or edges given by a set of attributes, e.g., (Atzmueller 2018; Wasserman and Faust 1994). In particular, local pattern mining focuses on the identification of dense substructures in a graph that are captured by specific patterns composed of the given attributes, e.g., for detecting communities (Moser et al. 2009; Silva et al. 2012; Pool et al. 2014; Galbrun et al. 2014; Soldano et al. 2015; Atzmueller et al. 2016). In this paper, an adapted and substantially extended revision of Atzmueller et al. (2018), we present MinerLSD a method for the efficient mining of local patterns on attributed networks. Compared to our work described in Atzmueller et al. (2018), we have added onto the discussion of the MinerLSD algorithm, also considering further related approaches for putting the proposed method into context. Furthermore, we have considerably extended the evaluation and discussion of the proposed novel algorithm with new experiments, also using new (larger) datasets, and by illustrating the pattern mining approach using exemplary patterns. 
MinerLSD focuses both on local pattern mining (e.g., for local community detection) using the local modularity metric (Newman 2004; Newman and Girvan 2004; Atzmueller et al. 2016), as well as graph abstraction that reduces graphs to k-core subgraphs (Soldano et al. 2015). In order to prevent the typical pattern explosion in pattern mining, we employ closed patterns. In addition, we exploit optimistic estimates for the local modularity for focussing pattern exploration inspired by community detection methods and for pruning the pattern space. Essentially, the optimistic estimate technique provides two advantages: First, it neglects the importance of a minimal support threshold which is typically applied in pattern mining. Second, it enables a very efficient pattern exploration approach, given a suitable threshold for the local modularity, as we will show below. Then, this threshold can of course alternatively be entirely eliminated in a top-k approach. We demonstrate the efficacy of our presented novel method MinerLSD by performing experiments on several standard datasets, in relation to two baselines for local pattern mining. Our contributions are summarized as follows: For local pattern mining on attributed graphs, we analyze the impact of generating closed patterns compared to standard pattern mining in terms of the search effort. Using two baseline algorithms, we further investigate the impact of pruning the pattern exploration space using an optimistic estimate of the local modularity measure with different thresholds. Finally, we propose the MinerLSD method for efficient local pattern mining on attributed graphs. MinerLSD relies on closed pattern mining, optimistic estimate pruning, and graph abstraction. The rest of this paper is organized as follows: Section "Related Work" discusses related work, before section "Background" introduces basic notions and concepts. After that, "The MinerLSD Algorithm" section presents the novel MinerLSD method. Next, section "Datasets" introduces the applied datasets. Section "Experiments and Results" discusses our experimental results. Finally, section "Conclusions" concludes with a summary and interesting directions for future work. The detection of local patterns is a prominent approach in knowledge discovery and data mining, e.g., (Morik 2002; Morik et al. 2005; Knobbe et al. 2008). Below, we discuss related work in the areas of local pattern mining, closed patterns, graph abstractions, and community detection on attributed graphs. In particular, the proposed novel MinerLSD algorithm builds on methods for those fields. Thus, similar to the approaches discussed below, the proposed MinerLSD approach also utilizes closed patterns, and graph abstractions, i.e., core subgraphs. However, it extends this using optimistic estimate pruning using an interestingness measure adapted from (local) community detection. In section "Experiments and Results", we perform an extensive evaluation of the impact of closed patterns, optimistic estimates, and core structures on the pattern mining effort. Pattern Mining In general, local pattern mining, e.g., (Agrawal and Srikant 1994; Han et al. 2000; Morik 2002; Morik et al. 2005; Knobbe et al. 2008; Lemmerich et al. 2012; Atzmueller 2015; Lemmerich et al. 2016) has many flavors, including association rule mining, subgroup discovery, and graph mining. At its core, it considers the support set of any pattern, i.e., the set of objects, often called transactions, in which the pattern occurs. 
The goal then is to enumerate the set of all patterns that satisfy some constraint. In the case of association rules (Agrawal and Srikant 1994; Han et al. 2000), typically the frequency of a pattern, or the frequency of an implication contained in the pattern, is considered. Whenever the constraint is anti-monotonic, as is the case for frequency, a top-down search may be efficiently pruned. Still, this results in investigating a lot of patterns. In the field of subgroup discovery, more complex constraints formalized in quality (or interestingness) functions have been proposed; here, these do not necessarily fulfill anti-monotonicity. To handle that, optimistic estimates for those quality functions have been proposed (Wrobel 1997; Grosskreutz et al. 2008; Atzmueller and Lemmerich 2009; Lemmerich et al. 2016) in order to efficiently prune the pattern search space. Closed pattern mining (see for instance (Pasquier et al. 1999)) reduces the search by considering patterns as equivalent when having the same support set, and generating only closed patterns, i.e., a most specific pattern among all equivalent patterns. Efficient enumeration algorithms have been provided, e.g., (Uno et al. 2004; Boley et al. 2010). Various algorithms and methodologies using closure operators have also been proposed in the domain of formal concept analysis (Wille 1982), which goes further than the enumeration alone, being interested in the lattice structure of the set of closed patterns (Ganter and Wille 1999). Local Pattern Mining on Attributed Networks For investigating complex networks, a popular approach consists of extracting a core subgraph from the network, i.e., some essential part of the graph whose nodes satisfy a local property. The k-core definition was first proposed in Seidman (1983). It requires all nodes in the core subgraph to have a degree of at least k. The idea was further extended to a wide class of so-called generalized cores (Batagelj and Zaversnik 2011). The resulting subgraphs may be made of several connected components that are then considered as structural communities. However, as this may be too weak to obtain cohesive communities, some post-processing may then be necessary. A successful method, for example, identifies k-communities (Palla et al. 2005) that are extracted from the connected components of a graph derived from the original graph. Recently an extension of the closed pattern mining methodology to attributed graphs has been proposed. It relies on the reduction of the support set of a pattern to the core of the pattern subgraph (Soldano and Santini 2014). This results in fewer and larger classes of equivalent patterns, and hence fewer closed patterns. The MinerLC algorithm proposed by Soldano et al. (2017) is a generic method to enumerate the set of such core closed patterns. The algorithm MinerLSD that we propose in "The MinerLSD Algorithm" section closely follows the MinerLC algorithm and adds requirements regarding the local modularity of the pattern core subgraphs. This is performed efficiently using the optimistic estimate pruning strategy of the COMODO algorithm for community detection, mentioned in section "Community Detection on Attributed Graphs". Community Detection on Attributed Graphs Communities and cohesive subgroups have been extensively studied in network science, e.g., using social network analysis methods (Wasserman and Faust 1994).
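Before turning to community detection in more detail, the closed-pattern notions from the preceding pattern-mining paragraphs can be made concrete with a tiny example (our own illustration; the transaction data and function names are made up): a pattern's support set, and its closure as the intersection of the itemsets in that support set.

```python
# Toy illustration of support sets and closed patterns on itemset data.

transactions = {
    "t1": {"a", "b", "c"},
    "t2": {"a", "b"},
    "t3": {"a", "b", "d"},
    "t4": {"c", "d"},
}

def ext(pattern):
    """Support set: all transactions containing the pattern."""
    return {t for t, items in transactions.items() if pattern <= items}

def closure(pattern):
    """Most specific pattern occurring in exactly the same transactions."""
    support = ext(pattern)
    return set.intersection(*(transactions[t] for t in support)) if support else None

q = {"a"}
print(ext(q))                       # {'t1', 't2', 't3'}
print(closure(q))                   # {'a', 'b'}: {'a'} and {'a', 'b'} are equivalent, {'a', 'b'} is closed
print(ext(closure(q)) == ext(q))    # True - taking the closure does not change the support set
```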
Fortunato (2010) presents a thorough survey on state-of-the-art community detection algorithms in graphs, focussing on detecting disjoint communities, e.g., (Newman and Girvan 2004; Fortunato and Castellano 2007). In contrast to such partitioning approaches, overlapping communities allow an extended modeling of actor–actor relations in social networks: Nodes of a corresponding graph can then participate in multiple communities, e.g., (Palla et al. 2007; Lancichinetti et al. 2009; Xie and Szymanski 2013). A comprehensive survey on algorithms for overlapping community detection is provided in Xie et al. (2013). In contrast to the algorithms and approaches discussed above, the proposed approach utilizes further descriptive information of attributed graphs, e.g., (Bothorel et al. 2015). Attributed (or labeled) graphs as richer graph representations enable approaches that specifically exploit the descriptive information of the labels assigned to nodes and/or edges of the graph. Exemplary approaches include density-based methods, e.g., (Zhou et al. 2009; Combe et al. 2015), distance-based methods, e.g., (Steinhaeuser and Chawla 2008; Ge et al. 2008), entropy-based methods, e.g., (Zhu et al. 2011; Smith et al. 2014), model-based methods, e.g., (Balasubramanyan and Cohen 2011; Xu et al. 2012), seed-centric methods, e.g., (Kanawati 2014a; Yakoubi and Kanawati 2014; Kanawati 2014b; Belfin et al. 2018) and finally pattern mining approaches, which we will describe in the following in more detail. Pattern mining approaches for community detection on attributed graphs typically connect (local) pattern mining and community detection according to several interestingness measures or optimization criteria. Moser et al. (2009), for example, combine the concepts of dense subgraphs and subspace clusters for mining cohesive patterns. Starting with quasi-cliques, those are expanded until constraints regarding the description or the graph structure are violated. Similarly, Günnemann et al. (2013) combine subspace clustering and dense subgraph mining, also interleaving quasi-clique and subspace construction. Galbrun et al. (2014) propose an approach for the problem of finding overlapping communities in graphs and social networks that aims to detect the top-k communities so that the total edge density over all k communities is maximized. This is also related to a maximum coverage problem for the whole graph. For labeled graphs, each community is required to be described by a set of labels. The algorithmic variants proposed by Galbrun et al. apply a greedy strategy for detecting dense subgroups, and restrict the resulting set of communities, such that each edge can belong to at most one community. This partitioning involves a global approach to the community quality, in contrast to our local approach. Silva et al. (2012) study the correlation between attribute sets and the occurrence of dense subgraphs in large attributed graphs. The proposed method considers frequent attribute sets using an adapted frequent item mining technique, and identifies the top-k dense subgraphs induced by a particular attribute set, called structural correlation patterns. The DCM method presented by Pool et al. (2014) includes a two-step process of community detection and community description.
A heuristic approach is applied for discovering the top-k communities, utilizing a special interestingness function which is based on counting outgoing edges of a community; for that, they also demonstrate the trend of a correlation with the Modularity function. The COMODO algorithm proposed by Atzmueller et al. (2016) applies an adapted subgroup discovery (Atzmueller and Puppe 2006; Atzmueller 2015) approach for community detection on attributed graphs. That is, COMODO applies subgroup discovery for detecting interesting patterns (constructed from the set of compositional attributes) whose interestingness is evaluated on the graph topological structure. The algorithm works on an edge dataset that is attributed with common attributes of the respective nodes. Then, communities are detected in a top-k approach maximizing a given community interestingness measure. This includes, among others, the local modularity, which is derived from the (global) measure, i.e., the (Newman) Modularity (Newman 2004; Newman and Girvan 2004). For an efficient community detection approach, COMODO utilizes optimistic estimate pruning. In this paper, we adapt the COMODO approach, integrating the optimistic estimate pruning for the local modularity proposed by COMODO with the closed abstract pattern mining of the MinerLC algorithm. This results in the efficient and effective MinerLSD algorithm, making use of efficient techniques based on abstract closed pattern mining and branch-and-bound pruning according to the local modularity. At the same time, these techniques allow effective selection strategies utilizing graph abstractions together with local modularity, as we will show below. In the following, we outline the background on closed local pattern mining, introduce pruning based on optimistic estimates, and discuss pattern exploration, abstraction, and selection combining principles from pattern mining and graph mining, i.e., utilizing closure on the attribute space and topological criteria based on local modularity (estimates) and k-cores. Mining Closed Patterns to Enumerate Core Subgraphs We consider the following general problem: Let G be an attributed graph, i.e., a graph where each vertex v is described by an itemset D(v) taken from a set of items I. We want to enumerate all (maximal) vertex subsets W in G such that there exists an itemset q which is a subset of all itemsets D(v),v∈W. W is furthermore required to satisfy some graph-related constraints. In standard terminology, q is a pattern that occurs in all elements of W which is also called the support set or extension ext(q) of q. Efficient top-down enumeration algorithms exist as long as the constraints are anti-monotonic: whenever the constraint fails to be satisfied by some pattern, it also fails for all more specific patterns. This is obviously the case for the minimum support constraint that requires the size of ext(q) to be above some minimal support threshold s. A first way to reduce the overall search space and the size of the solution set is to avoid duplicates, i.e., patterns q,q′ that occur in the same subgroup, for which ext(q)=ext(q′). This is obtained by only enumerating closed patterns. Given any pattern q the associated closed pattern is the most specific pattern f(q) which occurs in the same subgroup as q, i.e., ext(f(q))=ext(q).
Furthermore, since we consider the vertices of a graph, it is natural to consider graph-related constraints, as for instance requiring that all vertices have a degree of at least k in the subgroup graph GW. For that purpose, each candidate subgroup X is reduced to its core p(X)=W using the core operator p. We start with the definition of closure: The operator f that returns for any pattern q the closed pattern f(q) is a closure operator (see below) defined by f(q)=int∘p∘ext(q); the respective operators are defined as follows (note that ∘ denotes function composition): The intersection operator int(X) returns the most specific pattern occurring in the vertex subset X. The core operator p(X) returns the core, according to some core definition, of the subgraph GX of G induced by the vertex subset X. p is an interior operator (see below). Let S be an ordered set and f:S→S a self map such that for any x,y∈S, f is monotone, i.e., x≤y implies f(x)≤f(y), and idempotent, i.e., f(f(x))=f(x): - If f(x)≥x, f is called a closure operator. - If f(x)≤x, f is called an interior operator. Essentially, core closed pattern mining relies on three main results: It has been shown that whenever p is an interior operator, f=int∘p∘ext is a closure operator (Pernelle et al. 2002). Furthermore, core definitions rely on a monotone property of a vertex within an induced subgraph (Batagelj and Zaversnik 2002). For instance, the k-core of a subgraph GX is defined as the largest vertex subset W⊆X such that in the induced subgraph GW all vertices v have a degree of at least k. The property is monotone in the sense that when increasing GX to $G_{X'}$ the degree of v cannot decrease. Finally, it has been shown that the core operator which returns the core of some subgraph GX, according to a monotone property, is an interior operator (Soldano and Santini 2014). Overall, this means that f(q) returns the largest pattern which occurs in the core of the vertex subset ext(q) in which q occurs. This is exploited in core closed pattern mining (Soldano et al. 2017), performing a top-down search of the pattern space jumping from closed pattern to closed pattern: each closed pattern q is augmented with some item x, then the next closed pattern f(q∪{x}) is computed. Pruning Local Patterns in Graphs Using Optimistic Estimates Another way to reduce the solution set is to consider some interestingness measure M and require a subgroup W to induce a subgraph GW with an interestingness M(W) above some threshold. However, such measures, for example, the local modularity (see below), are usually not anti-monotonic. This difficulty may be overcome by using some optimistic estimate of M which is both anti-monotonic and allows an efficient pruning of the search space. Optimistic estimates are one prominent option in local pattern mining to prune search spaces by complementing non-(anti)-monotonic interestingness measures by their respective optimistic estimators, e.g., (Grosskreutz et al. 2008; Wrobel 1997). Intuitively, if for a given pattern (and all of its potential specializations) it can be proven that their quality is either below the quality of the current top patterns, or below a specified threshold, then pattern exploration does not need to continue for that pattern, and the search space can often be pruned significantly. 
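Before turning to concrete quality measures, the core closure f = int∘p∘ext introduced above can be illustrated by extending the previous toy sketch with a small graph, using the k-core of networkx as the interior operator p. This is again a sketch on hypothetical data, illustrating the operators rather than the MinerLC/MinerLSD implementation:

```python
# Sketch of the core closure f(q) = int(p(ext(q))) with the k-core as interior operator p.
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)])   # a triangle plus a pending path
D = {1: {"a", "b"}, 2: {"a", "b"}, 3: {"a", "b"}, 4: {"a"}, 5: {"a", "c"}}

def ext(q):
    return {v for v in G if q <= D[v]}

def p_kcore(X, k):
    """p(X): vertex set of the k-core of the subgraph induced by X (an interior operator)."""
    return set(nx.k_core(G.subgraph(X), k).nodes())

def core_closure(q, k):
    """f(q) = int(p(ext(q))): the largest pattern occurring on the k-core of ext(q)."""
    W = p_kcore(ext(q), k)
    return set.intersection(*(D[v] for v in W)) if W else set().union(*D.values())

print(ext({"a"}))              # {1, 2, 3, 4, 5}: 'a' occurs at every vertex
print(p_kcore(ext({"a"}), 2))  # {1, 2, 3}: only the triangle survives the 2-core reduction
print(core_closure({"a"}, 2))  # {'a', 'b'}: the core closed pattern associated with {'a'}
```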
In the scope of local pattern mining on graphs, several standard community quality functions have been investigated, also specifying optimistic estimates for a number of such community evaluation functions. As shown in Atzmueller et al. (2016) these lead to a quite efficient approach for descriptive community detection using local pattern mining. In summary, using optimistic estimates we can enumerate pairs (c,W), of pattern c and subgroup W inducing the subgraph GW. Then, we can select subgraphs according to an interestingness measure M of the subgraph using an anti-monotonic optimistic estimate of M to prune the search. Additionally, a minimal support constraint can also be applied in order to improve the effectiveness of pruning. Below, we summarize main results on using optimistic estimate pruning for community detection, specifically addressing the (local) modularity quality measure. Here, the concept of a community intuitively describes a group W of individuals out of a population such that members of W are strongly "connected" to each other but sparsely "connected" to those individuals that are not contained in W. This notion translates to communities as vertex sets W⊆V of an undirected graph G=(V,E); in the following, we adopt the notation of Atzmueller et al. (2016) for introducing the main concepts: n:=|V|, m:=|E|, and mW:=|{{u,v}∈E:u,v∈W}| denotes the number of intra-edges of W. There are different interestingness measures for estimating the quality of a community $2^{V}\rightarrow \mathbb {R}$, also according to different criteria and intuitions about what "makes up" a good community. One particular community quality function is the Modularity (Newman 2004; Newman and Girvan 2004). In the context of local pattern mining, we aim to maximize local quality functions for single communities. For that, we apply an adaptation of the Modularity interestingness measure, which essentially is a global measure estimating the quality of a community partitioning. Then, we focus on the modularity contribution of each individual community in order to obtain a local measure for each community, cf., (Atzmueller et al. 2016), which we further call local modularity (MODL). Overall, the Modularity MOD (Newman 2004; Newman and Girvan 2004; Newman 2006) of a graph clustering with k communities C1,…,Ck⊆V focuses on the number of edges within a community and compares that with the expected such number given a null-model (i.e., a corresponding random graph where the node degrees of G are preserved). It is given by $$ \text{MOD} = \frac{1}{2m}\sum_{u,v\in V}\left(A_{u,v} - \frac{\mathrm{d}(u)\mathrm{d}(v)}{2m}\right)\delta(C(u), C(v))\,, $$ where C(i) denotes for i∈V the community to which node i belongs. Au,v denotes the respective entry of the adjacency matrix A. δ(C(u),C(v)) is the Kronecker delta symbol that equals 1 if C(u)=C(v), and 0 otherwise. The modularity contribution of a single community given by a vertex set W,W⊆V in a local context (e.g., in a subgraph induced by the pattern), i.e., the local modularity (MODL), can then be computed (cf., (Newman 2006; Nicosia et al. 2009; Atzmueller et al. 2016)) as follows: $$ \text{MODL}(W) = \frac{m_{W}}{m} - \sum_{u,v\in W}\frac{\mathrm{d}(u) \mathrm{d}(v)}{4m^{2}}\,. $$ For the above (MODL), an optimistic estimate has been introduced in Atzmueller et al. (2016). 
It can be derived based only on the number of edges mW within the community: $$\begin{array}{@{}rcl@{}} \text{oe}(\text{MODL})(W) = \left\{\begin{array}{ll} 0.25, & \text{if }m_{W} \geq \frac{m}{2}\,,\\ \frac{m_{W}}{m} - \frac{m_{W}^{2}}{m^{2}}, & \text{otherwise.} \end{array}\right. \end{array} $$ For a detailed discussion, the derivation of the local measure, and the respective proofs, we refer to Atzmueller et al. (2016). Local Pattern Exploration, Abstraction, and Selection Pattern mining commonly aims at discovering a set of novel, potentially useful, and ultimately interesting patterns from a given (large) data set (Fayyad et al. 1996). For pattern exploration, we apply local pattern mining, in particular, (abstract) closed pattern mining (Pasquier et al. 1999; Uno et al. 2004; Boley et al. 2010; Soldano and Santini 2014; Soldano et al. 2017) due to its efficient traversal of the search space for pattern enumeration and abstraction as discussed above. Regarding pattern selection, we discuss the choices of core abstraction and modularity-based selection in the following: In contrast to many methods used in network analysis and graph mining, pattern mining on attributed graphs specifically aims at a description-oriented view, by including patterns on attributes, but also considering the topological structure. Many community mining algorithms, for example, only collect sets of nodes denoting the individual communities thus merely focusing on structural/topological aspects of the graph; typically, then there is no simple and easily interpretable description, such that a community would be represented mainly as a set of IDs, cf., (Atzmueller et al. 2016). For local pattern mining, the goal is typically to detect a set of the most interesting patterns according to a given quality function, e.g., with a quality above a certain threshold, or the top-k patterns according to the ranking of the quality function denoting their interestingness. For subgroup discovery, as an exemplary instance, the goal is then to obtain the set of patterns covering subgroups that are "as large as possible and have the most unusual statistical characteristic with respect to the property of interest" (Wrobel 1997). Thus, the interestingness of a pattern can then be flexibly defined, e.g., by a significant deviation from a model that is derived from the total population (Morik 2002; Morik et al. 2005; Knobbe et al. 2008). Therefore, typically the size of a pattern or the size of its extension, respectively, and the deviation compared to some null-model specifies the interestingness which is formalized in the quality function for ranking the patterns. For pattern mining on networks and graphs, there exist several quality measures, usually taking into account the support of the pattern, i.e., its size, similar to the criteria discussed above. Furthermore, the topological structure of the subgraph induced by the pattern is also taken into account. Here, standard quality functions include the segregation index (Freeman 1978), the average out degree fraction (Yang and Leskovec 2012), the conductance (Leskovec et al. 2008) and the Modularity (Newman and Girvan 2004), as we have discussed in the previous section. 
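Referring back to the definitions of MODL and oe(MODL) above, both quantities are straightforward to compute for a vertex subset W. The following minimal sketch (using networkx, a built-in example graph, and an arbitrarily chosen subgroup) illustrates them, as well as the fact that the estimate never underestimates the measure:

```python
# Sketch: local modularity MODL(W) and its optimistic estimate oe(MODL)(W).
import networkx as nx

def modl(G, W):
    m = G.number_of_edges()
    m_w = G.subgraph(W).number_of_edges()        # number of intra-edges of W
    deg_sum = sum(d for _, d in G.degree(W))     # sum of (global) degrees over W
    # The double sum over ordered pairs u,v in W factorizes into (sum of degrees)^2.
    return m_w / m - deg_sum ** 2 / (4 * m ** 2)

def oe_modl(G, W):
    m = G.number_of_edges()
    m_w = G.subgraph(W).number_of_edges()
    return 0.25 if m_w >= m / 2 else m_w / m - m_w ** 2 / m ** 2

G = nx.karate_club_graph()
W = [0, 1, 2, 3, 7, 13]                          # an arbitrary candidate subgroup
print(round(modl(G, W), 4), round(oe_modl(G, W), 4))
assert oe_modl(G, W) >= modl(G, W)               # oe(MODL) is an optimistic estimate
```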
In general, the core idea of the evaluation function is to apply an objective evaluation criterion (for example, for the Modularity, the number of connections within the community compared to the statistically "expected" number based on all available connections in the network) and to prefer those communities that optimize the evaluation function. A thorough empirical analysis of the impact of different community mining algorithms and their corresponding objective functions on the resulting community structures is presented in Leskovec et al. (2010), based on the analysis of community structure in graphs (as presented in Leskovec et al. (2008)). Furthermore, Atzmueller et al. (Atzmueller and Mitzlaff 2010; 2011; Atzmueller et al. 2016) have empirically investigated different community quality functions in the scope of local pattern mining. As shown there, the local modularity quality function yielded the best results for pattern filtering and pruning in local pattern mining applications: it provides large, high-quality communities (i.e., subgroups referring to the induced subgraphs), smaller patterns in terms of their description, as well as statistically significant patterns, whereas the other mentioned quality functions focus on smaller subgroups that were typically not statistically significant, as presented in detail in Atzmueller et al. (2016). Furthermore, the local modularity quality function (see Eq. 2) intuitively provides the prominent property of assigning a higher ranking to larger (core) subgraphs under consideration, if these are considerably more densely connected than expected by chance. Therefore, these criteria conveniently capture the notion of preferring larger subgraphs that have the most unusual statistical characteristics with respect to the null-model. In the following, we show how these criteria are directly implemented in the local modularity measure. Consider the local modularity MODL(W) of a subgraph W: $$\text{MODL}(W) = \frac{m_{W}}{m} - \sum_{u,v\in W}\frac{\mathrm{d}(u) \mathrm{d}(v)}{4m^{2}} = \frac{1}{m}\left(m_{W} - \sum_{u,v\in W}\frac{\mathrm{d}(u) \mathrm{d}(v)}{4m}\right)\,. $$ Since the factor $\frac {1}{m}$ is a constant, we can consider the bracketed term of the latter expression: it is order equivalent to the local modularity function MODL, since it differs from it only by the fixed factor $\frac {1}{m}$; by not including that factor, it is simply not normalized relative to the number of edges of the graph. Instead, it focuses on the number of edges of the (core) subgraph (the minuend of the term) and on its deviation from the null-model (the subtrahend of the term). Thus, it is easy to see that the MODL function tends to focus on larger patterns (larger subgraphs) having the most unusual statistical characteristics with respect to the null-model. By utilizing appropriate constraints on the graph structure, e.g., k-core abstractions, we can further focus on the unusual distributional characteristics. By applying k-core abstractions, for example, with increasing k we tend to focus on increasingly denser pattern structures (subgraphs). We will also show this in our experiments in section "Experiments and Results" when we discuss our results. To sum up, we apply the local modularity measure MODL as introduced above for focusing pattern exploration on the statistically most unusual subgraphs. 
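A much-simplified sketch of how such MODL-focused exploration can be organized is given below: closed patterns are enumerated top-down, each extension is reduced to its k-core, branches are pruned via oe(MODL), and the surviving pairs are selected by a MODL threshold. This is only an illustration of the general scheme on toy data, not the actual MinerLSD implementation, which is described in the following section:

```python
# Simplified top-down search: core closed patterns + oe(MODL) pruning + MODL selection.
import networkx as nx

def mine(G, D, k, lm):
    m = G.number_of_edges()
    items = sorted(set().union(*D.values()))

    def modl(W):
        m_w = G.subgraph(W).number_of_edges()
        return m_w / m - sum(d for _, d in G.degree(W)) ** 2 / (4 * m ** 2)

    def oe(W):
        m_w = G.subgraph(W).number_of_edges()
        return 0.25 if m_w >= m / 2 else m_w / m - m_w ** 2 / m ** 2

    def core_ext(q):  # p∘ext: the k-core of the pattern's extension
        X = [v for v in G if q <= D[v]]
        return frozenset(nx.k_core(G.subgraph(X), k).nodes())

    results, seen = [], set()

    def expand(q):
        W = core_ext(q)
        if not W or W in seen:          # empty core, or this (closed pattern, core) was handled
            return
        seen.add(W)
        c = frozenset.intersection(*(frozenset(D[v]) for v in W))  # closure on the core
        if oe(W) < lm:                  # optimistic estimate pruning: no descendant can qualify
            return
        if modl(W) >= lm:
            results.append((set(c), set(W)))
        for x in items:
            if x not in c:
                expand(c | {x})         # jump from closed pattern to closed pattern

    expand(frozenset())
    return results

G = nx.karate_club_graph()
D = {v: {"hi"} if G.nodes[v]["club"] == "Mr. Hi" else {"off"} for v in G}
print(mine(G, D, k=2, lm=0.05))
```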
Applying k-core constraints further helps due to the focus on denser subgraphs, as also theoretically analyzed in Peng et al. (2014) for k-cores. Overall, we specifically focus on "nuggets in the data" (Klösgen 1996), i.e., on exceptional patterns according to the principles of local pattern mining. In addition, the local modularity reduces the importance of a minimal support threshold, which is typically applied in pattern mining, since it directly includes the size of the pattern as a criterion. This enables a very efficient pattern mining approach, given either a suitable threshold for the local modularity or a targeted number of top-k patterns. The MinerLSD Algorithm In the following, we describe our proposed novel method MinerLSD in detail. MinerLSD integrates core subgraph closed pattern mining with pattern selection according to the local modularity MODL function, and optimistic estimate pruning according to a specific optimistic estimator, i.e., oe(MODL). As input parameters, MinerLSD requires a graph G=(V,E), a set of items I, a dataset D describing vertices as itemsets, and a core operator p. p depends on G, and to any image p(X)=W we associate the core subgraph C whose vertex set is vs(C)=W. In our experiments, p(X) returns the k-core of X. As further parameters, MinerLSD considers the corresponding value k as well as a frequency threshold s (defaulting to 0) and a local modularity threshold lm. The algorithm outputs the frequent pairs (c,W) where c is a core closed pattern and W=p∘ext(c) is its associated k-core. For evaluation purposes, we also count the number of patterns above the local modularity threshold (#lm), and the number of patterns whose estimate is above the local modularity threshold (#lme). It is important to note that, in the enumeration step, MinerLSD ensures that each pair (c,W) is enumerated (at most) once. Datasets We performed our experiments utilizing a variety of attributed graph datasets ranging from small to medium graphs with small to large sets of items. Table 1 depicts the main characteristics of these datasets (see also (Galbrun et al. 2014)), which have been previously used in pattern mining tasks on attributed graphs. For each dataset, we indicate the number of edges (|E|), vertices (|V|) and labels (|L|), the average vertex degree ($\overline {deg(v)}$) and average number of labels per vertex ($\overline {|l(v)|}$) in the table. Table 1 Datasets/Characteristics: Number of edges (|E|), vertices (|V|), labels (|L|), the average vertex degree ($\overline {deg(v)}$), and average number of labels per vertex ($\overline {|l(v)|}$) S50 is a standard attributed graph dataset used in previous work on graph abstractions (Soldano and Santini 2014). Footnote 1 It represents 148 friendship relations between 50 pupils of a school in the West of Scotland; the labels concern the students' substance use (tobacco, cannabis and alcohol) and sporting activity. The values of the corresponding variables are ordered (see (Soldano and Santini 2014) for details). The Lawyers dataset concerns a network study of corporate law partnership that was carried out in a Northeastern US corporate law firm in New England from 1988 to 1991 (Lazega 2001). It comprises 71 attorneys (partners and associates) of this firm who are the vertices of four networks. In the resulting data, each attorney is described using various attributes. 
Footnote 2 We consider the advice network, which is originally a directed graph, in an undirected version, so that two lawyers are connected if at least one of them asks the other for advice. The CoExp dataset models a representative regulatory network for yeast obtained from microarray expression data processed by the CoRegNet (Nicolle et al. 2015) program. In the CoExp dataset the vertices are co-regulators, and they are linked if they share a common set of target genes. The vertices are labeled with their influence profile along a metabolic transition of the organism. Each influence value represents the regulation activity of the considered co-regulator at some instant of the metabolic transition. LastFM, DBLP.C and DBLP.XL were used in Galbrun et al. (2014). LastFM models the social network of last.fm where individuals are described by the artists or groups they have listened to. DBLP.C contains a co-authorship graph built from a set of publication references extracted from DBLP, covering researchers that have published in the ICDM conference. The authors are labeled by keywords extracted from the papers' titles. DBLP.XL is the complete labeled DBLP co-authorship network used in Galbrun et al. (2014). DBLP.P was used in Bechara Prado et al. (2013). It represents a co-authorship graph built from a set of publication references extracted from DBLP, published between January 1990 and February 2011 in the major conferences or journals of the Data Mining and Database communities. Three labels have been added to the original dataset based on the scope of the conferences and journals, respectively: DB (databases), DM (data mining) and AI (artificial intelligence). Delicious consists of the social (friendship) network of the resource sharing system delicious where individuals are described by their bookmarks' tags. The dataset is publicly available and was obtained from the HetRec workshop (Cantador et al. 2011) at Recsys 2011.Footnote 3 DBLP.S was used in Silva et al. (2012). It also represents a co-authorship network from a set of publication references extracted from DBLP. Experiments and Results In the following, we first summarize the applied baseline methods that were used in the comparison with the presented MinerLSD method. After that, we present our experimental results on the datasets described in the "Datasets" section. Baseline Methods The applied set of baseline methods consists of MinerLC, an efficient algorithm for mining core closed patterns, and COMODO, an efficient algorithm for descriptive community detection using optimistic estimates. MinerLC MinerLCFootnote 4 (cf., (Soldano et al. 2017)) enumerates pairs (c,W) where GW is the core subgraph of pattern c, i.e., the subgroup W=p∘ext(c), where ∘ is the composition operator, p is a core operator, and c is the largest pattern that occurs in W, called a core closed pattern. A threshold on the core sizes allows selecting frequent core closed patterns and pruning the search accordingly. The selection process then relies partly on the anti-monotonic support constraint and partly on the fact that there are fewer pattern core subgraphs than pattern subgraphs, as various pattern subgraphs Gext(q) may be reduced to the same core subgraph. The COMODO algorithmFootnote 5 presented in Atzmueller et al. (2016) performs description-oriented community detection in order to discover the top-k communities. In summary, COMODO enumerates pairs (c,W) where GW is the subgraph of pattern c for vertex subset W. 
It selects the top-k subgraphs according to an interestingness measure M of the subgraph and uses an efficient anti-monotonic optimistic estimate of M to prune the search. Additionally, a minimal support constraint can also be applied in order to improve the effectiveness of pruning. Similarities and Differences in Pattern Selection Both considered baseline methods, i.e., MinerLC and COMODO, output a set of pairs (pattern, vertex subset). However, in order to compare their outputs we have to consider the following differences: In COMODO, the vertex subset W is obtained as the extremities of the set of edges in which a pattern occurs; a pattern occurs in an edge whenever it occurs, in the original dataset, in both connected vertices. That is, for each edge we assign the set of common items of both nodes, such that a pattern always covers two nodes connected by an edge. As a consequence, W ignores isolated nodes in which the pattern occurs. To obtain the same vertex subset in MinerLC (and MinerLSD) it is necessary to remove isolated nodes, which is enabled by applying a 1-core graph abstraction. Since COMODO does not enumerate closed patterns, the same subgroup may be associated with several patterns. In that case, a post-processing step is needed to eliminate the duplicates from the list of subgroups, which may then be compared to the subgroups in the MinerLC pairs. This post-processing is one of the standard post-processing options of COMODO. MinerLC is run with a core definition while COMODO uses various parameters to limit the enumeration, for instance the top-k parameter. To compare the results, MinerLC (as well as MinerLSD) should be run with the same minimum support threshold as COMODO and should only use a 1-core abstraction. The other parameters of COMODO should then have values that do not limit the enumeration, e.g., by providing a sufficiently large top-k parameter to enable an exhaustive enumeration. Furthermore, MinerLC and COMODO select patterns according to different criteria. This is exemplified in Fig. 1, in which we have three graphs and three subgraphs induced by three vertices (in red). The subgraph G123 of the top graph G is a 2-core with a local modularity of 0.178. Within the central graph, the subgraph G123 is also a 2-core but with a low local modularity of -0.15. Finally, within the bottom graph, G123 is not a 2-core (its 2-core subgraph is empty), yet it has a high local modularity of 0.16. Three graphs (top, center, bottom) each with a subgraph displayed in red. The two topmost subgraphs are 2-cores while the bottom subgraph has an empty 2-core. The top and bottom graphs have a local modularity above 0.15 while the central one has a negative local modularity score of -0.15 In our experiments below, we first investigate the impact of closure, before we focus on the k-core abstraction. We perform a detailed analysis of the efficiency of using the local modularity estimate for pruning the search space. Finally, we provide a structural pattern set analysis considering different metrics, and discuss exemplary patterns for illustrating the efficacy of the proposed approach. Parameters and Datasets For MinerLSD, it is important to note that in our experiments described below we did not have to use the minimal support s, since the local modularity threshold is efficient enough to strongly reduce the number of patterns. 
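For completeness, the duplicate-elimination post-processing described in the comparison above (keeping, for each subgroup, only the most specific of the equivalent patterns) can be sketched as follows; the in-memory representation of the (pattern, subgroup) pairs is hypothetical and not tied to the actual tool output formats:

```python
# Sketch: keep only the most specific (i.e., closed) description per subgroup.
def keep_closed(pairs):
    """pairs: iterable of (pattern, subgroup), with pattern and subgroup given as sets."""
    best = {}
    for pattern, subgroup in pairs:
        key = frozenset(subgroup)
        # Among equivalent patterns (same subgroup), keep the largest description seen.
        if key not in best or len(pattern) > len(best[key]):
            best[key] = frozenset(pattern)
    return [(set(p), set(w)) for w, p in best.items()]

pairs = [({"a"}, {1, 2, 3}), ({"a", "b"}, {1, 2, 3}), ({"b"}, {1, 2, 3, 4})]
print(keep_closed(pairs))   # one pair per subgroup, with the most specific description
```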
Below, we consider the following pattern quantities, where the (closed pattern, support set) pairs (c,e) are output by MinerLC unless specified otherwise; also, we consider a given local modularity threshold lm.
#c: the number of pairs (c,e).
#lme: the number of pairs (c,e) such that oe(MODL)(e)≥lm.
#nec: the number of (necessary) pairs (c,e) a top-down search has to consider to ensure that no pair with oe(MODL)(e)≥lm is lost; see the "Pruning: Efficiency of the Local Modularity Estimate" section for details and results on #nec.
#lm: the number of pairs (c,e) such that MODL(e)≥lm.
#lmeSD: the number of pairs (c,e) such that oe(MODL)(e)≥lm, as generated by COMODO.
We ran the original COMODO and MinerLC programs as available. MinerLSD is derived from the sources of MinerLC and is to be found on the MinerLC web siteFootnote 6. A new MinerLC version integrates the MinerLSD developments. The experimental results presented here may then be obtained using appropriate parameters and options of the new software. Impact of Closed Patterns in Reducing the Search Space MinerLSD searches a space of closed patterns while COMODO searches the whole pattern space. Therefore, we will investigate the impact of the closure reduction, for each local modularity threshold lm. For that, we first consider the quantity #lme of core closed patterns with a local modularity estimate above lm, as provided by MinerLSD, when using 1-cores. We then consider the quantity #lmeSD of patterns developed by COMODO using the same threshold. Table 2 reports #lme and #lmeSD for our datasets under investigation. Table 2 Number of patterns to develop in MinerLSD and COMODO (according to the respective local modularity threshold 0.005... 0.15) using a 1-core abstraction for MinerLSD We observe two very different situations. In the Lawyers and CoExp datasets there is a large difference between #lmeSD and #lme, while in the other datasets the differences are considerable but not as strongly expressed. Large differences typically occur when items have strong dependencies, hence leading to a large reduction of the search space when applying a closure operator. For instance, in the Lawyers dataset vertices are described by various numeric attributes. In our representation, a single numeric attribute x leads to a set of x≤si and of x>si items with various thresholds si. This allows including interval constraints such as x∈]sj,sk] within patterns. However, there are then several equivalent patterns in which the same interval is represented in various ways. For instance, consider 4 thresholds s1,…,s4: the interval x∈]s2,s3] is represented by {x>s2, x≤s3}, by {x>s1, x>s2, x≤s3}, and by {x>s1, x>s2, x≤s3, x≤s4}. The latter is the only one found in a closed pattern. COMODO then has to generate many equivalent patterns, while MinerLC, which applies a closure operator at each specialization step, never generates two equivalent patterns, thus reducing the exploration of the pattern space effectively. In the DBLP.P dataset, on the contrary, the items are tags, with no taxonomic order relating them. Therefore, the values of #lme and #lmeSD are much closer, and even identical regarding the DBLP.C dataset. k-core sizes of the various networks Before considering how reducing support sets to k-cores affects the number of closed patterns in each dataset, we consider the various networks and compute their k-core sizes for a range of values of k. This pre-analysis aims to evaluate which level of k we should use in our experiments. 
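Such a pre-analysis only requires the core number of each vertex; a minimal sketch (shown here on a built-in networkx example graph rather than on our datasets) is:

```python
# Sketch: k-core sizes for a range of k, derived from the vertex core numbers.
import networkx as nx

G = nx.karate_club_graph()
core_number = nx.core_number(G)   # for each vertex, the largest k such that it lies in the k-core
for k in range(1, max(core_number.values()) + 1):
    size = sum(1 for c in core_number.values() if c >= k)
    print(f"{k}-core size: {size} vertices")
```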
For small datasets, for which computing closed patterns does not require many resources, this pre-analysis is not that important. However, for large datasets with many attributes, i.e., potentially large numbers of closed patterns, it is much better to have a rough guideline for selecting appropriate parameters for optimizing the computational effort. In Fig. 2 we display the k-core sizes for a range of values of k, for each dataset. As we will see below, the small but dense networks for which local-modularity-based pruning has a weak efficiency, namely CoExp and Lawyers, also exhibit a (relatively) slow decay with respect to increasing k values, whereas for the other (larger) datasets we observe a quite considerable decrease in terms of the k-core sizes. k-core sizes of the networks associated to our datasets versus k Modularity Distributions As a prerequisite for the further analysis of the local modularity optimistic estimate, we aimed to get a more detailed insight into the distribution, similar to our pre-analysis for the k-cores discussed above. Figures 3-4 show the detailed results. The plots indicate the "meaningful" values for estimating the local modularity thresholds, which support our selections of parameters in the subsequent evaluations. Furthermore, Fig. 3 also indicates the pruning potential of the local modularity threshold, even using our rather approximate sampling strategy. Detailed Estimated/Observed Modularity Distributions: We consider the unlabeled graph of the dataset. We generate 100 random subgraphs of the unlabelled graph, picking randomly half of the vertices. For each random graph, we compute the local modularity of the abstract 5-core subgraph and we report the survival distribution of the local modularity over the 100 experiments (in orange), i.e., for each local modularity (lm) level, the probability of having at least that level in our sample. In blue, we report the (empirically observed) survival distribution of the local modularity, i.e., the respective MODL values of the core subgraphs of the abstract patterns discovered using the 5-core abstraction Distribution of the local modularity of the 5-core abstraction in samples of 100 unlabelled random subgraphs having half of the size (number of vertices) of the original graph Pruning: Efficiency of the Local Modularity Estimate For investigating the efficiency of pruning using the modularity estimate, we compare our proposed algorithm MinerLSD to the MinerLC algorithm, which applies no optimistic estimate pruning. For the other baseline, i.e., COMODO, we already investigated the efficiency of MinerLSD, which showed a considerable reduction in the number of considered patterns, cf., section "Impact of Closed Patterns in Reducing the Search Space". Regarding the number of output patterns, both actually yield the same numbers if a postprocessing step of COMODO is applied for keeping only the subset of closed patterns (as discussed in section "Similarities and Differences in Pattern Selection"), i.e., by considering all pairs (c,e) with the same (vertex) subgroup e and only keeping the most specific ones. With this postprocessing, COMODO returns exactly the same patterns as those output by MinerLSD in our experiments. However, this approach is quite inefficient, cf., section "Impact of Closed Patterns in Reducing the Search Space", since the number of considered patterns is typically considerably larger for COMODO compared to MinerLSD. 
Regarding the modularity estimate, we first investigate how the local modularity constraint affects the number of output pairs. In general, as oe(MODL) is an optimistic estimator, we may consider the best possible optimistic estimator which would only develop the #nec nodes that have at least a descendant (c,e) with local modularity MODL(e)≥lm. We have then #lm≤#nec≤#lme. Whenever #lm is far from #nec this means that there does not exist any good optimistic estimator. Whenever #lm is close to #nec which in turn is far from #lme this means that there could be some optimistic estimator that is much better than oe(MODL). By computing these numbers, we can then state separately for each dataset whether the oe(MODL) estimate is efficient in pruning the search with respect to the best possible estimator nec and whether nec would be efficient in pruning the search, if such an estimator would be found. Small Datasets In a first step, we first considered several rather small datasets using no minimal support parameters, and a 1-core abstraction in MinerLSD aiming to provide a comparable setting for COMODO. We also checked the number of patterns retrieved by COMODO with additional postprocessing as discussed above - only keeping the closed patterns. We used parameters that do not limit the enumeration in COMODO, i.e., for an exhaustive search only using the local modularity threshold for pruning. Likewise, for MinerLSD, we select and count vertex subgroups whose induced subgraphs satisfy a local modularity threshold lm. In this way, we could confirm (again) that the final number of output patterns is the same for both algorithms, as discussed above. Figure 5 depicts the results of the applied five datasets, with the detailed results in Table 3. Overall, the local modularity estimate is efficient in pruning the pattern exploration, on different levels. For instance, in the Lawyers dataset, MinerLSD finds #c=3221 patterns at level lm=0.005 and most of them, i.e., 2929, have an oe(MODL) value above 0.005, not too far from the #nec=1792 patterns any top-down search would have to develop anyway to select the 1238 patterns with local modularity MODL above 0.005. There is then a slow decrease of #lme while the decrease of #nec and #lm is much faster. Yet, pruning does still work, reducing the search effort considerably. Numbers of patterns with #lme, #nec and #lm values (on the Y-axis), above the local modularity threshold (on the X-axis) for 5 attributed networks, using a 1-core abstraction Table 3 Number of patterns total, developed, necessary and with required local modularity (according to the respective threshold 0.005... 0.15) using a 1-core abstraction In contrast, for the larger datasets, e.g., for DBLP.P among the #c=2396 patterns only 34 have a local modularity estimate above 0.005, 29 of them have to be developed and 28 do have a local modularity above 0.005. Furthermore, in the DBLP.C dataset among the #c=14820 patterns only 179 have a local modularity estimate above 0.005, 145 of them have to be developed and 144 do have a local modularity above 0.005. When the local modularity threshold increases, #lme keeps being close to #lm. Overall, the Lawyers dataset displays moderate pruning efficiency, still allowing to avoid to develop many nodes, and this is also the case for the S50 and CoExp datasets. In contrast, DBLP.C and DBLP.P indicate a very efficient optimistic pruning in terms of the numbers of patterns. 
Tables 4 and 5 show the runtime results of MinerLSD (in seconds) for the larger of the small datasets: Lawyers and CoExp (Table 4) as well as DBLP.C and DBLP.P (Table 5). Here, we observe that MinerLSD is either in the same range or slightly faster than MinerLC for the small datasets, i.e., for Lawyers and CoExp. For DBLP.C, we observe a strongly reduced number of patterns, while the runtimes are always in the same range, especially for stronger (graph-)constraints. Here, we considered k-cores with k=1,2,3,5,7. Therefore, while strongly reducing the number of patterns, the additional computation for the estimate still keeps the runtime of the algorithm in the same range as MinerLC most of the time. Table 4 MinerLSD #lm, #lme and execution time - small datasets, compared to #c of MinerLC for same core constraints Table 5 MinerLSD #lm, #lme and execution time - DBLP.C and DBLP.P, compared to #c of MinerLC for same core constraints In contrast to the other smaller datasets, for the larger DBLP.P dataset we observe an increase in the runtime of MinerLSD compared to MinerLC. However, this can be explained by a special characteristic of DBLP.P: the dataset contains an extremely limited number of labels (32). Here, the extra effort of the estimation does not help much in decreasing the runtime, because the enumeration in the label space is extremely fast, and hence the check of the patterns is mainly determined by the core abstraction. Medium Size Datasets Overall, MinerLSD detects closed patterns with the benefit of pruning using the oe(MODL)≥lm condition, i.e., only developing the #lme nodes according to Table 2. Furthermore, applying both the k-core and local modularity constraints makes it possible to find some balance between the k-core and the local modularity constraint to apply when facing large datasets that are difficult to mine. This is investigated on the two datasets LastFM and Delicious, i.e., those with the largest number of closed core patterns when considering the 1-core and no local modularity threshold – these were not investigated in Tables 2 and 3, respectively. For these medium sized datasets, we performed experiments using 1-cores, 2-cores, 3-cores, 5-cores and 7-cores with local modularity thresholds 0.01, 0.02, 0.03, 0.04, 0.05, and 0.15; the results regarding the number of closed patterns and the total CPU time (including pruning/optimistic estimation) are shown in Fig. 6 (runtimes in seconds). Number of patterns and execution time of MinerLSD on the DBLP.C, DBLP.P, Delicious and LastFM datasets with 3-cores, 5-cores and 7-cores and local modularity thresholds ranging from 0.01 to 0.15. The Y-axis of the topmost figure represents the number of closed patterns output by MinerLSD while the bottom figure displays the CPU time. Both Y-axes are displayed using a logarithmic scale The benefit of applying local modularity constraints in terms of the resulting number of closed patterns is, as expected, quite impressive. When no constraint (outside the 1-core) is applied, MinerLC in comparison finds 1,555,292 and 11,833,577 closed patterns, respectively. For MinerLSD, in the LastFM case there are no strong differences when using 1-cores, 2-cores and 3-cores, while we know from Fig. 2 that using 4-cores does have an important effect. Corresponding results are also observed for larger sizes of the respective k-cores. Regarding the Delicious dataset, we observe a smaller number of patterns at local modularity levels 0.04 and 0.05 with 1-cores than with 2- and 3-cores. 
When no local modularity constraint is applied, the closed patterns with 2- and 3-cores are a subset of the closed patterns with 1-cores; therefore, the results seem counterintuitive at first. However, for the same pattern, the 3-core subgraph is smaller than the 1-core subgraph and may have a better local modularity, which happens in the Delicious case. Regarding the CPU times, we observe a considerable decrease using appropriate local modularity thresholds for both LastFM and Delicious, which is especially important for weaker (graph-)constraints, i.e., with respect to the applied k-cores. Using appropriate modularity thresholds, the runtime can be decreased considerably, which enables new approaches already for medium sized datasets, e.g., concerning pattern exploration. Specifically, if we consider the extra computation performed by MinerLSD for computing the estimate, in the Delicious case the benefit is immediately obvious: MinerLSD is always much faster than MinerLC. The LastFM dataset shows a somewhat different picture: with weaker core-constraints and at a local modularity level of 0.01, MinerLC (which does not consider local modularity) is (slightly) faster than MinerLSD. This is not that surprising, since MinerLSD has to compute local modularity estimates and local modularities for all the developed patterns during search. However, first, this happens only for weak constraints, and second, when using MinerLC all these computations (in fact many more, as there is no pruning) would have to be made anyway in a post-processing fashion for obtaining the patterns according to a local modularity threshold. Furthermore, the runtime behavior of LastFM here is similar to DBLP.P and can also be explained by the smaller number of labels compared to Delicious. Overall, this shows that if we consider appropriate local modularity thresholds, MinerLSD already allows the analysis of larger datasets, especially in terms of larger label sets, while comparable results (with respect to MinerLC) are usually obtained for weak (graph-)constraints. However, the efficient pruning of MinerLSD is important, e.g., for exploration, and also for the processing of larger datasets, as we will also discuss in the next section for large datasets. Detailed results are presented in Table 6, which also displays the #lme numbers. Table 6 MinerLSD #lm, #lme and execution time compared to #c of MinerLC for same core constraints Large Datasets In this section, we present experiments of MinerLSD on two large datasets, namely DBLP.S and DBLP.XL (see Table 1 for their characteristics), to further explore the scalability of MinerLSD when using both k-core and local modularity constraints. Again, we do not use any threshold on the pattern supports. In Table 7, we report the results on DBLP.S and DBLP.XL with the same local modularity thresholds as in the previous section, applying k-core constraints with k=1,2,3,5,7. The scalability of MinerLSD obviously depends on the size and density of the network, but it also heavily depends on the size of the attribute set and on the average number of labels per vertex. DBLP.XL is then a real challenge as it is a large network made of 929,937 vertices related by 3,461,697 edges and described by more than 90,000 items, with an average number of 10.16 labels per vertex. The efficiency of the optimistic pruning is then of primary importance. 
As can be seen in the results table, optimistic estimate pruning using local modularity is quite effective in achieving an efficient pattern mining approach. For both datasets, we observe large reductions in the number of patterns, while focussing on the interesting ones according to the applied local modularity interestingness measure and the utilized local modularity thresholds. In particular, the results for DBLP.S indicate the enormous pruning efficiency - here the dataset for weaker constraints cannot be handled by MinerLC at all, where the computation did not terminate after 36 h. The DBLP.XL results indicate the same trend. Overall, this indicates the huge impact of optimistic estimate pruning using local modularity as provided by MinerLSD for handling large datasets. Structural Pattern Set Analysis In the following, we analyze the results of the proposed pattern mining method MinerLSD in more detail, focussing on different graph statistics. We report exemplary results on three datasets with different characteristics as outlined in section "Datasets", i.e., the Lawyers, the CoExp, and the DBLP.C datasets. We consider all patterns above a given local modularity threshold, combined with different core abstractions. For computing the graph statistics, we analyze the respective induced subgraph W of each pattern, and consider the following: (1) the vertex count NW, (2) the edge count EW, (3) the scaled density (cf., (Lancichinetti et al. 2010)) of subgraph W, i.e., the ratio of EW divided by the number of edges of a complete graph with the same number of vertices as W and multiplied (scaled) by the total number of vertices; this measure approximately estimates the average degree of the nodes contained in the community, cf., (Lancichinetti et al. 2010). (4) Furthermore, we also consider the fraction of outgoing edges, i.e., the edges connecting nodes contained in the pattern with others not being part of the pattern subgraph, to the set of edges EW. The results are shown in Tables 8, 9, 10 and 11. Table 8 Vertex Counts: Mean and standard deviation (in brackets) of the number of vertices of the pattern support, i.e., of the induced pattern subgraphs, for different values of k and the local modularity threshold lm Table 9 Edge Counts: Mean and standard deviation (in brackets) of the number of edges of the pattern support, i.e., of the induced pattern subgraphs, for different values of k and the local modularity threshold lm Table 10 Scaled Graph Densities: Mean and standard deviation (in brackets) of the scaled densities of the pattern support, i.e., of the induced pattern subgraphs, for different values of k and the local modularity threshold lm Table 11 Ratio of outgoing edges to edges in the pattern subgraph (in-edges): Mean and standard deviation (in brackets) of that ratio of the pattern support, i.e., of the induced pattern subgraphs, for different values of k and the local modularity threshold lm Considering the results shown in Tables 8 and 9 we observe that, as expected, increasing numbers of k tend to focus on larger communities, which is especially the case for weaker core constraints and larger local modularity thresholds. In particular, we observe those trends for the local modularity for the Lawyers and the DBLP.C datasets, while this is also pronounced for CoExp regarding stronger constraints. For the DBLP.C network, in particular, we observe a rather strong effect. Overall, with no constraints quite small patterns are detected. 
When the k-core constraint and the local modularity threshold are increased, then larger patterns are detected which are also considerably denser than those with no constraints. This can clearly be observed in Table 10 for increasing k-core and local modularity threshold values. Furthermore, when we consider the ratio of outgoing edges vs. in-edges of a pattern shown in Table 11, then we also observe the trend that the proposed approach focuses on selecting denser pattern subgraphs with a stronger connectivity structure in terms of the links within the subgraph, i.e., the in-edges. This is especially obvious for higher k-core and local modularity threshold values, as exemplified by the CoExp and DBLP.C datasets, e.g., for k=5 and lm=0.04 where the number of in-edges strongly "dominates" the number of outgoing edges. Pattern Selection and K-Core Abstraction In this section, we provide examples of patterns demonstrating the benefits of pattern selection using local modularity and k-core abstraction. In particular, we discuss illustrative examples from two different datasets – the Lawyers and the (larger) DBLP.C dataset. Lawyers Dataset In order to demonstrate the effectiveness of the pattern exploration and selection methodology using abstract closed pattern with k-cores and local modularity, we exemplify that with the two patterns shown in Fig. 7. Here, we show two similar patterns in terms of Jaccard similarity (0.52) considering the nodes of the respective pattern-induced subgraphs. While the patterns are very similar regarding the overlap and their size, they have quite different local modularity values referring to their connectivity structure. The left pattern described by 35<Age≤65 AND Seniority<5 AND Status=Partner, with a size=24 of the set of nodes in its subgraph, is considerably denser with a local modularity of MODL=0.058, compared to the pattern on the right; the latter is described by Age<40 AND Seniority≤30, with a size=23 of the pattern support and a local modularity of only MODL=0.013. Therefore, while both patterns are abstract closed patterns according to similar support criteria and the 5-core abstraction, a higher modularity threshold, e.g., MODL≥0.05 would only select the first (left pattern in Fig. 7) instead of the right pattern. From the description, we can also observe that the selected (left) pattern is more interesting, since it provides a more precise description. In the figures, we depict in red the edges and the vertices in the pattern subgraph; in gray, we show the out-edges of the pattern (i.e., one vertex of a gray edge is contained in the pattern extension and the other vertex is not); in light gray we depict the rest of the graph. Example patterns from the Lawyers dataset: Both patterns are similar 5-cores, with a Jaccard similarity considering the nodes of the respective pattern-induced subgraphs of 0.52. The pattern on the left (described by 35<Age≤65 AND Seniority<5 AND Status=Partner, with size=24) is considerably denser with a local modularity of MODL=0.058, compared to the pattern on the right (described by Age<40 AND Seniority≤30, with size=23) which only has a local modularity of MODL=0.013. 
In the figures, we depict in red the edges and the vertices in the pattern extension, in gray the out-edges of the pattern (i.e., one vertex of a gray edge is contained in the pattern subgraph and the other vertex is not) and in light gray the rest of the graph DBLP.C Dataset In order to show the impact of pattern selection and k-core abstraction, we first consider the local Modularities on k-cores with increasing k. For analyzing the impact of the k-cores we firstly consider the empty pattern, thus only focussing on the abstraction by the applied k-core. For the local modularity values of the empty pattern, for k=2,3,4,5 we observe MODL=0.0075,0.0430,0.0915,0.1223, respectively. Thus, we observe the clear trend that increasing k yields patterns with higher connectivity structures as shown by the increasing local modularity values; similar trends are obtained for the other datasets. This complements our results in the last section, where we discussed, how increasing k for the k-core abstraction together with increasing local modularity thresholds focuses on larger and more "interesting" patterns as measured by the local modularity quality function. Figure 8 illustrates these findings: The two left graphs show examples of the k-cores for the empty pattern, specifically, for the 5-core with the highest local modularity, and the corresponding 3-core pattern. Areas in red indicate the core graph – both vertices and edges, blue color shows the remaining edges incident to the nodes of the core graph, while gray depicts the edges of the rest of the graph. It is easy to see that both the 3-core (2223 vertices and 9399 edges) as well as the 5-core (904 vertices and 5621 edges) demonstrate a considerably strong connectivity structure. Finally, the graph plotted on the right of Fig. 8 shows a specialization of the empty pattern on the 3-core, i.e. the pattern given by the label "mine". This pattern is obviously smaller (covering 290 vertices and 1059 edges) than the empty pattern, while its modularity structure is slightly better (MODL=0.0503). The left plot in Fig. 9 shows the "mine" pattern in detail, as a "zoom-in" focussing on all edges incident to nodes contained in the pattern subgraph. Illustrative patterns (DBLP.C). Left: 5-core empty pattern with a local modularity of MODL=0.1223; middle: 3-core empty pattern with a local modularity of MODL=0.0430; right: 3-core "mine" pattern with a local modularity MODL=0.0503. In the plots, red color indicates the core graph (i.e., the in-edges of the pattern), blue color shows the edges incident to the nodes of the core graph, gray depicts the edges of the rest of the graph Detailed view:"mine" pattern (left), with local modularity MODL=0.0503 vs. the lower-quality "algorithm" pattern (right), MODL=0.0072. In-edges (red), out-edges (blue) Figure 9 illustrates the selection process for different 3-core patterns in detail, providing the "mine" pattern (covering 290 vertices and 1059 edges, MODL=0.0503) that is selected according to a local modularity threshold lm=0.04 and the "algorithm" pattern (covering 45 vertices and 93 edges, MODL=0.0072) which is a further specialization of the 3-core empty pattern. As we can clearly observe for the "mine" pattern, its structure is more interesting concerning its connectivity – i.e., its distributional unusualness compared to the expectation modeled by the null-model. 
This is a representative illustration of how the proposed approach using local modularity pruning achieves a better pattern selection for the same core constraint(s). Conclusions In this paper, we have proposed the novel MinerLSD method for efficient local pattern mining on attributed networks. It enumerates local patterns and associated subgroups in attributed networks, utilizing different pattern and graph mining techniques. In particular, MinerLSD is based on three main ideas: First, enumerating only closed patterns, which is particularly beneficial whenever items have dependencies. This occurs as soon as some attributes, either numeric or hierarchical, have to be translated into various items to express interesting patterns, e.g., interrelated intervals and hierarchical dependencies. Second, we focus on reducing pattern subgraphs to core subgraphs, which allows us both to strongly reduce the number of patterns and to focus on essential parts of the graphs. Third, we select cohesive subgraphs during the search according to topological quantities such as the local modularity and, above all, we allow pruning by using optimistic estimates of the local modularity measure. We performed a set of experiments in order to estimate the impact of the investigated approaches, for which we included two baseline methods, i.e., MinerLC and COMODO, for comparison. The purpose was then to investigate i) the pruning efficiency of MinerLSD using the local modularity estimate as implemented in COMODO, ii) the impact of searching for closed patterns (as implemented in MinerLC) and therefore enumerating only the cohesive subgraphs associated with the patterns, and iii) the added potential for pattern selection based on the combination of both k-core abstraction and local modularity selection. The latter allows us to strongly reduce the number of patterns while focussing on essential parts of the graph, which leads to more interesting, high-quality patterns. For our experiments we used a number of datasets with different characteristics, ranging from small to large, in order to estimate the scalability of MinerLSD. Overall, the results indicated effects that were always positive, and sometimes even crucial, for handling even rather complex and large datasets with reasonable pattern set sizes and computational effort – without using any minimum support threshold. Specifically, the results of our experiments show the efficiency of the presented method. Furthermore, we have presented exemplary results showing the benefit of pattern selection and abstraction, which demonstrate the efficacy of the proposed MinerLSD approach. Overall, by implementing the different ideas and techniques summarized above in the novel MinerLSD method (i.e., utilizing closed patterns, graph abstractions, and optimistic estimate pruning using local modularity), we obtain a very flexible tool that allows us to handle large graphs with adequate constraints on the subgroups and patterns to discover. For future work, we intend to characterize the attributed graphs in terms of which pruning method is especially efficient, and to investigate other measures than local modularity in order to estimate their pruning efficiency. Furthermore, we aim to investigate other core definitions than k-cores as well. Also, focussing on sets of (local) patterns, and their relations, in order to obtain, e.g., the most diverse, representative, interesting, and relevant results, cf., (Knobbe and Ho 2006; Lemmerich et al. 2010; Van Leeuwen and Knobbe 2012; Atzmueller et al. 
2015) is a further interesting research direction to consider. Datasets and the implementation of MinerLSD can be found at the following website: https://lipn.univ-paris13.fr/MinerLC/ http://www.stats.ox.ac.uk/~snijders/siena/s50_data.htm https://www.stats.ox.ac.uk/~snijders/siena/Lazega_lawyers_data.htm https://grouplens.org/datasets/hetrec-2011/ https://lipn.univ-paris13.fr/MinerLC/ http://www.vikamine.org(Atzmueller and Lemmerich 2012) Agrawal, R, Srikant R (1994) Fast Algorithms for Mining Association Rules In: Proc. VLDB, 487–499.. Morgan Kaufmann. Almendral, JA, Oliveira J, López L, Mendes J, Sanjuán MA (2007) The Network of Scientific Collaborations within the European Framework Programme. Phys A: Stat Mech Appl 384(2):675–683. Atzmueller, M (2014) Data Mining on Social Interaction Networks. JDMDH 1. Atzmueller, M (2015) Subgroup Discovery. WIREs DMKD 5(1):35–49. Atzmueller, M (2017) Onto Explicative Data Mining: Exploratory, Interpretable and Explainable Analysis In: Proc. Dutch-Belgian Database Day.. TU Eindhoven, NL. Atzmueller, M (2018) Compositional Subgroup Discovery on Attributed Social Interaction Networks In: Proc. International Conference on Discovery Science.. Springer, Berlin/Heidelberg. Atzmueller, M (2019) Onto Model-based Anomalous Link Pattern Mining on Feature-Rich Social Interaction Networks In: Proc. WWW 2019 (Companion).. IW3C2 / ACM. Atzmueller, M, Doerfel S, Mitzlaff F (2016) Description-Oriented Community Detection using Exhaustive Subgroup Discovery. Inf Sci 329:965–984. Atzmueller, M, Lemmerich F (2009) Fast Subgroup Discovery for Continuous Target Concepts In: Proc. 18th International Symposium on Methodologies for Intelligent Systems (ISMIS 2009), LNCS, 1–15.. Springer, Berlin/Heidelberg. Atzmueller, M, Lemmerich F (2012) VIKAMINE - Open-Source Subgroup Discovery, Pattern Mining, and Analytics In: Proc. ECML/PKDD.. Springer, Berlin/Heidelberg. Atzmueller, M, Mitzlaff F (2010) Towards Mining Descriptive Community Patterns In: Workshop on Mining Patterns and Subgroups.. Lorentz Center, Leiden. Atzmueller, M, Mitzlaff F (2011) Efficient Descriptive Community Mining In: Proc. FLAIRS, 459–464.. AAAI Press. Atzmueller, M, Mueller J, Becker M (2015) Mining, Modeling and Recommending 'Things? in Social Media, chap. Exploratory Subgroup Analytics on Ubiquitous Data. No. 8940 in LNAI.. Springer, Berlin/Heidelberg. Atzmueller, M, Puppe F (2006) SD-Map - A Fast Algorithm for Exhaustive Subgroup Discovery In: Proc. PKDD, 6–17.. Springer, Berlin/Heidelberg. Atzmueller, M, Soldano H, Santini G, Bouthinon D (2018) MinerLSD: Efficient Local Pattern Mining on Attributed Graphs In: Proc. 2018 IEEE International Conference on Data Mining Workshops (ICDMW).. IEEE Press, Boston. Balasubramanyan, R, Cohen WW (2011) Block-LDA: Jointly modeling entity-annotated text and entity-entity links In: Proc. SDM, 450–461. Batagelj, V, Zaversnik M (2002) Generalized Cores. CoRR cs.DS/0202039. Batagelj, V, Zaversnik M (2011) Fast Algorithms for Determining (Generalized) Core Groups in Social Networks. Adv Data Anal Classif 5(2):129–145. Bechara Prado, A, Plantevit M, Robardet C, Boulicaut JF (2013) Mining Graph Topological Patterns: Finding Co-variations among Vertex Descriptors. IEEE Trans Knowl Data Eng 25(9):2090–2104. Belfin, R, Bródka P, et al. (2018) Overlapping Community Detection using Superior Seed Set Selection in Social Networks. Comput Electr Eng 70:1074–1083. 
Bendimerad, AA, Plantevit M, Robardet C (2016) Unsupervised Exceptional Attributed Subgraph Mining in Urban Data In: Proc. ICDM, 21–30.. IEEE. Boley, M, Horváth T, Poigné A, Wrobel S (2010) Listing Closed Sets of Strongly Accessible Set Systems with Applications to Data Mining. TCS 411(3):691–700. Bothorel, C, Cruz JD, Magnani M, Micenkova B (2015) Clustering Attributed Graphs: Models, Measures and Methods. Netw Sci 3(03):408–444. Cantador, I, Brusilovsky P, Kuflik T (2011) 2nd Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec) In: Proc. RecSys.. ACM, New York. Combe, D, Largeron C, Gėry M, Egyed-Zsigmond E (2015) I-louvain: An attributed graph clustering method In: Proc. IDA. Advances in Intelligent Data Analysis, 181–192.. Springer, Berlin/Heidelberg. Fayyad, UM, Piatetsky-Shapiro G, Smyth P (1996) From Data Mining to Knowledge Discovery: An Overview. In: Fayyad UM, Piatetsky-Shapiro, Smyth P, Uthurusamy R (eds)Advances in Knowledge Discovery and Data Mining, 1–34.. AAAI Press, Palo Alto. Fortunato, S (2010) Community Detection in Graphs. Phys Rep 486(3-5):75–174. Fortunato, S, Castellano C (2007) Encyclopedia of Complexity and System Science, chap. Community Structure in Graphs. Springer, Heidelberg. Freeman, L (1978) Segregation In Social Networks. Sociol Methods Res 6(4):411. Galbrun, E, Gionis A, Tatti N (2014) Overlapping Community Detection in Labeled Graphs. DMKD 28(5-6):1586–1610. Ganter, B, Wille R (1999) Formal Concept Analysis: Mathematical Foundations. Springer Verlag, Heidelberg. Ge, R, Ester M, Gao BJ, Hu Z, Bhattacharya BK, Ben-Moshe B (2008) Joint cluster analysis of attribute data and relationship data: The connected k-center problem, algorithms and applications. TKDD 2(2). Grosskreutz, H, Rüping S, Wrobel S (2008) Tight Optimistic Estimates for Fast Subgroup Discovery In: Proc. ECML/PKDD, LNCS, vol. 5211, 440–456.. Springer, Berlin/Heidelberg. Günnemann, S, Färber I, Boden B, Seidl T (2013) GAMer: A Synthesis of Subspace Clustering and Dense Subgraph Mining. KAIS 40(2):243–278. Han, J, Pei J, Yin Y (2000) Mining Frequent Patterns Without Candidate Generation In: Proc. ACM SIGMOD, 1–12.. ACM Press. Kanawati, R (2014) Seed-Centric Approaches for Community Detection in Complex Networks In: International Conference on Social Computing and Social Media, 197–208.. Springer, Berlin/Heidelberg. Kanawati, R (2014) Yasca: an ensemble-based approach for community detection in complex networks In: International Computing and Combinatorics Conference, 657–666.. Springer, Berlin/Heidelberg. Kaytoue, M, Plantevit M, Zimmermann A, Bendimerad A, Robardet C (2017) Exceptional Contextual Subgraph Mining. Mach Learn:1–41. Kibanov, M, Atzmueller M, Scholz C, Stumme G (2014) Temporal Evolution of Contacts and Communities in Networks of Face-to-Face Human Interactions. Sci China Inf Sci 57(3):1–17. Klösgen, W (1996) Explora: A Multipattern and Multistrategy Discovery Assistant In: Advances in Knowledge Discovery and Data Mining, 249–271.. AAAI Press, Palo Alto. Knobbe, AJ, Cremilleux B, Fu̇rnkranz J, Scholz M (2008) From Local Patterns to Global Models: The LeGo Approach to Data Mining In: From Local Patterns to Global Models: Proceedings of the ECML/PKDD-08 Workshop (LeGo-08), 1–16. Knobbe, AJ, Ho EK (2006) Pattern Teams In: Proc. PKDD, 577–584.. Springer, Berlin/Heidelberg. Kumar, R, Novak J, Tomkins A (2006) Structure and Evolution of Online Social Networks In: Proc. ACM SIGKDD, 611–617.. ACM. 
Lancichinetti, A, Fortunato S, Kertész J (2009) Detecting the Overlapping and Hierarchical Community Structure in Complex Networks. New J Phys 11(3). Lancichinetti, A, Kivelä M, Saramäki J, Fortunato S (2010) Characterizing the community structure of complex networks. PloS One 5(8):e11,976. Lazega, E (2001) The Collegial Phenomenon: The Social Mechanisms of Cooperation Among Peers in a Corporate Law Partnership. Oxford University Press. Lemmerich, F, Atzmueller M, Puppe F (2016) Fast Exhaustive Subgroup Discovery with Numerical Target Concepts. Data Min Knowl Discov 30:711–762. Lemmerich, F, Becker M, Atzmueller M (2012) Generic Pattern Trees for Exhaustive Exceptional Model Mining In: Proc. ECML PKDD, LNCS, vol. 7524, 277–292.. Springer, Berlin/Heidelberg. Lemmerich, F, Rohlfs M, Atzmueller M (2010) Fast Discovery of Relevant Subgroup Patterns In: Proc. FLAIRS, 428–433.. AAAI Press, Palo Alto. Leskovec, J, Lang KJ, Dasgupta A, Mahoney MW (2008) Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters. CoRR. abs/0810.1355. Leskovec, J, Lang KJ, Mahoney M (2010) Empirical Comparison of Algorithms for Network Community Detection In: Proc. WWW, 631–640.. ACM, New York. Mitzlaff, F, Atzmueller M, Benz D, Hotho A, Stumme G (2011) Community Assessment using Evidence Networks In: Analysis of Social Media and Ubiquitous Data, LNAI, vol. 6904.. Springer, Berlin/Heidelberg. Mitzlaff, F, Atzmueller M, Hotho A, Stumme G (2014) The Social Distributional Hypothesis. SNAM 4(216). Mitzlaff, F, Atzmueller M, Stumme G, Hotho A (2013) Semantics of User Interaction in Social Media In: Complex Networks IV, SCI, vol. 476.. Springer, Berlin/Heidelberg. Morik, K (2002) Detecting Interesting Instances. In: Hand D, Adams N, Bolton R (eds)Pattern Detection and Discovery, LNCS, vol. 2447, 13–23.. Springer, Berlin/Heidelberg. Morik, K, Boulicaut J, Siebes A (2005) Local Pattern Detection, International Seminar, Dagstuhl Castle, Germany, April 12-16, 2004, Revised Selected Papers, LNCS, vol. 3539. Springer, Berlin/Heidelberg. Moser, F, Colak R, Rafiey A, Ester M (2009) Mining Cohesive Patterns from Graphs with Feature Vectors In: Proc. SDM, 593–604. Newman, ME, Girvan M (2004) Finding and Evaluating Community Structure in Networks. Phys Rev E Stat Nonlin Soft Matter Phys 69(2):1–15. Newman, MEJ (2003) The Structure and Function of Complex Networks. SIAM Rev 45(2):167–256. Newman, MEJ (2004) Detecting Community Structure in Networks In: EPJ 38. Newman, MEJ (2006) Modularity and Community Structure in Networks. PNAS 103(23):8577–8582. Nicolle, R, Radvanyi F, Elati M (2015) Coregnet: Reconstruction and Integrated Analysis of Co-Regulatory Networks. Bioinformatics. Nicosia, V, Mangioni G, Carchiolo V, Malgeri M (2009) Extending the Definition of Modularity to Directed Graphs with Overlapping Communities. J Stat Mech. Palla, G, Derenyi I, Farkas I, Vicsek T (2005) Uncovering the Overlapping Community Structure of Complex Networks in Nature and Society. Nature 435(7043):814–818. Palla, G, Farkas IJ, Pollner P, Derenyi I, Vicsek T (2007) Directed Network Modules. New J Phys 9(6):186. Pasquier, N, Bastide Y, Taouil R, Lakhal L (1999) Efficient Mining of Association Rules using Closed Itemset Lattices. Inf Syst 24(1):25–46. Peng, C, Kolda TG, Pinar A (2014) Accelerating Community Detection by Using k-core Subgraphs 1403:2226. arXiv preprint arXiv. Pernelle, N, Rousset MC, Soldano H, Ventos V (2002) ZooM: A Nested Galois Lattices-Based System for Conceptual Clustering. 
J Exp Theor Artif Intell 2/3(14):157–187. Pool, S, Bonchi F, van Leeuwen M (2014) Description-driven Community Detection. ACM Trans Intell Syst Technol 5(2). Seidman, SB (1983) Network Structure and Minimum Degree. Soc Netw 5:269–287. Silva, A, Meira Jr. W, Zaki MJ (2012) Mining attribute-structure correlated patterns in large attributed graphs. Proc VLDB Endow 5(5):466–477. Smith, LM, Zhu L, Lerman K, Percus AG (2014) Partitioning networks with node attributes by compressing information flow. CoRR. abs/1405.4332. Soldano, H, Santini G (2014) Graph Abstraction for Closed Pattern Mining in Attributed Networks In: Proc. ECAI, FAIA, vol. 263, 849–854.. IOS Press. Soldano, H, Santini G, Bouthinon D (2015) Local Knowledge Discovery in Attributed Graphs In: Proc. ICTAI, 250–257.. IEEE. Soldano, H, Santini G, Bouthinon D, Lazega E (2017) Hub-Authority Cores and Attributed Directed Network Mining In: Proc. ICTAI.. IEEE, Boston, MA. Steinhaeuser, K, Chawla NV (2008) Community detection in a large real-world social network In: Social computing, behavioral modeling, and prediction, 168–175.. Springer. Uno, T, Asai T, Uchida Y, Arimura H (2004) An Efficient Algorithm for Enumerating Closed Patterns in Transaction Databases In: Proc. Discovery Science, 16–31. Van Leeuwen, M, Knobbe A (2012) Diverse Subgroup Set Discovery. Data Min Knowl Discov 25(2):208–242. Wasserman, S, Faust K (1994) Social Network Analysis: Methods and Applications, 1 edn. No. 8 in Structural analysis in the social sciences. Cambridge University Press. Wille, R (1982) Restructuring Lattice Theory In: Symposium on Ordered Sets, 445–470.. University of Calgary, Boston. Wrobel, S (1997) An Algorithm for Multi-Relational Discovery of Subgroups In: Proc. PKDD, 78–87.. Springer, Berlin/Heidelberg. Xie, J, Kelley S, Szymanski BK (2013) Overlapping Community Detection in Networks: The State-of-the-art and Comparative Study. ACM Comput Surv 45(4):43:1–43:35. Xie, J, Szymanski BK (2013) LabelRank: A Stabilized Label Propagation Algorithm for Community Detection in Networks In: Proc. IEEE Network Science Workshop, West Point. Xu, Z, Ke Y, Wang Y, Cheng H, Cheng J (2012) A model-based approach to attributed graph clustering In: Proc. SIGMOD, 505–516. Yakoubi, Z, Kanawati R (2014) Licod: Leader-driven approaches for community detection. Vietnam J Comput Sci 1(4):241–256. Yang, J, Leskovec J (2012) Defining and Evaluating Network Communities Based on Ground-truth In: Proc. ACM SIGKDD Workshop on Mining Data Semantics, MDS '12, 3:1–3:8.. ACM, New York. Zhou, Y, Cheng H, Yu JX (2009) Graph clustering based on structural/attribute similarities. PVLDB 2(1):718–729. Zhu, L, Ng WK, Cheng J (2011) Structure and attribute index for approximate graph matching in large graphs. Inf Syst 36(6):958–972. Martin Atzmueller was supported in part by Université Sorbonne Paris Cité as a visiting professor.
Tilburg University, Department of Cognitive Science and Artificial Intelligence, Warandelaan 2, Tilburg, 5037 AB, The Netherlands
Martin Atzmueller
Université Sorbonne Paris Cité, Paris, France
LIPN, Université Paris-13, SPC UMR-CNRS 7030, Villetaneuse, 93430, France
Henry Soldano, Guillaume Santini & Dominique Bouthinon
ISYEB UMR 7205, Museum National d'Histoire Naturelle, Paris, France
MA and HS conceived of the idea and study, interpretation of the data and drafted the manuscript. MA and GS ran the experiments. DB and GS implemented MinerLSD and related software. All authors read and approved the final manuscript. This work has been partially supported by the German Research Foundation (DFG) project "MODUS" (under grant AT 88/4-1). Furthermore, the research leading to these results has received funding from the Project Chistera Adalab (ANR-14-CHR2-0001-04). Correspondence to Martin Atzmueller.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Attributed networks Closed pattern mining Network analysis and mining Graph mining Community detection Modeling, Analyzing and Mining Feature-Rich Networks
CommonCrawl
Is density functional theory an ab initio method? The following comment by Wildcat made me think about whether density functional theory (DFT) can be considered an ab initio method. @Martin-マーチン, this is sort of nitpicking, but DFT (where the last "T" comes from "Theory") can be considered as an ab-initio method since the theory itself is built from the first principles. The problem with the theory is that the exact functional is unknown, and as a result, in practice we do DFA calculations ("A" from "Approximation") with some approximate functional. It it DFA which is not an ab-initio method then, not DFT. :) I always thought that ab initio refers to wave function based methods only. In principle the wave function is not necessary for the basis of DFT, but it was later introduced by Kohn and Sham for practical reasons. The IUPAC goldbook offers a definition of ab initio quantum mechanical methods: ab initio quantum mechanical methods Synonym: non-empirical quantum mechanical methods Methods of quantum mechanical calculations independent of any experiment other than the determination of fundamental constants. The methods are based on the use of the full Schroedinger equation to treat all the electrons of a chemical system. In practice, approximations are necessary to restrict the complexity of the electronic wavefunction and to make its calculation possible. According to this, most density functional approximations (DFA) cannot be termed ab initio since almost all involve some empirical parameters and/or fitting. DFT on the other hand is independent of any of this. What I have my problems with is the second sentence. It states, that treatment of all electrons is necessary. This is technically not the case for DFT, because here only the electron density is treated. All electrons and the wavefunction are implicitly treated. An earlier definition of ab initio can be found in Leland C. Allen and Arnold M. Karo, Rev. Mod. Phys., 1960, 32, 275. By ab initio we imply: First, consideration of all the electrons simultaneously. Second, use of the exact nonrelativistic Hamiltonian (with fixed nuclei), $$\mathcal{H} = -\frac12\sum_i{\nabla_i}^2 - \sum_{i,a}\frac{Z_a}{\mathbf{r}_{ia}} + \sum_{i>j}\frac{1}{\mathbf{r}_{ij}} + \sum_{a,b}\frac{Z_aZ_b}{\mathbf{r}_{ab}}$$ the indices $i$, $j$ and $a$, $b$ refer, respectively, to the electrons and to the nuclei with nuclear charges $Z_a$, and $Z_b$. Third, an effort should have been made to evaluate all integrals rigorously. Thus, calculations are omitted in which the Mulliken integral approximations or electrostatic models have been used exclusively. These approximate schemes are valuable for many purposes, but present experience indicates that they are not sufficiently accurate to give consistent results in ab initio work. This definition obviously does not include DFT, but this is probably due to the fact it was published before the Hohenberg-Kohn theorems. But in general this definition is still largely the same as in the goldbook. Another point which confuses me are titles like: "Potential Energy Surfaces of the Gas-Phase SN2 Reactions $\ce{X- + CH3X ~$=$~ XCH3 + X-}$ $\ce{(X ~$=$~ F, Cl, Br, I)}$: A Comparative Study by Density Functional Theory and ab Initio Methods" Liqun Deng , Vicenc Branchadell , Tom Ziegler, J. Am. Chem. Soc., 1994, 116 (23), 10645–10656. And then again we have titles like: "Ab Initio Density Functional Theory Study of the Structure and Vibrational Spectra of Cyclohexanone and its Isotopomers" F. J. Devlin and P. J. Stephens, J. 
Phys. Chem. A, 1999, 103 (4), 527–538. Unfortunately Koch and Holthausen, who wrote the probably most concise book on DFT, A Chemist's Guide to Density Functional Theory, never really refer to DFT as ab initio or clearly draw the line. The closest they come is on page 18: In the context of traditional wave function based ab initio quantum chemistry a large variety of computational schemes to deal with the electron correlation problem has been devised during the years. Since we will meet some of these techniques in our forthcoming discussion on the applicability of density functional theory as compared to these conventional techniques, we now briefly mention (but do not explain) the most popular ones. But that does not really answer my question. Throughout the book they use the term only in the form of conventional ab initio theory or in combination of explicitly stating wave function and variations thereof. In my quite extensive research about DFT selection criteria I never came about the term 'ab initio DFT'. So the question remains: theoretical-chemistry density-functional-theory ab-initio Martin - マーチン♦Martin - マーチン First note that the acronym DFA I used in my comment originates from Axel D. Becke paper on 50 year anniversary of DFT in chemistry: Let us introduce the acronym DFA at this point for "density-functional approximation." If you attend DFT meetings, you will know that Mel Levy often needs to remind us that DFT is exact. The failures we report at meetings and in papers are not failures of DFT, but failures of DFAs. Axel D. Becke, J. Chem. Phys., 2014, 140, 18A301. So, there are in fact two questions which must be addressed: "Is DFT ab initio?" and "Is DFA ab initio?" And in both cases the answer depend on the actual way ab initio is defined. If by ab initio one means a wave function based method that do not make any further approximations than HF and do not use any empirically fitted parameters, then clearly neither DFT nor DFA are ab initio methods since there is no wave function out there. But if by ab initio one means a method developed "from first principles", i.e. on the basis of a physical theory only without any additional input, then DFT is ab initio; DFA might or might not be ab initio (depending on the actual functional used). Note that the usual scientific meaning of ab initio is in fact the second one; it just happened historically that in quantum chemistry the term ab initio was originally attached exclusively to Hartree–Fock based (i.e. wave function based) methods and then stuck with them. But the main point was to distinguish methods that are based solely on theory (termed "ab initio") and those that uses some empirically fitted parameters to simplify the treatment (termed "semi-empirical"). But this distinction was done before DFT even appeared. So, the demarcation line between ab initio and not ab initio was drawn before DFT entered the scene, so that non-wave-function-based methods were not even considered. Consequently, there is no sense to question "Is DFT/DFA ab initio?" with this definition of ab initio historically limited to wave-function-based methods only. Today I think it is better to use the term ab initio in quantum chemistry in its more usual and more general scientific sense rather then continue to give it some special meaning which it happens to have just for historical reasons. 
And if we stick to the second definition of ab initio then, as I already said, DFT is ab initio since nothing is used to formulate it except for the same physical theory used to formulate HF and post-HF methods (quantum mechanics). DFT is developed from the quantum mechanical description without any additional input: basically, DFT just reformulates the conventional quantum mechanical wave function description of a many-electron system in terms of the electron density. But the situation with DFA is indeed a bit more involved. From the same viewpoint a DFA method with a functional which uses some experimental data in its construction is not ab initio. So, yes, DFA with B3LYP would not qualify as ab initio, since its parameters were fitted to a set of some experimentally measure quantities. However, a DFA method with a functional which does not involve any experimental data (except the values of fundamental constants) can be considered as ab initio method. Say, a DFA using some LDA functional constructed from a homogeneous electron gas model, is ab initio. It is by no means an exact method since it is based on a physically very crude approximation, but so does HF from the family of the wave function based methods. And if the later is considered to be ab initio despite the crudeness of the underlying approximation, why can't the former be also considered ab initio? WildcatWildcat $\begingroup$ I read and referred to that paper, too. I love that quote. I agree with you. (I still leave the accepting part for next week, to encourage more people to vote.) Would you say that the IUPAC definition should be updated, i.e. include the electron density explicitly in the last sentence? $\endgroup$ – Martin - マーチン♦ Jul 9 '15 at 12:13 $\begingroup$ @Martin-マーチン, it depends on how do you interpret this second sentence. In fact, I do not see any problem here with respect to DFT being ab initio method in accordance with this definition. Do we use the Schrödinger equation to treat the electrons? Yes, we do; rather indirectly, but we use it. We don't solve the Schrödinger equation, but we use it a starting point in the development of DFT: the key constituent, electron density, is defined in terms of the solution of the Schrödinger equation. $\endgroup$ – Wildcat Jul 9 '15 at 12:26 $\begingroup$ @Martin-マーチン, now, for the second part of this sentence that insists on treating "all the electrons", I again see no problem with DFT. We indeed treat all the electrons. Yes, we do so in a tricky way: throughout all the process we use only one-electron density without constructing any many-electron entity that describes our many-electron system as a whole (not like in HF where we construct the many-electron wave function out of one-electron function). But we do so because we already proved that many-electron systems can be treated in such a way: one-electron density is enough. $\endgroup$ – Wildcat Jul 9 '15 at 12:30 $\begingroup$ Yes, I had no problem with the second one, since HK clarifies that quite nicely. I was referring to the last sentence with limitations, considering that for example B3LYP is not ab initio. $\endgroup$ – Martin - マーチン♦ Jul 9 '15 at 12:37 $\begingroup$ @Martin-マーチン, got it. So, let us say that DFT is ab initio. Now the situation with DFA is indeed a bit more involved. From the same viewpoint a DFA method with a functional which uses some experimental data in its construction is not ab initio. 
So, yes, DFA with B3LYP would not qualify as ab initio, since its parameters were fitted to a set of some experimentally measured quantities. $\endgroup$ – Wildcat Jul 9 '15 at 13:00 The convention used by many is that ab initio refers solely to wave-function based methods of various sorts and that first principles refers to either wave-function or DFT methods with little approximation. I can't find a citation at the moment, but I know this convention is fairly widely used in, e.g., J. Phys. Chem. journals. The IUPAC gold book doesn't have "first principles," but Google Scholar gives over 224,000 hits for "first principles DFT". Geoff Hutchison $\begingroup$ In physics, "from first principles" is a synonym for "ab initio"; just an English equivalent of the Latin phrase. In quantum chemistry we habitually avoid using the term "ab initio" with DFT since it can still potentially cause some needless terminological battles due to the historical strict meaning of the term. $\endgroup$ – Wildcat Jul 9 '15 at 15:41 $\begingroup$ But you're perfectly right, of course. In QC the convention is to avoid calling DFT methods "ab initio" and exclusively use the term "from first principles" for them, while both terms can be used for wf-based methods. $\endgroup$ – Wildcat Jul 9 '15 at 15:59
CommonCrawl
Physical applications of higher terms of Taylor series Depressingly many of the physical "applications" of Taylor series that I can find in textbooks and online are actually just applications of linear approximation, since they only take the constant and linear term of a Taylor series. For instance, this how you get the Newtonian limit of the kinetic energy of special relativity, the $1/r^3$ behavior of an electric dipole far away, and the standard approximation to the period of a pendulum. Since the whole point of a Taylor series is that we can go on from the linear approximation to get quadratic and higher approximations, I find this unsatisfying. So what are some good physical applications of the higher order terms in a Taylor expansion? (I'm looking for something more than just "we can get a better approximation than the linear one by including more terms" — ideally, a situation where the higher terms give some new qualitative insight.) calculus concept-motivation physical-sciences András Bátkai Mike ShulmanMike Shulman $\begingroup$ I am not sure what you want. If you regard Taylor series as approximation, then approximations are the only application you can get. If you want physical applications of higher derivatives, then this exists of course, but is not really about Taylor expansion per se. $\endgroup$ – user11235 May 5 '14 at 7:03 $\begingroup$ There are all sorts of uses for the fact that a function is linear to first order with a certain slope, above and beyond compting numerical values of the function. I'm looking for something similar for the higher terms. $\endgroup$ – Mike Shulman May 5 '14 at 13:59 $\begingroup$ For a suggestion, see my answer to the Math StackExchange question What are power series used for? (a reference request). Specifically, I suspect this is one of those things you'll probably have to try my "personal books and library books research collected into a folder" idea for. Earlier in my learning and teaching of math I encountered many situations like you're in, but worse as it was before the internet and I was often in very isolated rural areas, so I wound up making my own lists. $\endgroup$ – Dave L Renfro May 5 '14 at 14:40 $\begingroup$ Seeing the title of this question, I feel compelled to share the following (possibly apocryphal) anecdote: math.stackexchange.com/a/28899/37122 $\endgroup$ – Benjamin Dickman May 5 '14 at 17:37 $\begingroup$ @DaveLRenfro I am in the process of making my list. I already looked at all the calculus books on my shelf and didn't find anything. Unfortunately none of the questions at your link is what I'm looking for, either. $\endgroup$ – Mike Shulman May 5 '14 at 22:19 Here's one that I just thought of, by modifying a problem in the ODEs section of my textbook. Question: In the presence of air resistance, does a thrown ball take longer to go up or to come down? We assume that the force of air resistance is proportional to velocity, with constant of proportionality $p$. Thus, we have $$ F = m a = - p v - m g. $$ Since $a=v'$, this is a differential equation for $v$. For simplicity, let's divide out by $m$ and write $q=p/m$, so it becomes $$ v' = - q v - g. $$ The differential equation is separable, so we can solve it to get $$ v = \left(v_0 + \frac{g}{q}\right) e^{-q t} - \frac{g}{q}. $$ Integrating again, we get the height as a function of time: $$ y = \left(\frac{v_0}{q} + \frac{g}{q^2}\right)(1-e^{-q t}) - \frac{g t}{q}. 
$$ We can solve $v = 0$ to find the time $t_{\mathrm{max}}$ at which the ball reaches its maximum height, but it's not so easy to solve $y=0$ to find the time at which it reaches the ground. The book answers the question by using a clever argument to conclude that $y(2t_{\mathrm{max}})>0$. But we can also approximate $y$ by a power series in the small parameter $q$. Let's substitute the second-degree Taylor polynomial for $e^{-q t}$, namely $$1 - q t + \frac{1}{2} q^2 t^2,$$ into $v$. We get $$ v \approx \left(v_0 + \frac{g}{q}\right) \left(1 - q t + \frac{1}{2} q^2 t^2\right) - \frac{g}{q} $$ Multiplying this out, we get $$ v\approx v_0 + \frac{g}{q} - (v_0 q + g) t + \frac{1}{2} (v_0 q^2 + g q) t^2 - \frac{g}{q}. $$ Canceling the $\frac{g}{q}$s and discarding the term involving $q^2$ (since $q$ is small), we obtain $$ v\approx v_0 - (v_0 q + g) t + \frac{1}{2} g q t^2 $$ to first order in $q$. Note that all the $q$s in the denominators canceled out, and this is a linear approximation to $v$ as a function of $q$. In principle, we could have derived it by simply taking the derivative of $v$ with respect to $q$ at $q=0$. However, it's not at all obvious from the formula for $v$ that it's even differentiable at $q=0$! (It's not even necessarily obvious that it has a limit as $q\to 0$, but of course the physical considerations imply that it must.) The power series expansion is much nicer. We can also integrate $v$ to get the height $$y \approx v_0 t - \frac{1}{2} (v_0 q + g) t^2 + \frac{1}{6} g q t^3. $$ Now we can answer the original question. Putting $v=0$ and solving for $t$ with the quadratic formula, we get $$t_{\mathrm{max}} \approx \frac{1}{q} + \frac{v_0}{g} \pm \frac{1}{q}\left(1+\frac{v_0^2 q^2}{g^2}\right)^{1/2}.$$ Using the linear Taylor polynomial for the binomial series, we get $$t_{\mathrm{max}} \approx \frac{1}{q} + \frac{v_0}{g} \pm \frac{1}{q}\left(1+\frac{v_0^2 q^2}{2g^2}\right).$$ We must use the minus sign to cancel out the impossible $1/q$s, yielding $$t_{\mathrm{max}} \approx \frac{v_0}{g} - \frac{v_0^2 q}{2 g^2}.$$ Of course, the first term, $\frac{v_0}{g}$, is the obvious value in the no-air-resistance case $v = v_0 - g t$. We can also set $y=0$ and solve for $t$ to find the time when the ball lands. The solution $t=0$ (when it was thrown) factors out and we can use the quadratic formula again: $$ t_{\mathrm{lands}} \approx \frac{3}{2q} + \frac{3v_0}{2g} \pm \frac{3}{2q}\left(1 - \frac{2qv_0}{3g} + \frac{v_0^2 q^2}{g^2}\right)^{1/2}.$$ Using the second-degree Taylor polynomial for the binomial series this time (since there is a power of $q$ rather than just $q^2$ inside the square root) but discarding the resulting $q^4$ term, we get $$ t_{\mathrm{lands}} \approx \frac{3}{2q} + \frac{3v_0}{2g} \pm \frac{3}{2q}\left(1 - \frac{qv_0}{3g} + \frac{v_0^2 q^2}{2g^2} - \frac{q^2 v_0^2}{18 g^2}\right).$$ Again we must take the minus sign to cancel the impossible $\frac{3}{2q}$, and we get $$ t_{\mathrm{lands}} \approx \frac{2 v_0}{g} - \frac{2 v_0^2 q}{3g^2}. $$ Observe that $t_{\mathrm{lands}} > 2 t_{\mathrm{max}}$. $\begingroup$ That's impressive but 10 equations too many! After your second equation, you can solve to get $v=(g/q)(-1+\exp(qt_{\max}-qt))$. So for $u>0$, $t=t_{\max}\pm u$, $v=(g/q)(-1+\exp(\mp qu))\simeq (g/q)(-1+1\mp qu +q^2u^2)$, and $|v|\simeq(g/q)\;|qu\mp q^2u^2|$. The speeds are therefore greater for $t=t_{\max}-u$ than for $t=t_{\max}+u$, which means that it goes up more quickly than it comes down.
$\endgroup$ – user173 May 6 '14 at 3:28 $\begingroup$ @MattF. thanks! I do like the approach that actually solves for $t_{\mathrm{lands}}$ (approximately) as being more direct, even if there is some sneakier argument that is shorter. $\endgroup$ – Mike Shulman May 6 '14 at 15:05 First f all, check out this great paper. It has some interesting examples, especially in Appendix A. A typical example is relativistic mechanics: the usual first order approximation yields the unsatisfactory Newtonian mechanics (see for example here), which is in many applications not enough to explain what is happening. ADDED: Since OP asked for an example, calculations in special relativity usually boil down to the use of the binomial series, hence they are mathematically suitable for a calculus class. Whether people understand the underlying physics is a different question. For example, Section 15.5 in these great notes explains how in the relativistic harmonic oscillator the period depends on the amplitude (Section 15.5.1, page 311), Formula (15.132). András BátkaiAndrás Bátkai $\begingroup$ That does look like a really nice paper. Can you point to a specific example of a calculation in relativistic mechanics that uses a higher-order Taylor expansion (and, hopefully, that a calculus student could understand)? $\endgroup$ – Mike Shulman May 5 '14 at 23:45 $\begingroup$ @MikeShulman the relativistic energy is a function of velocity via the so-called gamma function; $\gamma = 1/\sqrt{1-v^2/c^2}$. The binomial series provides $\gamma = 1+ \frac{1}{2}\frac{v^2}{c^2}+ \cdots$. We sometimes hear $E=mc^2$, but, the real story is $E = \gamma mc^2$ where $m$ is the rest mass (it is a constant). So, the second order Taylor term in in the relativistic energy and the constant term are interesting; $E = mc^2 + \frac{m}{2}v^2+ \cdots$ we find the relativistic energy reveals the previously hidden rest energy $mc^2$ and the classical KE term. Higher order terms... $\endgroup$ – James S. Cook Jul 25 '17 at 4:35 $\begingroup$ @MikeShulman terms are physically relevant, we must keep all of them to give an entire account of the relativistic energy. Only when $v<<c$ do we have $v/c << 1$ and hence the classical KE term and the hidden rest energy give an apt description of physics. Of course, in as much as the rest energy is constant, we can analyze physical motion in terms of the classical KE $\frac{m}{2}v^2$ alone. $\endgroup$ – James S. Cook Jul 25 '17 at 4:37 Much of quantum mechanics is only known in a perturbative framework. Essentially, the basic objects of interest are series. Terms are calculated by Feynman diagrams of increasingly complex diagrams for higher order corrections. So, the series description is pragmatically fundamental. I suppose in principle, non-pertubative solutions exist, however, if no one can hack the math then for a physical purposes (literally in my answer) the series generated by Feynman diagrams represents the best view we have of quantum field theory. Also, low order applications are interesting. We use $F=mg$ to approximate $F = \frac{GmM}{(R_E+h)^2}$ at $h=0$. In fact, $F=mg$ is just the constant term in the Taylor series for gravity. I submit to you this is still interesting, and, in view of the final I just graded, still sufficiently challenging for many undergraduates. Applications where you use all the terms in a series are perhaps nicely found in math itself. For example, the alternating harmonic series convergence to $\ln(2)$ as seen from the expansion of $\ln(1+x)$. 
This is a nice example which uses all the terms in the series. Many examples of this nature are found in every textbook, they make formidable homework problems when taken out of context. James S. CookJames S. Cook $\begingroup$ Identifying the constant term is interesting, but specifically not what I asked for. I also have plenty of "applications" inside of math, but I wanted physical ones. Quantum mechanics is promising, but (say) getting Feynman diagrams out of path integrals requires a good deal more math than my calc 2 students have; can you give me a specific example? $\endgroup$ – Mike Shulman May 6 '14 at 15:09 $\begingroup$ @MikeShulman specific Feynman diagram calculation accessible to calculus II students... I don't have one handy. On the other hand, perhaps the energy eigenstates of the hydrogen atom might work for your purposes. I'll think about it, may I'll add something later. $\endgroup$ – James S. Cook May 6 '14 at 17:07 I don't know if this is the sort of thing you're thinking about -- it seems to me like a too-obvious example, but it seems to fit your question pretty well. Suppose you have a particle moving along some path $r(t)$, and suppose we expand $r(t)$ in a Taylor series as $r(t) = a_0 + a_1 t + (1/2) a_2 t^2 + (1/6) a_3 t^3 + ....$ Then $a_0$ tells us the initial location of the particle, $a_1$ its initial velocity, $a_2$ its initial acceleration, $a_3$ its initial jerk, $a_4$ the initial "jounce" or "snap" (beyond that the derivatives don't have standard names). A second example, one that maybe isn't "physical" enough but also might give insight into the higher-order terms: Compound interest. If a bank account balance is growing according to $B(t)=A e^{rt}$, and we expand the RHS as a Taylor series, the constant term tells you the initial balance, and the linear term tells you how the balance would grow if we were only considering simple interest; the higher-order terms take into account the effects of compounding. (You can adapt this example to deal with any kind of exponential growth or decay.) mweissmweiss $\begingroup$ The first example doesn't seem to really say anything about Taylor series; it's just a statement about higher derivatives which they've seen in calc 1. I like the second example better, but "the higher terms take compounding into effect" is a bit vague --- one might just as well say that "the difference between $A e^{rt}$ and its linear approximation" is what takes compounding into effect. What mileage do you get out of knowing that you can approximate the compounding by a polynomial? $\endgroup$ – Mike Shulman May 6 '14 at 15:07 $\begingroup$ In the first example, I think of the linear approximation (1st two terms) as "Where the particle would be if it were not accelerating"; the quadratic approx (1st 3 terms) as "Where the particle would be if it had constant acceleration"; etc. Each additional term is a correction that takes into account a change in the previous term. $\endgroup$ – mweiss May 6 '14 at 15:13 $\begingroup$ @MichaelE2 In calc 1 the students learned that the first derivative of position is velocity and the second derivative is acceleration. If I write down the Taylor series of the position function and say "see, you can tell by taking the derivative of the series that the first derivative is velocity and the second derivative is acceleration" then I think they're going to feel "so what, we knew that already; what did we gain by writing it as a power series?" 
But mweiss's second comment makes clearer what he meant, and that makes sense. $\endgroup$ – Mike Shulman May 7 '14 at 18:13 While this isn't a 'mainstream' topic, I feel like homotopy analysis methods are worth a glance here. The idea is to approximate some non-linear equation by a linear one, take a continuous connection between them, solve the linear problem, and 'slide' the solution over to the non-linear case. If $L\{u(t)\}=0$ is the linear operator and $N\{u(t)\}$ is the non-linear operator, we introduce a new parameter $q$ and consider the equation $$(1-q)L\{u(t;q)\}+qN\{u(t;q)\}=0$$ where the solution now depends on $q$. $q=0$ is the linear case and $q=1$ is the full non-linear equation you're trying to solve. What we do is expand the dependence on $q$ as a Taylor series $u(x;q)=\sum u_n(x)q^n$. This leads to a recursion relation for $u_n$ in terms of $u_{n-1}$ that can be solved iteratively. Fiddling around with parameters to ensure that the radius of convergence includes $q=1$ then gives you the non-linear solution as an infinite series. While not physics per se, it has major applications when dealing with nonlinear waves and similar things. The wikipedia page says it better than I can. It might not be appropriate to go through it in detail (it could be done with only an ODE course, but it would probably take as much time to understand as a major topic of the curriculum), but it's a good example of where arbitrarily high terms in the Taylor series become important. It's not just a formality; the first or second order approximations are rarely of any meaningful accuracy. Another major example (though I again apologize for it not being purely physics) would be in numerical approximations, where Taylor series are bread and butter. Being able to expand Taylor series is fundamental in deriving many methods for numerically solving ODEs, and is crucial for being able to estimate bounds and convergence orders. If you only take linear approximations you'll have a horrible time of everything. Robert Mastragostino
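To make the last point concrete, the standard one-step Taylor argument behind Euler's method (stated here as a generic sketch for an ODE $y'=f(t,y)$ with step size $h$) runs as follows: the method advances by $y_{n+1}=y_n+h\,f(t_n,y_n)$, while Taylor's theorem gives $$y(t_n+h)=y(t_n)+h\,y'(t_n)+\tfrac{h^2}{2}y''(\xi)$$ for some $\xi$ between $t_n$ and $t_n+h$. Comparing the two shows that the local truncation error is of order $h^2$, hence the global error after the $O(1/h)$ steps needed to cross a fixed interval is of order $h$; higher-order Runge–Kutta methods are constructed precisely by matching more terms of this Taylor expansion.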
CommonCrawl
Azimi, M. (2017). Subspace-diskcyclic sequences of linear operators. Sahand Communications in Mathematical Analysis, 08(1), 97-106. doi: 10.22130/scma.2017.23850
Subspace-diskcyclic sequences of linear operators
Article 8, Volume 08, Issue 1, Autumn 2017, Page 97-106. Document Type: Research Paper. DOI: 10.22130/scma.2017.23850
Mohammad Reza Azimi, Department of Mathematics, Faculty of Sciences, University of Maragheh, Maragheh, Iran.
A sequence $\{T_n\}_{n=1}^{\infty}$ of bounded linear operators on a separable infinite dimensional Hilbert space $\mathcal{H}$ is called subspace-diskcyclic with respect to the closed subspace $M\subseteq \mathcal{H},$ if there exists a vector $x\in \mathcal{H}$ such that the disk-scaled orbit $\{\alpha T_n x: n\in \mathbb{N}, \alpha \in\mathbb{C}, | \alpha | \leq 1\}\cap M$ is dense in $M$. The goal of this paper is to study subspace-diskcyclic sequences of operators in analogy with the well-known results in the single operator case. In the first section of this paper, we study some conditions that imply the diskcyclicity of $\{T_n\}_{n=1}^{\infty}$. In the second section, we survey some conditions and a subspace-diskcyclicity criterion (analogous to the results obtained by some authors in \cite{MR1111569, MR2261697, MR2720700}) which are sufficient for the sequence $\{T_n\}_{n=1}^{\infty}$ to be subspace-diskcyclic (subspace-hypercyclic).
Sequences of operators; Diskcyclic vectors; Subspace-diskcyclicity; Subspace-hypercyclicity
Main Subjects: Functional analysis and operator theory
[1] N. Bamerni, V. Kadets, and A. Kılıçman, On subspaces diskcyclicity, arXiv:1402.4682 [math.FA], 1-11. [2] N. Bamerni, V. Kadets, A. Kılıçman, and M.S.M. Noorani, A review of some works in the theory of diskcyclic operators, Bull. Malays. Math. Sci. Soc., Vol. 39 (2016) 723-739. [3] F. Bayart and É. Matheron, Dynamics of linear operators, Cambridge Tracts in Mathematics, Vol. 179, Cambridge University Press, Cambridge, 2009. [4] L. Bernal-González and K.-G. Grosse-Erdmann, The hypercyclicity criterion for sequences of operators, Studia Math., Vol. 157 No. 1 (2003) 17-32. [5] P.S. Bourdon, Invariant manifolds of hypercyclic vectors, Proc. Amer. Math. Soc., Vol. 118 No. 3 (1993) 845-847. [6] G. Godefroy and J.H. Shapiro, Operators with dense, invariant, cyclic vector manifolds, J. Funct. Anal., Vol. 98 No. 2 (1991) 229-269. [7] K-G. Grosse-Erdmann, Universal families and hypercyclic operators, Bull. Amer. Math. Soc., Vol. 36 No. 3 (1999) 345-381. [8] R.R. Jiménez-Munguía, R.A. Martínez-Avendaño, and A. Peris, Some questions about subspace-hypercyclic operators, J. Math. Anal. Appl., Vol. 408 No. 1 (2013) 209-212. [9] C. Kitai, Invariant closed sets for linear operators, ProQuest LLC, Ann Arbor, MI, Thesis (Ph.D.)–University of Toronto, Canada 1982. [10] F. León-Saavedra and V. Müller, Hypercyclic sequences of operators, Studia Math., Vol. 175 No.1 (2006) 1-18. [11] B.F. Madore and R.A. Martínez-Avendaño, Subspace hypercyclicity, J. Math. Anal. Appl., Vol. 373 No.2 (2011) 502-511.
[12] H. Petersson, A hypercyclicity criterion with applications, J. Math. Anal. Appl., Vol. 327 No. 2 (2007) 1431-1443. [13] H. Rezaei, Notes on subspace-hypercyclic operators, J. Math. Anal. Appl., Vol. 397 No. 1 (2013) 428-433. [14] Z.J. Zeana, Cyclic Phenomena of operators on Hilbert space, Thesis, University of Baghdad, 2002.
CommonCrawl
\begin{document} \title{Compositional Algorithms for Succinct Safety Games} \begin{abstract} We study the synthesis of circuits for succinct safety specifications given in the AIG format. We show how AIG safety specifications can be decomposed automatically into sub-specifications. Then we propose symbolic compositional algorithms to solve the synthesis problem compositionally starting for the sub-specifications. We have evaluated the compositional algorithms on a set of benchmarks including those proposed for the first synthesis competition organised in 2014 by the Synthesis Workshop affiliated to the CAV conference. We show that a large number of benchmarks can be decomposed automatically and solved more efficiently with the compositional algorithms that we propose in this paper. \end{abstract} \section{Introduction} We study the synthesis of circuits for succinct safety specifications given in the AIG format. An AIG file for synthesis describes a circuit that compactly defines a transition relation between valuations for latches, {\em uncontrollable} and {\em controllable} input signals. The circuit contains a special latch called the {\em error latch}. Initially, all latches are false, and the controller chooses values for the controllable input signals so as to always keep the error latch {\em low} (safety objective), no matter how the environment chooses values for the uncontrollable input signals. The AIG format is {\em monolithic} in the sense that it is not explicitly structured into subsystems. This is unfortunate as in general, complex systems or specifications are built of smaller sub-parts and taking into account this structure may be a definite advantage. {\em And-Inverter Graphs} (AIG) have been proposed as a way to provide a simple and compact file format for a model checking competition affiliated to CAV 2007 (see {\tt http://fmv.jku.at/aiger/FORMAT}). This format has been extended to be the input format for the 2014 {\em reactive synthesis competition}. Because the synthesis competition uses the AIG format, and this format is monolithic, all the tools that took part in the 2014 reactive synthesis competition solved the synthesis problems {\em monolithically}. Nevertheless, the specifications that were proposed during the 2014 synthesis competition are, for a large part of them, generated from higher level descriptions of systems that bear structure. For example, two of the most interesting sets of benchmarks, {\sf GenBuf} and {\sf AMBA}, are generated from Reactive(1) specifications (a tractable subset of LTL specifications)~\cite{BloemJPPS-jcss12}, or directly from LTL specifications that are conjunctions of smaller LTL sub-formulas. In this paper, we show that part of the structure lost during the AIG format translation can be recovered and used to solve the synthesis problem {\em compositionally}. First, we propose a static analysis of the AIG file that returns, when possible, a decomposition of the circuit into smaller sub-circuits with their own safety specifications. Then we provide three different algorithms that first solve the sub-games corresponding to the sub-circuits and then aggregate, following three different heuristics, the results obtained on the sub-games. 
Namely, once we have the solution of all the sub-games we aggregate them by \begin{inparaenum}[$(i)$] \item taking their intersection -- which, we show, over-approximates the actual solution of the general game -- and applying the usual fixpoint algorithm to it; \item assigning a score to each pair of solutions based on the number of variables shared and the size of the BDDs obtained after their intersection and using said score to aggregate (pair by pair) all the solutions; \item trying to refine them using information from a single step of the fixpoint computation on the general game (\ie projecting the resulting ``bad'' states onto each sub-game). \end{inparaenum} We have implemented the decomposition, the compositional synthesis algorithms, and evaluated the approach on the 2014 reactive synthesis competition benchmarks as well as on new benchmarks produced from large LTL specifications. \paragraph{Related Work.} In~\cite{FiliotJR10,FiliotJR11}, compositional algorithms are proposed for the LTL realizability problem. The LTL formulas considered there are assumed to be conjunctions of smaller LTL formulas, and so the structure of the specification is directly available to them, while in our case it has to be recovered. Also, the main data-structures used there are based on antichains while we use BDDs. In symbolic model checking algorithms, partitioned transition relations~\cite{burch1991symbolic} are widely used whenever the system is made of several components. Here, the goal is to compute the one-step successor states without explicitly computing the conjunction of the transition relations for each component. The image computation is rather done using \emph{quantification scheduling} heuristics which tries to apply variable quantification as early as possible inside the conjunction; see \eg \cite{wang2003compositional}. We also use partitioned transition relations in our algorithms: the next-state function for each latch is stored separately. Unlike forward model checking algorithms, synthesis algorithms proceed backwards, so we can use the \emph{composition} operation provided by BDD libraries to compute predecessors, and we do not need any early quantification heuristics. \paragraph{Structure of the paper.} In Section~\ref{sec:prelim}, we fix notation and recall the definitions needed to present our results. Then, in Section~\ref{sec:decomp}, we describe the class of decompositions our algorithms accept as input, we give some examples of how to decompose a succinct safety specification given by an extended AIGER file and outline the algorithm we implemented to get such a decomposition. Our algorithms are described in detail in Section~\ref{sec:algos} and the results of our tests are presented in Section~\ref{sec:experiments}. \section{Preliminaries}\label{sec:prelim} Let $\mathbb{B} = \{0,1\}$. Given a set of variables~$A$, a \emph{valuation over~$A$} is an element of~$\mathbb{B}^A$, and a set of valuations over~$A$ is represented by its characteristic function $f : \mathbb{B}^A \rightarrow \mathbb{B}$. We will write~$f(A)$ to make the dependency on the variables~$A$ explicit. Given two disjoint sets of variables $A,B$, let us write $\mathbb{B}^{A,B}$ for $\mathbb{B}^A \times \mathbb{B}^B$. Consider variable sets $A\subseteq B$. We define the \emph{projection} of a valuation~$v : \mathbb{B}^B$ to~$A$ as $v\downarrow_A : \mathbb{B}^A$, with $v\downarrow_A(a) =1$ if, and only if $v(a) = 1$. 
We extend this notation to functions $f : \mathbb{B}^B \rightarrow \mathbb{B}$ by $f \downarrow_A : \mathbb{B}^A \rightarrow \mathbb{B}$, defined as $f\downarrow_A(v)$ if, and only if $\exists v' \in \mathbb{B}^B, f(v')$, and $v = v' \downarrow_A$. We define the \emph{lifting} of a set~$f: \bool^A \rightarrow \bool$ in~$\bool^B$ by $f\uparrow_B(v) = 1$ if, and only if $f(v\downarrow_A) =1$. For a set of variables~$A=\{a_1,a_2,\ldots\}$, let us write~$A'=\{a_1',a_2',\ldots\}$ the set of \emph{primed variables}. For~$f(A)$, let $f(A')$ denote the characteristic function $f(A)$ where each variable $a \in A$ has been renamed as its primed copy $a' \in A'$. \paragraph{Symbolic Games.} We formalize the reactive synthesis problem as a two-player turn-based game with safety objective described symbolically. We consider games defined by sequential synchronous circuits, encoded in the AIGER format. More precisely, a \emph{game} is a tuple $G = \langle L, X_u, X_c, (f_l)_{l\in L}, \out\rangle$, where: \begin{enumerate} \item $X_u, X_c, L$ are finite disjoint sets of Boolean variables representing \emph{uncontrollable inputs}, \emph{controllable inputs}, and \emph{latches} respectively; \item for each latch $l\in L$, $f_l \colon \mathbb{B}^L \times \mathbb{B}^{X_u} \times \mathbb{B}^{X_c} \to \mathbb{B}$ is the \emph{transition function} that gives the valuation of~$l$ in the next step. In practice these functions will be given by And-Inverter Graphs (see below for a definition). \item $\out \in L$ is a distinguished latch which indicates whether an error has occurred. We will often modify the circuit by replacing~$\fbad$ by some other Boolean function~$e$, which we denote by $G[\fbad \leftarrow e]$. \end{enumerate} A \emph{state}~$q$ of game $G$ is a valuation of latches, that is an element of $\mathbb{B}^{L}$. A \emph{valuation}~$v$ in game $G$ is a valuation of latches and inputs, that is an element of $\mathbb{B}^{L,X_u,X_c}$. We denote the \emph{global transition function} $\delta \colon \mathbb{B}^L \times \mathbb{B}^{X_u} \times \mathbb{B}^{X_c} \to \mathbb{B}^L$ such that $\delta(v)(l) = f_l(v)$ for each latch~$l$. An \emph{execution} from valuation~$v$ of the game $G$ is a sequence of valuations $(v_i)_{i \in \mathbb{N}} \in \left(\mathbb{B}^{L, X_u, X_c}\right)^\omega$ such that $v_0 = v$ and for all $i$, $$ v_{i+1}\downarrow_L = \delta(v_i \downarrow_L ,v_i \downarrow_{X_u},v_i \downarrow_{X_c}). $$ The execution is \emph{safe} if, for all $i \ge 0$, we have that $v_i(\out) = 0$. Note that symbolic games define game arenas of exponential size but we will only work on their symbolic representations. \paragraph{Controller synthesis.} The goal of \emph{controller synthesis} is to find a strategy to determine the controllable inputs given uncontrollable inputs and the current state (\ie, valuation of the latches) to ensure that the error state is not reachable. A \emph{strategy} is a function~$\lambda\colon \mathbb{B}^{L, X_u} \to \mathbb{B}^{X_c}$. An execution $(v_i)_{i \in \mathbb{N}}$ is \emph{compatible} with $\lambda$ if for all $i \in \mathbb{N}$, \[ v_i\downarrow_{X_c} = \lambda(v_i\downarrow_{L},v_i\downarrow_{X_u}). \] A strategy $\lambda$ is \emph{winning} if all executions that are compatible with $\lambda$ are safe. A valuation $v$ is \emph{winning} if there exists a strategy $\lambda$ that is winning from $v$. We denote $W(L,X_u,X_c)$ the \emph{winning valuations} of $G$, that is the set of valuations that are winning. 
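As a small, purely illustrative example (added for concreteness; it is not one of the benchmarks considered later), take $L=\{\out\}$, $X_u=\{u\}$, $X_c=\{c\}$ and $f_{\out}(q,x_u,x_c) = q(\out) \lor (x_u(u) \land \lnot x_c(c))$: the error latch stays raised once raised, and is raised whenever the environment plays $u=1$ while the controller answers $c=0$. The strategy that simply copies the uncontrollable input, defined by $\lambda(q,x_u)(c) = x_u(u)$, is winning from the all-zero valuation: along any compatible execution, $x_u(u) \land \lnot x_c(c)$ evaluates to $0$ at every step, so $\out$ remains low.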
\paragraph{And-Inverter Graphs.} An \emph{And-Inverter Graph} (AIG) is a directed acyclic graph with two-input nodes representing logical conjunction (AND gates), terminal nodes representing inputs, and edges that are possibly \emph{inverted} to denote logical negation (NOT gate). Formally, an AIG is a tuple $G = \langle V,E,\iota \rangle$ such that $(V,E)$ is a directed graph with every vertex having $0$ or $2$ outgoing edges, and $\iota : E \to \mathbb{B}$ labels inverted edges with $1$. We depict edges (not) labelled by $\iota$ as arrows (not) marked with a dark dot. Figure~\ref{fig:example-aig-1} shows a simple AIG with Boolean variables $x_1,x_2,x_3,x_4$. Each node in the AIG defines a Boolean function. For example, $v_1$ defines the Boolean function $\phi_{v_1} \equiv x_1 \land \lnot \phi_{v_2}$, where $\phi_{v_2}$ is the corresponding formula defined by $v_2$, since the edge from $v_1$ to $v_2$ is marked as inverted. The AIGER format~({\tt http://fmv.jku.at/aiger/FORMAT}) was defined as a standard file format to describe sequential synchronous circuits (the logic defined as an AIG), and has been used in model checking and synthesis competitions. In the latter case, the inputs are partitioned into \emph{controllable} and \emph{uncontrollable}~({\tt http://www.syntcomp.org/wp-content/uploads/2014/02/Format.pdf}). This is the format that we will assume as representation of the input game for our algorithms. We call an \emph{AIG game}, a symbolic game described in the AIGER format. \paragraph{Binary Decision Diagrams.} Internally, our tool uses \emph{binary decision diagrams} (BDD)~\cite{bryant86} to represent Boolean functions used to represent sets of states or (parts of) transition relations. We use classical operations and notation on BDDs and refer the interested reader to~\cite{Andersen97anintroduction} for a gentle introduction to BDDs\@. Projection and lifting of functions are easily implemented with BDDs: projecting is done by an existential quantification and lifting is a trivial operation because it only extends the domain of the function but its logical representation, \ie\ its Boolean formula, stays the same. In our algorithms, we often use BDD operations which implement heuristics to reduce the size of the given BDD\@, namely, \emph{generalized cofactors}~\cite{tslbs-v90,shs-vb94}. A generalized cofactor $\hat{f}(X)$ of $f(X)$ with respect to $g(X)$ yields a BDD that matches~$f(X)$ inside~$g(X)$, and is defined arbitrarily outside~$g(X)$. This degree of freedom outside~$g(X)$ allows heuristics to reduce the BDD size. We write $\hat{f}(X) = \gencof{f(X)}{g(X)}$. Formally, we have that $\hat{f}(X) \land g(X) = f(X) \land g(X)$ and $\hat{f}$ has at most the size of $f$. BDD libraries implement the operations \emph{restrict} or \emph{constrain} (see, \eg~\cite{somenzi99}), which are specific generalized cofactors. \paragraph{Classical Algorithms to Solve Safety Games.} We recall the basic fixpoint computation for solving safety games, applied here on symbolic safety games. Let $G = \langle L, X_u, X_c, (f_l)_{l\in L},\out \rangle$ be a symbolic game. The complement of the set $W(L,X_u,X_c)\downarrow_L$ can be computed by iterating an \emph{uncontrollable predecessors} operator. 
For any set of states $S(L)$, the \emph{uncontrollable predecessors} of~$S$ is defined as \[ \upre_G(S) = \{q \in \mathbb{B}^L \st \exists x_u \in \mathbb{B}^{X_u}.\ \forall x_c \in \mathbb{B}^{X_c} : \delta(q, x_u, x_c) \in S\}; \] the dual \emph{controllable predecessors} operator is defined as \[ \cpre_G(S) = \{q \in \mathbb{B}^L \st \forall x_u \in \mathbb{B}^{X_u}.\ \exists x_c \in \mathbb{B}^{X_c} : \delta(q, x_u, x_c) \in S\}. \] We denote by $\upre_G^*(S) = \mu X. (S \cup \upre_G(X))$ the \emph{least fixpoint} of the function $F : X \to S \cup \upre_G(X)$ in the $\mu$-calculus notation (see \cite{ej91}). Note that $F$ is defined on the powerset lattice, which is finite. It follows from the Tarski-Knaster theorem~\cite{tarski55} that, because $F$ is monotonic, the fixpoint exists and can be computed by iterating the application of $F$ starting from any value below it, \eg the least value of the lattice. Similarly, we denote by $\cpre_G^*(S) = \nu X. (S \cap \cpre_G(X))$ the \emph{greatest fixpoint} of the function $F : X \to S \cap \cpre_G(X)$. Dually, we have that, because $F$ is monotonic, the fixpoint exists and can be computed by iterating the application of $F$ starting from any value above it, \eg the greatest value of the lattice. When $G$ is clear from the context, we simply write $\upre$ ($\cpre$) instead of $\upre_G$ ($\cpre_G$). The following proposition follows from well-known results about the relationship between safety games and these operators (see, \eg,~\cite{ag11}). \begin{proposition} For any symbolic game $G = \langle L, X_u, X_c, (f_l)_{l \in L}, \out\rangle$, we have \begin{itemize} \item $\cpre^*( (\out \mapsto 0)\uparrow_L) = \cpre( W(L,X_u,X_c)\downarrow_L)$; dually, \item $\upre^*( (\out \mapsto 1)\uparrow_L) = {\lnot \cpre(W(L,X_u,X_c) \downarrow_L)} = {\upre(\lnot W(L,X_u,X_c)\downarrow_L)}$. \end{itemize} \end{proposition} In the rest of the paper, we assume a black-box procedure $\solve$ which, for a given symbolic game, computes the corresponding winning valuations. In practice, $\solve$ can be implemented using $\upre$ or $\cpre$. Formally, \[ \solve(G) = \{ (q,x_u,x_c) \in \mathbb{B}^{L,X_u,X_c} \st q(\out) = 0 \land \delta(q,x_u,x_c) \in \cpre^*( (\out \mapsto 0){\uparrow_L}) \}. \] Note that $\solve$ gives the set of winning valuations, and not the set of winning states. The interpretation of $\solve(G)$ is that it is the maximal permissive strategy: any strategy for the controller that ensures to stay within this set is a winning strategy. We also consider the procedure $\solves(G) = \{ q \in \mathbb{B}^L \st q \in \cpre^*\left( (\out \mapsto 0){\uparrow_L} \right)\}$ which returns the set of winning states.
\paragraph{Optimizations Using Generalized Cofactors.} Let us now establish the correctness of two optimizations we use in the sequel. We first formalize the dependence on latches as follows. The \emph{cone of influence} (see, \eg,~\cite{cgp01}) of a Boolean function~$\Phi$, written $\texttt{cone}(\Phi)$, is the set of variables on which~$\Phi$ depends, that is, $\texttt{cone}(\Phi) \subseteq L \cup X_u \cup X_c$ is the minimal set of variables such that if $x \in \texttt{cone}(\Phi)$ then either $(\exists x: \Phi) \not \Leftrightarrow \Phi$ or $x \in \texttt{cone}(f_{y})$ for some $y \in \texttt{cone}(\Phi) \cap L$. For convenience, we denote by $\cone(\Phi)$ the set $\texttt{cone}(\Phi) \cap L$. Observe that we have defined the cone of influence of a Boolean function semantically.
That is to say, a variable $x$ is in the cone of influence of a function $\Phi$ if and only if the set of valuations satisfying $\Phi$ changes for some fixed valuation of $x$. Since we consider functions given by AIGs, the cone of influence can be over-approximated by exploring the AIG starting from the vertex corresponding to function~$\Phi$, adding all latches and inputs visited and the cones of influence of the latches -- computed recursively. In our implementation, we use this over-approximation when working directly on the AIG, and we use the semantic definition -- implemented with BDD operations -- when working with BDDs. Given an over-approximation $\Lambda$ of the winning valuations, \begin{inparaenum}[$(i)$] \item we first simplify the transition relation and keep it precise only in $\Lambda$, \item we further modify the transition relation by making every transition not allowed by $\Lambda$ go to an error state, \ie change $\fbad$. \end{inparaenum} In fact, correctness of the first optimization requires that the second one be used as well. The following result summarizes the properties of these optimizations. \begin{lemma}\label{lem:correct-restrict} For any symbolic game $G = \langle L, X_u, X_c, (f_l)_{l \in L}, \out \rangle$, and any $\Lambda(L,X_u,X_c) \supseteq W(L,X_u,X_c)$, if we write $f'_l = \gencof{f_l}{\Lambda}$ for all $l \in L$, we have \[ \solve(G) = \solve(\langle \cone(\Lambda), X_u, X_c, (f'_l)_{l \in \cone(\Lambda)}\rangle[\fbad' \leftarrow \lnot \Lambda]) \uparrow_L. \] \end{lemma} \begin{proof} We first show that solving the game with error function~$\lnot \Lambda$ yields the same winning valuations as for~$\fbad$. For that we will use two basic properties of the winning valuations: first if $f \subseteq f'$ then \[ \solve(G[\fbad \leftarrow f']) \subseteq \solve(G[\fbad \leftarrow f]); \] secondly \[ \solve(G[\fbad \leftarrow \lnot \solve(G)]) = \solve(G), \] this is because if an execution compatible with strategy~$\lambda$ reaches $\lnot \solve(G)$, then by definition of winning valuations it can be extended from there to an execution compatible with $\lambda$ that is unsafe. Together with the fact that $\fbad \subseteq \lnot\Lambda \subseteq \lnot W$, these properties imply that $\solve(G) = \solve(G[\fbad \leftarrow \lnot \Lambda])$. It is clear that one can consider only the variables in $\cone(\Lambda)$ for this computation, and thus considering $H = (\langle \cone(\Lambda), X_u, X_c, (f_l)_{l \in \cone(\Lambda)}\rangle [\fbad \leftarrow \lnot \Lambda])$, we have \[ \solve(G) = \solve(H) \uparrow_L. \] It remains to show that the same set~$\solve(H)$ is obtained when the functions $f_l'$ are used instead of the transition functions~$f_l$. Let us denote $G' = (\langle \cone(\Lambda), X_u, X_c, (f_l')_{l \in \cone(\Lambda)}\rangle [\fbad' \leftarrow \lnot \Lambda])$. We note that, for any $u \supseteq \lnot \Lambda$, the following holds: \[ \upre_{G'}(u) \cup u = \upre_H(u) \cup u. \] Hence, it is straightforward to show by induction that $\solve(H) = \solve(G')$. \end{proof}
\section{Decomposing the Specification}\label{sec:decomp} In this section, we describe how we decompose the error function~$\fbad$ of a given symbolic game into a disjunction, \ie\ $\fbad \equiv \big(\bigvee_{1 \leq i \leq n} e_i\big)$. Notice that if a strategy $\lambda(L, X_u, X_c)$ ensures that $\fbad$ is never true, then it also ensures that each~$e_i$ is never true.
We will then give algorithms that solve the game where each~$e_i$ is seen as the error function, and combine the obtained solutions into a global solution. The rationale behind this approach is that the functions~$e_i$ do not depend on all latches in general, so solving the game for~$e_i$ is often efficient.
\paragraph{Sub-game.} Given a decomposition of $\fbad$, we define a \emph{sub-game} $G_i$ by replacing the error function by $e_i$ and considering only variables in its cone of influence. Formally, we write \[ G_i = \langle \cone(e_i), X_u, X_c, (f_l)_{l \in \cone(e_i)} \rangle[\fbad \leftarrow e_i]. \] We will often use the notation $G[\fbad \leftarrow e_i]$, which consists in replacing the function~$\fbad$ by~$e_i$. In practice, the sizes of the symbolic representations of the sub-games are often significantly smaller than that of the original game. Recall also that winning all the sub-games is necessary to win the global game. We write $W_i(\cone(e_i),X_u,X_c)$ for the winning valuations of $G_i$. In the implementation, $W_i$ and $W_i \uparrow_L$ are represented by the same BDD. \begin{figure} \caption{Example AIG} \label{fig:example-aig-1} \end{figure}
\paragraph{Example 1.} Consider the AIG shown in Figure~\ref{fig:example-aig-1} where $x_1,x_2,x_3,x_4$ are all input variables. We would like to decompose the function defined by the sub-tree rooted at $v_1$ (\ie\ the whole tree) which we will denote by $\phi_{v_1}$. It should be clear that \( \phi_{v_1} \equiv x_1 \land \lnot \phi_{v_2} \) where $\phi_{v_2}$ is the function defined by the sub-tree rooted at $v_2$. In turn, we also have that \( \phi_{v_2} \equiv x_2 \land \lnot x_3 \land x_4. \) If we distribute the disjunction obtained from $\lnot \phi_{v_2}$ we get that \( \phi_{v_1} \equiv (x_1 \land \lnot x_2) \lor (x_1 \land x_3) \lor (x_1 \land \lnot x_4). \) Thus, one possible decomposition of $\phi_{v_1}$ would be to take $e_1 = x_1 \land \lnot x_2$, $e_2 = x_1 \land x_3$, and $e_3 = x_1 \land \lnot x_4$. The general steps followed in Example $1$ above can be generalized into an algorithm which outputs a decomposition of the error function whenever one exists. Intuitively, the algorithm consists in exploring all non-inverted edges of the AIG graph from the vertex which defines the error function. If there are no inverted edges which stopped the exploration, or if all of them lead to leaves, the error function is in fact a conjunction of literals and can clearly not be decomposed. Otherwise, there is at least one inverted edge leading to a node representing an AND gate. In this case, we can push the negation one level down and obtain a disjunction which can be distributed to obtain our decomposition. Algorithm~\ref{alg:decompose} details the procedure we have implemented. It takes as input an AIG, whether the error function is inverted, and the vertex $v_\out$ which defines the error function. It outputs a set of functions whose disjunction is logically equivalent to the error function. We have kept our description of Algorithm~\ref{alg:decompose} and Algorithm~\ref{alg:get-minput-and} (called by the former) informal. \begin{algorithm} \small $to\_visit$ := $\{v_0\}$\; $pos$ := $\{\}$\; $neg$ := $\{\}$\; \While{$|to\_visit| > 0$}{ Pop $u \in to\_visit$\; \eIf{$u$ is not a leaf}{ Let $e =(u,v)$ and $e' = (u,v')$ be s.t.
$e,e' \in E$\; \eIf{$\iota(e) = 1$}{ $neg$ := $neg \cup \{v\}$; }{ $to\_visit$ := $to\_visit \cup \{v\}$; } \eIf{$\iota(e') = 1$}{ $neg$ := $neg \cup \{v'\}$; }{ $to\_visit$ := $to\_visit \cup \{v'\}$; } }{ $pos$ := $pos \cup \{u\}$; } } \Return{$(pos,neg)$} \caption{$\texttt{get\_minput\_and}(V,E,\iota,v_0)$} \label{alg:get-minput-and} \end{algorithm} \begin{algorithm} \small $(pos, neg)$ := $\texttt{get\_minput\_and}(V,E,\iota,v_\out)$\; \If{$inv = 1$} { \Return{ $\{\lnot \phi_v \st v \in pos\} \cup \{\phi_v \st v \in neg\}$ } } \If{$inv = 0$ and all $v \in neg$ are leaves} { \Return{ $\{ \phi_{v_\out} \}$; \tcc*[f]{\scriptsize No decomposition possible}} } Take $v_0 \in \argmax\{||\texttt{get\_minput\_and}(V,E,\iota,v)|| \st v \in neg\}$; \tcc*[f]{\scriptsize where $||(S_1,S_2)|| := |S_1| + |S_2|$}\\ $res$ := $\bigwedge_{u \in pos} \phi_u \land \bigwedge_{v \in neg \setminus \{v_0\}} \lnot \phi_v$\; $(pos, neg)$ := $\texttt{get\_minput\_and}(V,E,\iota,v_0)$\; \Return{ $\{res \land (\lnot \phi_v) \st v \in pos\} \cup \{ res \land \phi_v \st v \in neg\}$ } \caption{$\decompose(V,E,\iota,inv, v_\out)$} \label{alg:decompose} \end{algorithm}
\paragraph{Example 2.} Consider a formula given by a set of assumption formulas $\{A_i(L,X_u) \st 1 \le i \le n\}$ and a set of guarantees $\{G_j(L,X_u,X_c) \st 1 \le j\le m\}$.\footnote{This is actually the way in which the error formula is stated for, \eg, the \textsf{AMBA} benchmarks.} The system we want to synthesize is expected to determine the controllable inputs in such a way that if the assumptions are true, then the guarantees are met. This is formally stated as Equation~\ref{eqn:ass-gua}. \begin{equation}\label{eqn:ass-gua} \Phi = \big(\bigwedge_{1 \le i \le n} A_i\big) \implies \big( \bigwedge_{1 \le j \le m} G_j \big) \end{equation} A natural decomposition for the error function $\lnot \Phi$ would be the following: $\bigvee_{1 \le j \le m} \big(\lnot G_j \land \bigwedge_{1 \le i \le n} A_i \big)$. If $\lnot \Phi$ were given as the AIG depicted in Figure~\ref{fig:example-aig-2}, then it is not hard to see that Algorithm~\ref{alg:decompose} would yield a very similar decomposition. Indeed, as we have not assumed anything in particular about the formulas $A_i$ and $G_j$, we cannot tell whether Algorithm~\ref{alg:get-minput-and} will explore beyond each $G_j$, thus giving us more sub-games than the proposed decomposition. However, in practice, this is even better as smaller sub-games usually depend on fewer variables. This, in turn, could make them easier to solve. \begin{figure} \caption{One possible AIG for Equation~\ref{eqn:ass-gua}} \label{fig:example-aig-2} \end{figure} \begin{lemma}\label{lem:nec-allsub} For each sub-game~$G_i$ with new error function $e_i$, we have that \[ W(L,X_u,X_c) \subseteq (W_i \uparrow_L)(L,X_u,X_c).\] \end{lemma} \begin{proof} For each valuation $v' \in W(L,X_u,X_c)\downarrow_{\cone(e_i) \cup X_u \cup X_c}$, we select a valuation $v \in W(L,X_u,X_c)$ such that $v\downarrow_{\cone(e_i) \cup X_u \cup X_c} = v'$. Let $\lambda_v$ be a winning strategy in $G$ from $v$. Since there is no losing outcome for $\lambda_v$, for all $x_u \in \mathbb{B}^{X_u}$, $\lambda_v(\delta(v),x_u)$ is such that $(\delta(v),x_u,\lambda_v(\delta(v),x_u)) \in W(L,X_u,X_c)$. For all $x_u \in \mathbb{B}^{X_u}$, we fix $\lambda'(\delta(v'),x_u)$ to be $\lambda_v(\delta(v),x_u)$. We have that $(\delta(v'),x_u,\lambda'(\delta(v'),x_u)) \in W(L,X_u,X_c) \downarrow_{\cone(e_i) \cup X_u \cup X_c}$ because the transition relations of $G$ and $G_i$ coincide on $\cone(e_i) \cup X_u \cup X_c$.
The strategy $\lambda'$ ensures that any execution which starts in $W(L,X_u,X_c)\downarrow_{\cone(e_i) \cup X_u \cup X_c}$ stays inside $W(L,X_u,X_c)\downarrow_{\cone(e_i) \cup X_u \cup X_c}$. Since $e_i$ evaluates to false on $W(L,X_u,X_c)$, these states are not error states in $G_i$. Therefore $\lambda'$ is winning for all states in $W(L,X_u,X_c)\downarrow_{\cone(e_i) \cup X_u \cup X_c}$. This implies that $W_i$ contains the projection of all winning states of $G$ and therefore $W \subseteq W_i\uparrow_L $. \end{proof}
\section{Compositional Algorithms}\label{sec:algos} In this section, we give three algorithms to solve AIG games compositionally. Each algorithm first solves the sub-games, and then combines the solutions using different heuristics. We denote by $\decompose$ the procedure that implements the decomposition of~$\fbad$ described in Section~\ref{sec:decomp}, and returns the set of error functions~$e_i$. In all three algorithms, we start by solving each sub-game and obtaining the winning valuations~$W_i(L, X_u, X_c)$, for $1 \le i \le n$. These steps are given in lines $1$--\ref{alg1-loc:endofloop}, and are common to all our algorithms; we assume that \solve{} raises an exception and terminates the program if the sub-game cannot be won. Otherwise, we aggregate the results and solve the global game; for the latter, we adopt a different approach in each of the three algorithms.
\subsection{Global aggregation} \begin{algorithm} \small $\{e_1,\ldots,e_n\}$ := $\decompose(\fbad)$; \tcc*[f]{\scriptsize Formulas $e_i(L,X_u,X_c)$ s.t. $\fbad \equiv \bigvee_{1 \le i \le n} e_i$}\\ \For{$1 \le i \le n$} { $w_i(L, X_u, X_c)$ := $\solve(\langle \cone(e_i), X_u, X_c, (f_l)_{l \in \cone(e_i)}[\fbad \leftarrow e_i] \rangle){\uparrow_{L,X_u,X_c}}$\; \label{alg1-loc:endofloop} } $\Lambda(L, X_u, X_c)$ := $\bigwedge_{1 \le i \le n} w_i(L, X_u, X_c)$\; \lFor{$l \in \cone(\Lambda)$} { $f_l'(L, X_u, X_c)$ := $\gencof{f_l(L, X_u, X_c)}{\Lambda(L, X_u, X_c)}$ } \label{alg1-loc:beforeret} \Return $\solve(\langle \cone(\Lambda), X_u, X_c, (f'_l)_{l \in \cone(\Lambda)} \rangle [\fbad' \leftarrow \lnot \Lambda]){\uparrow_{L,X_u,X_c}}$\; \caption{\texttt{comp\_1}$(\langle L, X_u, X_c, (f_l)_{l\in L}\rangle)$} \label{alg:algo1} \end{algorithm} In Algorithm~\ref{alg:algo1}, we start by computing the intersection of the winning valuations: $\Lambda = \bigwedge_{1\leq i \leq n} W_i$. In fact, any valuation that is not in~$\Lambda$ is losing in one of the sub-games, and thus in the global game. Conversely, a strategy that stays in~$\Lambda$ is winning for each sub-game. Therefore, we solve the global game with the new safety objective of avoiding $\lnot \Lambda$. Before solving the global game, the algorithm attempts to reduce the size of the transition relations by virtue of Lemma~\ref{lem:correct-restrict}. \begin{theorem} \label{thm:algo1-correct} Algorithm~\ref{alg:algo1} computes the winning valuations of the given AIG game. \end{theorem} \begin{proof} We prove first that $W \subseteq \Lambda$ (that is, for every valuation~$v$, $W(v) \Rightarrow \Lambda(v)$). Since $\lnot e_i \supseteq W(L,X_u,X_c)$, we get -- by Lem.~\ref{lem:correct-restrict} -- that each~$w_i(L,X_u,X_c)$ is $W_i\uparrow_L$, where $W_i$ is the set of winning valuations of the sub-game~$G_i$. If $q \not\in \Lambda(L, X_u, X_c)$, there is a sub-game $G_i$ such that the projection $\pi_i(q)$ of~$q$ on the variables of~$G_i$ is not winning. By Lem.~\ref{lem:nec-allsub}, this implies that $q$ is not winning in $G$, hence $q\not\in W(L, X_u, X_c)$.
From Lem.~\ref{lem:correct-restrict} it then follows that $\solve(G) = \solve(G')\uparrow_L$ and therefore the algorithm computes the correct result. \end{proof}
\subsection{Incremental aggregation} \begin{algorithm} \small $\{e_1,\ldots,e_n\}$ := $\decompose(\fbad)$; \tcc*[f]{\scriptsize Formulas $e_i(L,X_u,X_c)$ s.t. $\fbad \equiv \bigvee_{1 \le i \le n} e_i$}\\ \For{$1 \le i \le n$} { $w_i(L, X_u, X_c)$ := $\solve(\langle \cone(e_i), X_u, X_c, (f_l)_{l \in \cone(e_i)}[\fbad \leftarrow e_i] \rangle){\uparrow_{L,X_u,X_c}}$\; \label{alg2-loc:endofloop} } $E$ := $\{ w_i \st 1 \le i \le n\}$\; \While{$|E| > 1$} { $\begin{array}{ll} (r,s) := \argmax_{(i,j) \in |E|^2 : i \neq j } & \{{\alpha} \cdot \texttt{bddsize}(\lnot (w_i \land w_j))\\ &+ \beta |\cone(w_i) \cap \cone(w_j)|\\ &+ \gamma |\cone(w_i) \cup \cone(w_j)|\}; \end{array}$ \label{algo2:combination} \lFor{$l \in \cone(w_r \land w_s)$} { $f_l'(L,X_u,X_c)$ := $\gencof{f_l(L,X_u,X_c)}{ (w_r \land w_s)}$ } $w(L,X_u,X_c)$ := $\solve(\langle \cone(w_r \land w_s),X_u,X_c, (f_l')_{l \in \cone(w_r \land w_s)}\rangle[ \fbad \leftarrow\lnot w_r \lor \lnot w_s]){\uparrow_{L,X_u,X_c}}$\; \label{line:algo2:solve} Remove $w_r,w_s$ and add $w$ to $E$\; \label{line:addrs} } \Return last $w(L,X_u,X_c) \in E$\; \caption{\texttt{comp\_2}$(\langle L, X_u, X_c, (f_l)_{l\in L}\rangle, \alpha, \beta, \gamma)$} \label{alg:algo2} \end{algorithm} In Algorithm~\ref{alg:algo2}, we aggregate the results of the sub-games \emph{incrementally}: given the list of winning valuations $w_i$ for the sub-games, at each iteration, we choose and remove two sub-games~$i$ and~$j$, solve their conjunction (as in Algorithm~\ref{alg:algo1}, with error function $\lnot (w_i \land w_j)$), and add the newly obtained winning valuations back in the list. To choose the sub-games, we use the following heuristic: we assign a score to each pair of sub-games based on the size of the BDD of the error function $\lnot (w_i\land w_j)$, on the number of shared latches, and on the number of latches that appear in either of the sub-games. Intuitively, we prefer to work with small BDDs, and to merge sub-games that share a lot of latches, while yielding a small number of total latches. We thus use a linear combination at line~\ref{algo2:combination} to choose the best scoring pair. In our experiments, we used $\alpha=-2, \beta = 1, \gamma = -1$. \begin{theorem}\label{thm:algo2-correct} Algorithm~\ref{alg:algo2} computes the winning valuations of the given AIG game. \end{theorem} \begin{proof} Let us denote by $w^i_1, \dots, w^i_{n_i}$ the content of~$E$ at the beginning of iteration~$i$. We define a function $F$ from winning valuations $w^i_j$ to subsets of $\{1,\ldots,n\}$. Intuitively, $F(w^i_j)$ is the set of sub-games that were solved to obtain~$w^i_j$. For instance, at the first iteration, if sub-games~$r,s$ are combined -- and the result, $w$, is added to $E$ -- then we get $F( w ) = \{r,s\}$. For convenience, we assume that $w$ is appended at the end of the sequence $w^i_1, \dots, w^i_{n_i}$ at line \ref{line:addrs}. We proceed by induction on~$i$ to define~$F$. Initially $F( w^1_i ) = \{ i \}$ for all~$1\leq i \leq n$. For $i>1$, for all~$j \neq r,s$, the element $w^i_j$ remains in the list so~$F$ is already defined on $w^i_j$. For the newly added element $w^i_{n_i}$ we let $F( w^i_{n_i} ) = F( w^{i-1}_r ) \cup F(w^{i-1}_s)$.
We claim that at any iteration~$i$, $w^i_j$ is the set of winning valuations of the game whose error function is the disjunction of the error functions of the sub-games in $F(w_j^i)$. More precisely, \[ w_j^i = \solve(\langle L, X_u, X_c,(f_l)_{l \in L}\rangle[\fbad \leftarrow \bigvee_{k \in F(w^i_j)} e_k]). \] The correctness of the algorithm will follow since the sets $F(\cdot)$ are merged at each iteration, and the algorithm always stops with~$|E|=1$ and $F(w) = \{1,\ldots,n\}$. The condition holds initially as shown in Theorem~\ref{thm:algo1-correct}. Let~$i>1$. As shown in Lem.~\ref{lem:correct-restrict}, the generalized cofactor operation applied before the call to \texttt{solve} does not affect the returned set. Let us denote $E_r = \bigvee_{k \in F(w^{i-1}_r)} e_k$ and $E_s = \bigvee_{k \in F(w^{i-1}_s)} e_k$. Let us write $\mathcal{E}= E_r \lor E_s$. We have $E_r \Rightarrow \lnot w_r$ by induction, and similarly $E_s \Rightarrow \lnot w_s$; thus $\mathcal{E} \Rightarrow \lnot w_r \lor \lnot w_s$. Moreover, for any $q(L,X_u)$, if the controller plays a controllable input~$x_c \in \mathbb{B}^{X_c}$ with $\lnot w_r(q,x_c)$ or $\lnot w_s(q,x_c)$, then he loses for the error function defined by $\mathcal{E}$. In other terms, $\lnot w_r \lor \lnot w_s$ is a subset of losing valuations for error function~$\mathcal{E}$, and contains $\mathcal{E}$, the set of states losing in one step. It follows that $w(L,X_u,X_c)$ computed at step~\ref{line:algo2:solve} is the set of winning valuations for the error function~$\mathcal{E}$. \end{proof}
\subsection{Back-and-forth} \begin{algorithm} \small $\{e_1,\ldots,e_n\}$ := $\decompose(\fbad)$; \tcc*[f]{\scriptsize Formulas $e_i(L,X_u,X_c)$ s.t. $\fbad \equiv \bigvee_{1 \le i \le n} e_i$}\\ \For{$1 \le i \le n$} { $w_i(L,X_u,X_c)$ := $\solve(\langle \cone(e_i), X_u, X_c, (f_l)_{l \in \cone(e_i)}[\fbad \leftarrow e_i] \rangle){\uparrow_{L,X_u,X_c}}$\; $s_i(L)$ := $\solves(\langle \cone(e_i), X_u, X_c, (f_l)_{l \in \cone(e_i)}[\fbad \leftarrow e_i] \rangle){\uparrow_{L}}$\; \label{alg3-loc:endofloop} } $\Lambda(L, X_u, X_c)$ := $\bigwedge_{1 \le i \le n} w_i(L, X_u, X_c)$\; $G'$ := $\langle \cone(\Lambda), X_u, X_c, (f_l)_{l \in \cone(\Lambda)}\rangle[\fbad \leftarrow \lnot \Lambda ]$\; $u(L)$ := $0$\; $u'(L)$ := $\bigvee_{1 \le i \le n} \lnot s_i(L)$; \tcc*[f]{The union of all the losing states}\\ \While{$u \neq u'$} { $u(L)$ := $u'(L)$\; $u'(L)$ := $u(L) \lor \upre_{G'}(u)$\; \label{line:upre} \For{$1 \le i \le n$} { $p_i(L)$ := $\forall L \setminus \cone(e_i) : u'(L)$; \tcc*[f]{Universal projection of latches not present in local sub-game}\\ \If{$p_i \land s_i \neq 0$} { \lFor{$l \in \cone(p_i)$} { $f_l'(L,X_u,X_c)$ := $\gencof{f_l(L,X_u,X_c)}{ \lnot p_i(L)}$ } $s_i(L)$ := $\solves(\langle \cone(p_i), X_u, X_c, (f'_l)_{l \in \cone(p_i)}\rangle[\fbad \leftarrow p_i\uparrow_{L,X_u,X_c}]){\uparrow_{L}}$\; } } $u'(L)$ := $u'(L) \lor \lnot \bigwedge_{1 \le i \le n} (s_i(L) \downarrow_L)$\; } \Return $\lnot u(L)$\; \caption{\texttt{comp\_3}$(\langle L, X_u, X_c, (f_l)_{l\in L}\rangle)$} \label{alg:algo3} \end{algorithm} In Algorithm~\ref{alg:algo3}, we interleave the analysis of the global game (with objective $\Lambda$) and the analysis of the sub-games. At each iteration, we extend the losing states $u(L)$ by one step, by applying once the $\upre$ operator. We then consider each sub-game, and check whether the new set $u'(L)$ of losing states (projected on the sub-game) changes the local winning states.
Here, $p_i(L)$ is this projection on the local state-space of sub-game $i$. We update the local winning states~$s_i$ of the sub-games when necessary, and restart until stabilization. Because analyzing the sub-games is often more efficient than analyzing the global game, this algorithm improves over Algorithm~\ref{alg:algo1} in some cases (see Section~\ref{sec:experiments}). A similar idea was used in \cite{FiliotJR11} for the problem of synthesis from LTL specifications. \begin{theorem} Algorithm~\ref{alg:algo3} computes the winning states of the given AIG game. \end{theorem} \begin{proof} Let~$W(L)$ denote the set of winning states of the game~$G$. We consider the following invariant. \begin{equation} \begin{array}{l} \forall i\in\{1,\ldots,n\}, W(L) \subseteq s_i(L), \\ \out \subseteq u'(L) \subseteq \lnot W(L). \end{array} \label{eqn:invar3} \end{equation} In words, in every iteration, $u'(L)$ is contained in the losing states of the global game, and each $s_i(L)$ contains the winning states of~$G$. Initially, by Algorithm~\ref{alg:algo1}, $W \subseteq s_i(L)$ for all~$i$, and we have $\out \subseteq \lnot s_i(L)$. So $\out \subseteq \lnot \land_i s_i(L)$. Thus, $\out \subseteq u'(L)$. Moreover, since $\lor_i \lnot s_i(L) \subseteq \lnot W$, we have that $u'(L) \subseteq \lnot W$. Consider now iteration~$i>1$, and assume the invariant holds at the beginning. $u'(L)$ is updated at line~\ref{line:upre}. The property $\out \subseteq u'(L) \subseteq \lnot W$ still holds by the definition of the $\upre$ operation, and by the fact that the set $u'$ can only grow at this step (because of the union). We consider now the for loop, and show that $W \subseteq s_i$ after each iteration. Assume $p_i \cap s_i \neq \emptyset$ since otherwise~$s_i$ is not modified. By definition $p_i \subseteq u'$ thus $p_i \subseteq \lnot W$. Then the \texttt{solve} function computes the set of states from which the controller can avoid $\out \lor p_i$. Since $\out \lor p_i \subseteq \lnot W$, we get that $W \subseteq s_i$. It follows that $\lnot \bigwedge_{i=1}^n s_i \subseteq \lnot W$. Thus, at the last line of the while loop, we have $\out \subseteq u'(L) \subseteq \lnot W$. Now, line~\ref{line:upre} ensures that after iteration~$i$, $u'(L)$ contains the $i$-th iteration of the $\upre$ fixpoint computation. Hence, the test $u\neq u'$ of the while loop ensures that the while loop terminates with~$u(L)$ being equal to~$\upre^*(G)$. \end{proof}
\section{Experiments}\label{sec:experiments} We implemented our algorithms in the synthesis tool AbsSynthe~\cite{bprs14}. We compare their running times against the most efficient algorithm of AbsSynthe that implements a backward fixpoint algorithm.\footnote{The new version of AbsSynthe with the implementation of the compositional algorithms can be fetched from \url{https://github.com/gaperez64/abssynthe}.} This algorithm was the winner of the \textbf{$2014$ Synthesis Competition} synthesis track, and the winner of the realizability track at the same competition implemented a similar backward algorithm. Let us first illustrate the advantage of the compositional approach with two examples. In the first set of benchmarks we consider, the controller has to compute the multiplication of two Boolean matrices given as (uncontrollable) input. Since each cell of the resulting matrix depends only on a subset of inputs, namely, on one row and one column, these benchmarks are well adapted for compositional algorithms.
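To see concretely why these specifications decompose, note that correctness can be checked cell by cell. The following toy sketch (our own encoding, not the actual benchmark circuits) builds one error function per output cell and shows that each of them only depends on a single row, a single column, and one output variable.
\begin{verbatim}
# Toy illustration (not the actual benchmark circuits) of why Boolean matrix
# multiplication decomposes: the global error function is the disjunction of
# per-cell error functions, each depending on one row of A, one column of B,
# and a single output cell of C.
n = 3

def cell_error(i, j):
    """Error e_ij: output cell (i, j) differs from the Boolean product."""
    def e(A, B, C):
        expected = any(A[i][k] and B[k][j] for k in range(n))
        return C[i][j] != expected
    # the variables this sub-specification depends on (its cone of influence)
    cone = ([f"a_{i}{k}" for k in range(n)]
            + [f"b_{k}{j}" for k in range(n)]
            + [f"c_{i}{j}"])
    return e, cone

errors = {(i, j): cell_error(i, j) for i in range(n) for j in range(n)}

def global_error(A, B, C):
    # fbad is the disjunction of the per-cell error functions e_ij
    return any(e(A, B, C) for e, _ in errors.values())

# Each sub-game mentions only 2n + 1 of the 3n^2 matrix variables.
print({ij: len(cone) for ij, (_, cone) in errors.items()})
\end{verbatim}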
Figure~\ref{fig:mult} compares the performances of the classical algorithm with Algorithm~\ref{alg:algo1}. The classical algorithm was able to solve 36 instances, while the compositional algorithm solved all 75 instances and was significantly faster. The x-axis shows the number of solved benchmarks within the running time given by the y-axis. The second set of benchmarks we consider consists of a washing system made of~$n$ tanks. An uncontrollable input can request at any time that a tank be activated, at which point the controller should fill the tank with water, and empty it after at least~$k$ steps. Moreover, some subsets of tanks cannot be filled at the same time, and a light is to be on if at least one tank is filled with water. Note that the control strategies for the tanks are not independent, due to the mutual exclusion constraints and to the light indicator. Algorithm~\ref{alg:algo1} was also more efficient on these benchmarks, as shown in Fig.~\ref{fig:sched}. The classical algorithm solved 132 benchmarks out of 256, while Alg.~\ref{alg:algo1} solved 152. \begin{figure} \caption{Performances for 75 Boolean matrix multiplication benchmarks for Algorithm~\ref{alg:algo1} and the classical algorithm. } \label{fig:mult} \caption{Performances for the 256 washing system benchmarks for Algorithm~\ref{alg:algo1} and the classical algorithm.} \label{fig:sched} \end{figure} We now evaluate all three compositional algorithms and compare them with the classical algorithm on a large benchmark set of 674 benchmarks. $562$ of these benchmarks were provided for the \textbf{$2014$ Synthesis Competition} and $105$ have been generated by the new version of LTL2AIG~\cite{ltl2aig} which translates conjunctions of LTL specifications into AIG.\footnote{A collection of benchmarks, including the ones mentioned here, can be fetched from \url{https://github.com/gaperez64/bench-syntcomp14} and \url{https://github.com/gaperez64/bench-ulb-syntcomp15}.} Among those benchmarks, $351$ are decomposable by our static analysis into at least $2$ smaller sub-games. More specifically, the average number of sub-games our decomposition algorithm outputs is $29$; the median is $21$. In general, the performances of the three compositional algorithms can differ, but they are complementary. Figures~\ref{fig:load} to~\ref{fig:amba} show the performances of the algorithms on several sets of benchmarks. All benchmarks in Figures~\ref{fig:load} and~\ref{fig:gb} are decomposable. Figure~\ref{fig:best} shows all the benchmarks we used and Figure~\ref{fig:amba} shows only those benchmarks from last year's synthesis competition which were based on specifications of the \textsf{AMBA} arbiter.
\paragraph{{\bf Conclusion.}} Even though AIG synthesis problems are monolithic, the experiments show that our compositional approach was able to solve problems that cannot be handled by the monolithic backward algorithm; our compositional algorithms are sometimes much more efficient. There are also examples that can be decomposed but which are not solved more efficiently by the compositional algorithms. So, it is often a good idea to apply all the algorithms in parallel. This portfolio approach improved the performance and was able to solve 20 benchmarks that could not be solved by the fastest algorithms of last year's reactive synthesis competition. \begin{figure} \caption{{Performances for 68 load-balancing benchmarks translated from LTL. The classical algorithm solves 38 benchmarks, comp.1 44, comp.2 45, comp.3 45.
In total there are 46 benchmarks that can be solved. The largest example that can be solved has 4005 latches and the smallest example that cannot be solved has 670 latches.} } \label{fig:load} \caption{{Performances for 46 generalized buffer benchmarks translated from LTL. The classical algorithm solves 6 benchmarks, comp.1 10, comp.2 15, comp.3 11. In total there are 18 benchmarks that can be solved. The largest example that can be solved has 22662 latches and the smallest example that cannot be solved has 590 latches.} } \label{fig:gb} \caption{{Performances for the 674 benchmarks. The classical algorithm was able to solve 572 benchmarks. 20 more benchmarks were solved by one of the three compositional algorithms.} } \label{fig:best} \caption{{Performances for 108 AMBA benchmarks. The classical algorithm was able to solve 106 benchmarks, comp.1 84, comp.2 76, comp.3 93.}} \label{fig:amba} \end{figure} \end{document}
Question 152 NPV, Annuity The following cash flows are expected: 10 yearly payments of $80, with the first payment in 3 years from now (first payment at t=3). 1 payment of $600 in 5 years and 6 months (t=5.5) from now. What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate? (a) $1,006.25 (b) $846.78 (c) $761.47 (d) $741.87 (e) $724.54 Question 729 book and market values, balance sheet, no explanation If a firm makes a profit and pays no dividends, which of the following accounts will increase? (a) Asset revaluation reserve. (b) Foreign currency translation reserve. (c) Retained earnings, also known as retained profits. (d) Contributed equity, also known as paid up capital. (e) General reserve. Question 730 DDM, income and capital returns, no explanation A stock's current price is $1. Its expected total return is 10% pa and its long term expected capital return is 4% pa. It pays an annual dividend and the next one will be paid in one year. All rates are given as effective annual rates. The dividend discount model is thought to be a suitable model for the stock. Ignore taxes. Which of the following statements about the stock is NOT correct? (a) The expected dividend yield is 6% pa. (b) The expected growth rate of the dividend is 4% pa. (c) The dividend in one year is expected to be $0.06. (d) The price is expected to be $1.06 in one year, just before the dividend is paid. (e) The price is expected to be $1.04 in one year, just after the dividend is paid. In the dividend discount model (DDM), share prices fall when dividends are paid. Let the high price before the fall be called the peak, and the low price after the fall be called the trough. ###P_0=\dfrac{C_1}{r-g}### Which of the following statements about the DDM is NOT correct? (a) In between dividends, the stock price is expected to grow by the total return 'r'. (b) From trough to trough, the stock price is expected to grow by the capital return 'g'. (c) From peak to peak, the stock price is expected to grow by the capital return 'g'. (d) Dividends are expected to grow by the total return 'r'. (e) If the stock's dividends are re-invested by using the cash to buy more stocks, then the growth rate of the shareholder's wealth will be the total return 'r'. Question 732 real and nominal returns and cash flows, inflation, income and capital returns An investor bought a bond for $100 (at t=0) and one year later it paid its annual coupon of $1 (at t=1). Just after the coupon was paid, the bond price was $100.50 (at t=1). Inflation over the past year (from t=0 to t=1) was 3% pa, given as an effective annual rate. Which of the following statements is NOT correct? The bond investment produced a: (a) Nominal income return of 1% pa ##\left( =\dfrac{1}{100} \right)##. (b) Nominal capital return of 0.5% pa ##\left( =\dfrac{100.5-100}{100} \right)##. (c) Nominal total return of 1.5% pa ##\left( =\dfrac{100.5-100+1}{100} \right)##. (d) Real income return of 0.9708738% pa##\left( =\dfrac{ 1/(1+0.03)^1}{100} \right)##. (e) Real capital return of 0.4854369% pa ##\left( =\dfrac{ (100.5-100)/(1+0.03)^1}{100} \right) ##. Question 733 DDM, income and capital returns A share's current price is $60. It's expected to pay a dividend of $1.50 in one year. The growth rate of the dividend is 0.5% pa and the stock's required total return is 3% pa. 
The stock's price can be modeled using the dividend discount model (DDM): ##P_0=\dfrac{C_1}{r-g}## Which of the following methods is NOT equal to the stock's expected price in one year and six months (t=1.5 years)? Note that the symbolic formulas shown in each line below do equal the formulas with numbers. The formula is just repeated with symbols and then numbers in case it helps you to identify the incorrect statement more quickly. (a) ##P_{1.5}=P_0 (1+g)^1 (1+r)^{0.5}=60(1+0.005)^1 (1+0.03)^{0.5}## (b) ##P_{1.5}=(P_0 (1+r)^1-C_1 ) (1+r)^{0.5}=(60(1+0.03)^1-1.5) (1+0.03)^{0.5}## (c) ##P_{1.5}=\dfrac{C_1}{r-g} (1+r)^1 (1+g)^{0.5}=\dfrac{1.5}{0.03-0.005} (1+0.03)^1 (1+0.005)^{0.5}## (d) ##P_{1.5}=\dfrac{C_1 (1+g)^1}{r-g} (1+r)^{0.5}=\dfrac{1.5(1+0.005)^1}{0.03-0.005} (1+0.03)^{0.5}## (e) ##P_{1.5}=\dfrac{C_1 (1+g)^2}{r-g}/(1+r)^{0.5} +C_1 (1+g)^1/(1+r)^{0.5} =\dfrac{1.5(1+0.005)^2}{0.03-0.005}/(1+0.03)^{0.5} +1.5(1+0.005)^1/(1+0.03)^{0.5} ## Question 734 real and nominal returns and cash flows, inflation, DDM, no explanation An equities analyst is using the dividend discount model to price a company's shares. The company operates domestically and has no plans to expand overseas. It is part of a mature industry with stable positive growth prospects. The analyst has estimated the real required return (r) of the stock and the value of the dividend that the stock just paid a moment before ##(C_\text{0 before})##. What is the highest perpetual real growth rate of dividends (g) that can be justified? Select the most correct statement from the following choices. The highest perpetual real expected growth rate of dividends that can be justified is the country's expected: (a) Inflation rate. (b) Nominal GDP growth rate. (c) Real GDP growth rate. (d) Nominal total return on the share market index. (e) Real total return on the share market index. Question 476 income and capital returns, idiom The saying "buy low, sell high" suggests that investors should make a: (a) Positive income return. (b) Positive capital return. (c) Negative income return. (d) Negative capital return. (e) Positive total return. Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares? An asset's total expected return over the next year is given by: ###r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0} ### Where ##p_0## is the current price, ##c_1## is the expected income in one year and ##p_1## is the expected price in one year. The total return can be split into the income return and the capital return. Which of the following is the expected capital return? (a) ##c_1## (b) ##p_1-p_0## (c) ##\dfrac{c_1}{p_0} ## (d) ##\dfrac{p_1}{p_0} -1## (e) ##\dfrac{p_1}{p_0} ## A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1). Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: ##r_\text{total}## , ##r_\text{capital}## , ##r_\text{dividend}##. (a) -0.1, -0.3, 0.2. (b) -0.1, 0.1, -0.2. (c) 0.1, -0.1, 0.2. (d) 0.1, 0.2, -0.1. (e) 0.2, 0.1, -0.1. Question 404 income and capital returns, real estate One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. 
Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on. (a) 3.1029% (b) 3.3201% (c) 3.7235% (d) 3.9841% (e) 7% Question 542 price gains and returns over time, IRR, NPV, income and capital returns, effective return For an asset price to double every 10 years, what must be the expected future capital return, given as an effective annual rate? (b) 0.116123 (c) 0.082037 (d) 0.071773 (e) 0.06054 Question 278 inflation, real and nominal returns and cash flows Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account? Question 353 income and capital returns, inflation, real and nominal returns and cash flows, real estate A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order. (a) 3.9216%, 2.9412%, 0.9804%. (b) 3.9216%, 0.9804%, 2.9412%. (c) 3.9216%, 0.9804%, 0.9804%. (d) 1.9804%, 1.0000%, 0.9804%. (e) 1.9608%, 0.9804%, 0.9804%. Question 295 inflation, real and nominal returns and cash flows, NPV When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation: (I) Discount nominal cash flows by nominal discount rates. (II) Discount nominal cash flows by real discount rates. (III) Discount real cash flows by nominal discount rates. (IV) Discount real cash flows by real discount rates. Which of the above statements is or are correct? (a) I only. (b) III only. (c) IV only. (d) I and IV only. (e) II and III only. Question 526 real and nominal returns and cash flows, inflation, no explanation How can a nominal cash flow be precisely converted into a real cash flow? (a) ##C_\text{real, t}=C_\text{nominal,t}.(1+r_\text{inflation})^t## (b) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{(1+r_\text{inflation})^t} ## (c) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{r_\text{inflation}} ## (d) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation} ## (e) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation}.t## You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct? (a) The nominal cash flow of $100 in 5 years is equivalent to a real cash flow of $86.2609 in 5 years. This means that $86.2609 will buy the same amount of goods and services now as $100 will buy in 5 years. (b) The real discount rate of 10% pa is equivalent to a nominal discount rate of 13.3333% pa. (c) The nominal price of goods and services will increase by 3% every year. (d) The real price of goods and services will increase by 3% every year. (e) The present value of your payment will increase by the nominal discount rate every year. On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month.
So the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change. The answers are given in the same order, the amount of money under his bed in 60 years, and the real value of that money in today's prices. (a) $21,600, $95,035.46 (b) $21,600, $49,515.44 (c) $21,600, $4,909.33 (d) $21,600, $2,557.86 (e) $11,254.05, $2,557.86 Question 221 credit risk You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns? (a) Junior debt is the safest. Preference shares will have the highest returns. (b) Preference shares are the safest. Ordinary shares will have the highest returns. (c) Senior debt is the safest. Ordinary shares will have the highest returns. (d) Junior debt is the safest. Ordinary shares will have the highest returns. (e) Senior debt is the safest. Junior debt will have the highest returns. Question 466 limited liability, business structure Which business structure or structures have the advantage of limited liability for equity investors? (a) Sole traders. (b) Partnerships. (c) Corporations. (d) All of the above. (e) None of the above Question 531 bankruptcy or insolvency, capital structure, risk, limited liability Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately. (a) Alice has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $10,000 and liabilities of $3,000. (b) Billy has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a corporate business with assets worth $3,000 and liabilities of $10,000. (c) Carla has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a corporate business with assets worth $10,000 and liabilities of $3,000. (d) Darren has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $3,000 and liabilities of $10,000. (e) Ernie has $1,000 cash, lent $3,000 to his friend, and doesn't have any personal debt or own any businesses. Question 467 book and market values Which of the following statements about book and market equity is NOT correct? (a) The market value of equity of a listed company's common stock is equal to the number of common shares multiplied by the share price. (b) The book value of equity is the sum of contributed equity, retained profits and reserves. (c) A company's book value of equity is recorded in its income statement, also known as the 'profit and loss' or the 'statement of financial performance'. (d) A new company's market value of equity equals its book value of equity the moment that its shares are first sold. From then on, the market value changes continuously but the book value which is recorded at historical cost tends to only change due to retained profits. (e) To buy all of the firm's shares, generally a price close to the market value of equity will have to be paid.
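Several of the next questions come down to the arithmetic of market capitalisation (shares on issue times the share price) versus book equity (the sum of the balance-sheet equity accounts). Below is a minimal sketch with made-up figures; they are not the blanked-out CBA or MSFT numbers from the screenshots.

# Minimal sketch with made-up figures (not the actual CBA or MSFT numbers).
shares_on_issue = 1_620_000_000      # hypothetical number of ordinary shares
share_price = 82.74                  # hypothetical market price per share
market_cap = shares_on_issue * share_price   # market capitalisation of equity

contributed_equity = 27_000_000_000  # hypothetical balance-sheet figures
retained_profits = 16_000_000_000
reserves = 2_000_000_000
book_equity = contributed_equity + retained_profits + reserves  # book value of equity

print(f"market capitalisation = ${market_cap / 1e9:.2f} billion")
print(f"book value of equity  = ${book_equity / 1e9:.2f} billion")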
Question 473 market capitalisation of equity The below screenshot of Commonwealth Bank of Australia's (CBA) details were taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity? (a) $431.18 billion (b) $429 billion (c) $134.07 billion (d) $8.44 billion (e) $3.21 billion Question 445 financing decision, corporate financial decision theory The financing decision primarily affects which part of a business? (a) Assets. (b) Liabilities and owner's equity. (c) Current assets and current liabilities. (d) Dividends and buy backs. (e) Net income, also known as earnings or net profit after tax. Question 443 corporate financial decision theory, investment decision, financing decision, working capital decision, payout policy Business people make lots of important decisions. Which of the following is the most important long term decision? (a) Investment decision. (b) Financing decision. (c) Working capital decision. (d) Payout policy decision. (e) Capital or labour decision. Question 59 NPV The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project? Project Cash Flows Time (yrs) Cash flow ($) (a) -100 (d) 21 Question 182 NPV, IRR, pay back period A project's NPV is positive. Select the most correct statement: (a) The project should be rejected. (b) The project's IRR is more than its required return. (c) The project's IRR is less than its required return. (d) The project's IRR is equal to its required return. (e) The project will never pay itself off, assuming that the discount rate is positive. For an asset price to triple every 5 years, what must be the expected future capital return, given as an effective annual rate? (e) 0.219755 You're considering a business project which costs $11m now and is expected to pay a single cash flow of $11m in one year. So you pay $11m now, then one year later you receive $11m. Assume that the initial $11m cost is funded using your firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about the net present value (NPV), internal rate of return (IRR) and payback period is NOT correct? (a) The NPV is negative $1m. (b) The IRR is 0% pa, less than the 10% cost of capital. (c) The payback period is one year. (d) The project should be rejected. (e) If the project is accepted now then the market value of the firm's assets will fall by $11m. Question 43 pay back period A project to build a toll road will take 3 years to complete, costing three payments of $50 million, paid at the start of each year (at times 0, 1, and 2). After completion, the toll road will yield a constant $10 million at the end of each year forever with no costs. So the first payment will be at t=4. The required return of the project is 10% pa given as an effective nominal rate. All cash flows are nominal. What is the payback period? (a) Negative since the NPV is negative. (b) Zero since the project's internal rate of return is less than the required return. (c) 15 years. (d) 18 years. (e) Infinite, since the project will never pay itself off. A firm is considering a business project which costs $10m now and is expected to pay a single cash flow of $12.1m in two years.
Assume that the initial $10m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct? (a) The NPV is zero. (b) The IRR is 10% pa, equal to the 10% cost of capital. (c) The payback period is two years assuming that the whole $12.1m cash flow occurs at t=2, or 1.826 years if the $12.1m cash flow is paid smoothly over the second year. (d) The project could be accepted or rejected, the owners would be indifferent. (e) If the project is accepted then the market value of the firm's assets will increase by $2.1m more than it would otherwise if the project was rejected. Which of the following equations is NOT equal to the total return of an asset? Let ##p_0## be the current price, ##p_1## the expected price in one year and ##c_1## the expected income in one year. (a) ##r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0} ## (b) ##r_\text{total} = \dfrac{c_1+p_1}{p_0} - 1## (c) ##r_\text{total} = \dfrac{c_1}{p_0} + \dfrac{p_1-p_0}{p_0}## (d) ##r_\text{total} = \dfrac{c_1}{p_0} + \dfrac{p_1}{p_0} ## (e) ##r_\text{total} = \dfrac{c_1}{p_0} + \dfrac{p_1}{p_0} - 1## Question 120 credit risk, payout policy A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year? (a) Preferred stock only. (b) The senior and junior bonds only. (c) Common stock only. (d) The senior and junior bonds and the preferred stock. (e) No payments on any security is required since the firm made a loss. The below screenshot of Microsoft's (MSFT) details were taken from the Google Finance website on 28 Nov 2014. Some information has been deliberately blanked out. What was MSFT's market capitalisation of equity? (a) $395.11 million (b) $21.01 billion (d) $393.95 billion (e) $1.02935 trillion Question 524 risk, expected and historical returns, bankruptcy or insolvency, capital structure, corporate financial decision theory, limited liability Which of the following statements is NOT correct? (a) Stocks are higher risk investments than debt. (b) Stocks have higher expected returns than debt. (c) Firms' past realised stock returns are always higher than their past realised debt returns. (d) In the event of bankruptcy, stock holders are paid after debt holders are fully paid. (e) Stock holders have a residual claim on the firm's assets. Apples and oranges currently cost $1 each. Inflation is 5% pa, and apples and oranges are equally affected by this inflation rate. Note that when payments are not specified as real, as in this question, they're conventionally assumed to be nominal. (a) A payment of $105 now has a real value of 105 apples or oranges now. (b) A payment of $1 now has a real value of one apple or orange now. (c) A payment of $105 in one year has a real value of 100 apples or oranges in one year. (d) A payment of $105 in one year is worth $105 in real terms today. (e) A payment of one apple now has a real value of one orange now. Which of the following statements about inflation is NOT correct? 
(a) Real returns approximately equal nominal returns less the inflation rate. (b) Constant prices are the same as real prices. (c) Current prices are the same as nominal prices. (d) If your nominal wage grows by inflation, then your real wage won't change because you will be able to buy the same amount of goods and services as before. (e) Interest rates advertised at the bank are usually quoted in real terms. What is the present value of a nominal payment of $1,000 in 4 years? The nominal discount rate is 8% pa and the inflation rate is 2% pa. (a) $795.62 (b) $792.0937 (c) $735.0299 (d) $731.7721 (e) $683.0135 Question 522 income and capital returns, real and nominal returns and cash flows, inflation, real estate A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 2.5% pa. Inflation is expected to be 2.5% pa. All of the above are effective nominal rates and investors believe that they will stay the same in perpetuity. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order. (a) 3.4146%, 3.4146%, 0%. (c) 3.4146%, 0%, 3.4146%. (d) 0.9878%, 0.5%, 0.4878%. Question 523 income and capital returns, real and nominal returns and cash flows, inflation A low-growth mature stock has an expected nominal total return of 6% pa and nominal capital return of 2% pa. Inflation is expected to be 3% pa. What are the stock's expected real total, capital and income returns? (a) 2.9126%, 3.8835%, -0.9709%. (b) 2.9126%, -0.9709%, 3.8835%. (c) 2.9126%, -0.9709%, 0.9709%. (d) -0.0291%, -1%, 0.9709%. (e) 0%, -0.9709%, 0.9709%. Question 739 real and nominal returns and cash flows, inflation There are a number of different formulas involving real and nominal returns and cash flows. Which one of the following formulas is NOT correct? All returns are effective annual rates. Note that the symbol ##\approx## means 'approximately equal to'. (a) ##r_\text{real} \approx r_\text{nominal} - r_\text{inflation}## (b) ##(1+r_\text{real})^t = \left( \dfrac{1+r_\text{nominal}}{1+r_\text{inflation}} \right)^t ## (c) ##V_\text{t,real} = V_\text{t,nominal}.(1+r_\text{inflation})^t## (d) ##V_\text{0} = \dfrac{V_\text{t,nominal}}{(1+r_\text{nominal})^t}## (e) ##V_\text{0} = \dfrac{V_\text{t,real}}{(1+r_\text{real})^t}## Question 216 DDM A stock just paid its annual dividend of $9. The share price is $60. The required return of the stock is 10% pa as an effective annual rate. What is the implied growth rate of the dividend per year? (a) -0.8565 (b) -0.0500 (c) -0.0435 (d) 0.0000 (e) 0.1500 Question 498 NPV, Annuity, perpetuity with growth, multi stage growth model A business project is expected to cost $100 now (t=0), then pay $10 at the end of the third (t=3), fourth, fifth and sixth years, and then grow by 5% pa every year forever. So the cash flow will be $10.5 at the end of the seventh year (t=7), then $11.025 at the end of the eighth year (t=8) and so on perpetually. The total required return is 10% pa. Which of the following formulas will NOT give the correct net present value of the project?
(a) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^3} \right)}{(1+0.1)^2} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## (b) ##-100+ \dfrac{10}{(1+0.1)^3} +\dfrac{10}{(1+0.1)^4} +\dfrac{10}{(1+0.1)^5} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## (c) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^4} \right)}{(1+0.1)^2} +\dfrac{\left(\dfrac{10.5}{0.1-0.05}\right)}{(1+0.1)^6} ## (d) ##-100+ \dfrac{10}{(1+0.1)^3} +\dfrac{10}{(1+0.1)^4} +\dfrac{10}{(1+0.1)^5} +\dfrac{10}{(1+0.1)^6} +\dfrac{\left(\dfrac{10.5}{0.1-0.05}\right)}{(1+0.1)^6} ## (e) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^3} \right)}{(1+0.1)^3} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## Question 50 DDM, stock pricing, inflation, real and nominal returns and cash flows Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share? (a) $57.1734 (b) $28.0394 (c) $27.7723 (d) $27.5000 (e) $27.2330 Question 250 NPV, Loan, arbitrage table Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year. You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates. Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs. The Net Present Value (NPV) of lending to your neighbour is $9.09. Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future. (a) Borrow $109.09 from the bank and lend $100 of it to your neighbour now. (b) Borrow $100 from the bank and lend it to your neighbour now. (c) Borrow $209.09 from the bank and lend $100 to your neighbour now. (d) Borrow $120 from the bank and lend $100 of it to your neighbour now. (e) Borrow $90.91 from the bank and lend it to your neighbour now. Question 126 IRR What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates. (a) 0.21 (b) 0.105 (c) 0.1111 (e) 0 Question 37 IRR If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be: (a) Positive infinity (##+\infty##) (b) Zero (0). (c) Less than the project's required return. (d) More than the project's required return. (e) Equal to the project's required return. The required return of a project is 10%, given as an effective annual rate. What is the payback period of the project in years? Assume that the cash flows shown in the table are received smoothly over the year. So the $121 at time 2 is actually earned smoothly from t=1 to t=2. 
(a) 2.7355 (b) 2.3596 Question 190 pay back period A project has the following cash flows: Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2. (a) -0.80 (c) 1.20 (d) 1.80 (e) 2.20 Question 500 NPV, IRR The below graph shows a project's net present value (NPV) against its annual discount rate. For what discount rate or range of discount rates would you accept and commence the project? All answer choices are given as approximations from reading off the graph. (a) From 0 to 10% pa. (b) From 0 to 5% pa. (c) At 5.5% pa. (d) From 6 to 20% pa. (e) From 0 to 20% pa. (a) When the project's discount rate is 18% pa, the NPV is approximately -$30m. (b) The payback period is infinite, the project never pays itself off. (c) The addition of the project's cash flows, ignoring the time value of money, is approximately $20m. (d) The project's IRR is approximately 5.5% pa. (e) As the discount rate rises, the NPV falls. Question 251 NPV You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end (t=1). How much can you consume at each time? (a) $57,619.0476 (b) $55,000 (c) $53,809.5238 (d) $52,380.9524 (e) $50,000 Question 502 NPV, IRR, mutually exclusive projects An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of returns (IRR's).
Mutually Exclusive Projects
Project | Cost now ($) | Sale price in one year ($) | IRR (% pa)
Petrol station | 9,000,000 | 11,000,000 | 22.22
Car wash | 800,000 | 1,100,000 | 37.50
Car park | 70,000 | 110,000 | 57.14
Which project should the investor accept? (a) Petrol station. (b) Car wash. (c) Car park. (d) None of the projects. (e) All of the projects. Question 579 price gains and returns over time, time calculation, effective rate How many years will it take for an asset's price to double if the price grows by 10% pa? (a) 1.8182 years (b) 3.3219 years (c) 7.2725 years (d) 11.5267 years (e) 13.7504 years How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to $4) if the price grows by 15% pa? (d) 9.919 years Question 288 Annuity There are many ways to write the ordinary annuity formula. Which of the following is NOT equal to the ordinary annuity formula? (a) ##V_0 = \dfrac{C_1}{r} \left(1-\dfrac{1}{(1+r)^T} \right) ## (b) ##V_0 = \dfrac{C_1 \left(1-\dfrac{1}{(1+r)^T} \right)}{r}## (c) ##V_0 = C_1\dfrac{\left(1-(1+r)^{-T} \right)}{r}## (d) ##V_0 = C_1 \left(1-(1+r)^{-T} \right) r^{-1}## (e) ##V_0 = C_1 \left(r^{-1}-(1+r)^{-T-1} \right)## Question 58 NPV, inflation, real and nominal returns and cash flows, Annuity A project to build a toll bridge will take two years to complete, costing three payments of $100 million at the start of each year for the next three years, that is at t=0, 1 and 2.
After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years. So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government. The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes. The Net Present Value is: (a) -$112,496,484 (b) -$32,260,693 (c) -$19,645,987 (d) $5,222,533 (e) $200,000,000 The first payment of a constant perpetual annual cash flow is received at time 5. Let this cash flow be ##C_5## and the required return be ##r##. So there will be equal annual cash flows at time 5, 6, 7 and so on forever, and all of the cash flows will be equal so ##C_5 = C_6 = C_7 = ...## When the perpetuity formula is used to value this stream of cash flows, it will give a value (V) at time: (a) 0, so ##V_0=\dfrac{C_5}{r}## (b) 1, so ##V_1=\dfrac{C_5}{r}## (c) 4, so ##V_4=\dfrac{C_5}{r}## (d) 5, so ##V_5=\dfrac{C_5}{r}## (e) 6, so ##V_6=\dfrac{C_5}{r}## Question 352 income and capital returns, DDM, real estate Two years ago Fred bought a house for $300,000. Now it's worth $500,000, based on recent similar sales in the area. Fred's residential property has an expected total return of 8% pa. The future value of 12 months of rental payments one year ahead is $25,027.77. What is the expected annual growth rate of the rental payments? In other words, by what percentage increase will Fred have to raise the monthly rent by each year to sustain the expected annual total return of 8%? (a) -0.3426% (b) 0% (e) 3.3652% The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation. ###p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}### Which expression is NOT equal to the expected capital return? (a) ## g_\text{eff} ## (b) ## \dfrac{p_1}{p_0} -1 ## (c) ## \dfrac{d_5}{d_4} -1 ## (d) ## \dfrac{d_1}{p_0} - 1 ## (e) ## \dfrac{p_1-p_0}{p_0} ## Question 358 PE ratio, Multiples valuation Estimate the Chinese bank ICBC's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that the renminbi (RMB) is the Chinese currency, also known as the yuan (CNY). The 4 major Chinese banks ICBC, China Construction Bank (CCB), Bank of China (BOC) and Agricultural Bank of China (ABC) are comparable companies; ICBC 's historical earnings per share (EPS) is RMB 0.74; CCB's backward-looking PE ratio is 4.59; BOC 's backward-looking PE ratio is 4.78; ABC's backward-looking PE ratio is also 4.78; Note: Figures sourced from Google Finance on 25 March 2014. Share prices are from the Shanghai stock exchange. (a) RMB 6.4595 (b) RMB 6.3739 (c) RMB 6.3311 (d) RMB 3.4903 (e) RMB 3.3966 Private equity firms are known to buy medium sized private companies operating in the same industry, merge them together into a larger company, and then sell it off in a public float (initial public offering, IPO). If medium-sized private companies trade at PE ratios of 5 and larger listed companies trade at PE ratios of 15, what return can be achieved from this strategy? Assume that: The medium-sized companies can be bought, merged and sold in an IPO instantaneously. There are no costs of finding, valuing, merging and restructuring the medium sized companies. Also, there is no competition to buy the medium-sized companies from other private equity firms. 
The large merged firm's earnings are the sum of the medium firms' earnings. The only reason for the difference in medium and large firm's PE ratios is due to the illiquidity of the medium firms' shares. Return is defined as: ##r_{0→1} = (p_1-p_0+c_1)/p_0## , where time zero is just before the merger and time one is just after. (a) 300%. (b) 200% (c) 33.33% (d) 30% (e) 20% Question 333 DDM, time calculation When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever. Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently $1,000 billion (t=0). In approximately how many years will the company's total dividends be as large as the country's GDP? (a) 1,443 years (b) 1,199 years (c) 955 years (d) 674 years (e) 148 years Question 548 equivalent annual cash flow, time calculation, no explanation An Apple iPhone 6 smart phone can be bought now for $999. An Android Kogan Agora 4G+ smart phone can be bought now for $240. If the Kogan phone lasts for one year, approximately how long must the Apple phone last for to have the same equivalent annual cost? Assume that both phones have equivalent features besides their lifetimes, that both are worthless once they've outlasted their life, the discount rate is 10% pa given as an effective annual rate, and there are no extra costs or benefits from either phone. (a) 11 years (b) 9 years (c) 7 years (d) 5 years (e) 3 years Question 505 equivalent annual cash flow A low-quality second-hand car can be bought now for $1,000 and will last for 1 year before it will be scrapped for nothing. A high-quality second-hand car can be bought now for $4,900 and it will last for 5 years before it will be scrapped for nothing. What is the equivalent annual cost of each car? Assume a discount rate of 10% pa, given as an effective annual rate. The answer choices are given as the equivalent annual cost of the low-quality car and then the high quality car. (a) $100, $490 (b) $909.09, $608.5 (c) $1,000, $980 (d) $1,000, $1578.3 (e) $1,100, $1,292.61 You're advising your superstar client 40-cent who is weighing up buying a private jet or a luxury yacht. 40-cent is just as happy with either, but he wants to go with the more cost-effective option. These are the cash flows of the two options: The private jet can be bought for $6m now, which will cost $12,000 per month in fuel, piloting and airport costs, payable at the end of each month. The jet will last for 12 years. Or the luxury yacht can be bought for $4m now, which will cost $20,000 per month in fuel, crew and berthing costs, payable at the end of each month. The yacht will last for 20 years. What's unusual about 40-cent is that he is so famous that he will actually be able to sell his jet or yacht for the same price as it was bought since the next generation of superstar musicians will buy it from him as a status symbol. Bank interest rates are 10% pa, given as an effective annual rate. You can assume that 40-cent will live for another 60 years and that when the jet or yacht's life is at an end, he will buy a new one with the same details as above. Would you advise 40-cent to buy the jet or the yacht?
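One way to frame the comparison is by equivalent monthly cost per ownership cycle: because the resale price equals the purchase price, the owner only bears the financing cost of the purchase plus the running costs. This is a sketch under my own framing; the helper function name is mine, and it uses the effective monthly rate quoted in the note that follows.

```python
# Equivalent monthly cost of each ownership cycle (sketch; my own framing).
i = (1 + 0.10) ** (1 / 12) - 1          # effective monthly rate ≈ 0.00797414

def equiv_monthly_cost(price, running_cost, years):
    """Price paid now and fully recovered at resale; running cost paid monthly in arrears."""
    n = years * 12
    af = (1 - (1 + i) ** -n) / i        # ordinary annuity factor over the asset's life
    pv_cost = price * (1 - (1 + i) ** -n) + running_cost * af
    return pv_cost / af

print(round(equiv_monthly_cost(6_000_000, 12_000, 12), 2))   # jet   ≈ 59,846 per month
print(round(equiv_monthly_cost(4_000_000, 20_000, 20), 2))   # yacht ≈ 51,896 per month
```

Since the asset is replaced indefinitely on the same terms, the option with the lower equivalent monthly cost is the more cost-effective one under this framing.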
Note that the effective monthly rate is ##r_\text{eff monthly}=(1+0.1)^{1/12}-1=0.00797414## You just bought a nice dress which you plan to wear once per month on nights out. You bought it a moment ago for $600 (at t=0). In your experience, dresses used once per month last for 6 years. Your younger sister is a student with no money and wants to borrow your dress once a month when she hits the town. With the increased use, your dress will only last for another 3 years rather than 6. What is the present value of the cost of letting your sister use your current dress for the next 3 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new dress when your current one wears out; your sister will only use the current dress, not the next one that you will buy; and the price of a new dress never changes. (b) 283.1403 (c) 282.2849 (d) 257.4003 (e) 225.3944 Question 2 NPV, Annuity Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or reject Katya's deal? Your friend overheard that you need some cash and asks if you would like to borrow some money. She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive. What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate. (a) -$1,000 (d) $1,040.6721 (e) $1,400.611 Some countries' interest rates are so low that they're zero. If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa? (a) $0 (b) $10 (c) $50 (d) Positive infinity (e) Priceless Question 479 perpetuity with growth, DDM, NPV Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation? (a) ##V_0=\dfrac{C_t}{(1+r)^t} ## (b) ##V_0=\dfrac{C_1}{r}.\left(1-\dfrac{1}{(1+r)^T} \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t}{(1+r)^t} \right) ## (c) ##V_0=\dfrac{C_1}{r-g}.\left(1-\left(\dfrac{1+g}{1+r}\right)^T \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## (d) ##V_0=\dfrac{C_1}{r} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t}{(1+r)^t} \right) ## (e) ##V_0=\dfrac{C_1}{r-g} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## A stock is expected to pay its next dividend of $1 in one year. Future annual dividends are expected to grow by 2% pa. So the first dividend of $1 will be in one year, the year after that $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price. (a) $10 (b) $12.254902 (c) $12.5 (d) $12.75 (e) $13.75 A stock just paid a dividend of $1.
Future annual dividends are expected to grow by 2% pa. The next dividend of $1.02 (=1*(1+0.02)^1) will be in one year, and the year after that the dividend will be $1.0404 (=1*(1+0.02)^2), and so on forever. Question 4 DDM For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline? For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be ##100(1+0.05)^1=$105.00##, and the year after it will be ##100(1+0.05)^2=110.25## and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline? The perpetuity with growth formula, also known as the dividend discount model (DDM) or Gordon growth model, is appropriate for valuing a company's shares. ##P_0## is the current share price, ##C_1## is next year's expected dividend, ##r## is the total required return and ##g## is the expected growth rate of the dividend. The below graph shows the expected future price path of the company's shares. Which of the following statements about the graph is NOT correct? (a) Between points A and B, the share price is expected to grow by ##r##. (b) Between points B and C, the share price is expected to instantaneously fall by ##C_1##. (c) Between points A and C, the share price is expected to grow by ##g##. (d) Between points B and D, the share price is expected to grow by ##g##. (e) Between points D and E, the share price is expected to instantaneously fall by ##C_1.(1+r)^1##. Question 28 DDM, income and capital returns ### P_{0} = \frac{C_1}{r_{\text{eff}} - g_{\text{eff}}} ### What would you call the expression ## C_1/P_0 ##? (a) The expected total return of the stock. (b) The expected income return of the stock. (c) The expected capital return of the stock. (d) The expected growth rate of the dividend. (e) The expected growth rate of the stock price. The following is the Dividend Discount Model (DDM) used to price stocks: If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected: (a) Dividend growth rate is equal to the long term expected growth rate of the stock price. (b) Dividend growth rate is equal to the long term expected capital return of the stock. (c) Dividend growth rate is equal to the long term expected dividend yield. (d) Total return of the stock is equal to its long term required return. (e) Total return of the stock is equal to the company's long term cost of equity. Question 289 DDM, expected and historical returns, ROE In the dividend discount model: ###P_0 = \dfrac{C_1}{r-g}### The return ##r## is supposed to be the: (a) Expected future total return of the market price of equity. (b) Expected future total return of the book price of equity. (c) Actual historical total return on the market price of equity. (d) Actual historical total return on the book price of equity. (e) Actual historical return on equity (ROE) defined as (Net Income / Owners Equity). Question 36 DDM, perpetuity with growth A stock pays annual dividends which are expected to continue forever. It just paid a dividend of $10. The growth rate in the dividend is 2% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates.
Using the dividend discount model, what will be the share price? (a) $127.5 (b) $125 (c) $102 (e) $100 A stock is expected to pay the following dividends: Cash Flows of a Stock Time (yrs) 0 1 2 3 4 ... Dividend ($) 0.00 1.00 1.05 1.10 1.15 ... After year 4, the annual dividend will grow in perpetuity at 5% pa, so; the dividend at t=5 will be $1.15(1+0.05), the dividend at t=6 will be $1.15(1+0.05)^2, and so on. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)? (e) $3.6341 ### p_0 = \frac{d_1}{r - g} ### Which expression is NOT equal to the expected dividend yield? (a) ## r-g ## (b) ## \dfrac{d_1}{p_0} ## (c) ## \dfrac{d_5}{p_4} ## (d) ## \dfrac{d_5(1+g)^2}{p_6} ## (e) ## \dfrac{d_3}{p_0(1+r)^2} ## A fairly valued share's current price is $4 and it has a total required return of 30%. Dividends are paid annually and next year's dividend is expected to be $1. After that, dividends are expected to grow by 5% pa in perpetuity. All rates are effective annual returns. What is the expected dividend income paid at the end of the second year (t=2) and what is the expected capital gain from just after the first dividend (t=1) to just after the second dividend (t=2)? The answers are given in the same order, the dividend and then the capital gain. (a) $1.3, $0.26 (b) $1.25, $0.25 (c) $1.1025, $0.2205 (d) $1.05, $0.21 (e) $1, $0.2 Question 488 income and capital returns, payout policy, payout ratio, DDM Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts. BigDiv pays large dividends and ZeroDiv doesn't pay any dividends. Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk. Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV. All things remaining equal, which of the following statements is NOT correct? (a) BigDiv is expected to have a lower capital return than ZeroDiv in the future. (b) BigDiv is expected to have a lower total return than ZeroDiv in the future. (c) ZeroDiv's assets are likely to grow faster than BigDiv's. (d) ZeroDiv's share price will increase faster than BigDiv's. (e) BigDiv currently has a higher payout ratio than ZeroDiv. Question 217 NPV, DDM, multi stage growth model A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2 ,3,...10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now? Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only: The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies; JP Morgan Chase's historical earnings per share (EPS) is $4.37; Citi Group's share price is $50.05 and historical EPS is $4.26; Wells Fargo's share price is $48.98 and historical EPS is $3.89. Note: Figures sourced from Google Finance on 24 March 2014. 
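Both the JP Morgan question above and the Microsoft question that follows use the same comparable-firm recipe: average the peers' backward-looking PE ratios and multiply by the target's historical EPS. A minimal sketch for the JP Morgan case; the equal-weighted average of the comparables is an assumption about the intended method.

```python
# PE-multiples estimate of JP Morgan's share price (sketch).
comparables = {"C": (50.05, 4.26), "WFC": (48.98, 3.89)}   # (share price, historical EPS)
jpm_eps = 4.37

avg_pe = sum(price / eps for price, eps in comparables.values()) / len(comparables)
print(round(avg_pe * jpm_eps, 2))    # ≈ 53.18
```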
Question 341 Multiples valuation, PE ratio Estimate Microsoft's (MSFT) share price using a price earnings (PE) multiples approach with the following assumptions and figures only: Apple, Google and Microsoft are comparable companies, Apple's (AAPL) share price is $526.24 and historical EPS is $40.32. Google's (GOOG) share price is $1,215.65 and historical EPS is $36.23. Microsoft's (MSFT) historical earnings per share (EPS) is $2.71. Source: Google Finance 28 Feb 2014. (a) $63.15 (b) $61.67 (c) $30.83 (e) $8.60 Question 180 equivalent annual cash flow, inflation, real and nominal returns and cash flows Details of two different types of light bulbs are given below: Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year. Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year. The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order. (a) 1.4873, 6.7857 (b) 1.6525, 6.7857 (c) 2.1415, 7.1250 (d) 14.8725, 6.7857 (e) 2.0924, 7.1250 Carlos and Edwin are brothers and they both love Holden Commodore cars. Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second hand car market for $20,000. Carlos never has to bother with paying for repairs since his cars are brand new. Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for $2,000 and buys another 4-year old second hand car, and so on. Every time Edwin buys a second hand 4 year old car he immediately has to spend $1,000 on repairs, and then $1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals. (a) $13,848.99 (b) $13,106.61 (c) $8,547.50 (d) $4,238.08 (e) -$103.85 You own some nice shoes which you use once per week on date nights. You bought them 2 years ago for $500. In your experience, shoes used once per week last for 6 years. So you expect yours to last for another 4 years. Your younger sister said that she wants to borrow your shoes once per week. With the increased use, your shoes will only last for another 2 years rather than 4. What is the present value of the cost of letting your sister use your current shoes for the next 2 years?
Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new pair of shoes when your current pair wears out and your sister will not use the new ones; your sister will only use your current shoes so she will only use it for the next 2 years; and the price of new shoes never changes. (a) $164.6662 An industrial chicken farmer grows chickens for their meat. Chickens: Cost $0.50 each to buy as chicks. They are bought on the day they're born, at t=0. Grow at a rate of $0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6). Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they're older and grow more slowly. Feed costs are $0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on. Can be slaughtered (killed for their meat) and sold at no cost at the end of the week. The price received for the chicken is their total value of meat (note that the chicken grows fast then slow, see above). The required return of the chicken farm is 0.5% given as an effective weekly rate. Ignore taxes and the fixed costs of the factory. Ignore the chicken's welfare and other environmental and ethical concerns. Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks. (a) $0.3651, $0.2374 (b) $0.3172, $0.3506 (d) $0.3050, $0.2142 (e) $0.0157, $0.0491 Question 372 debt terminology Which of the following statements is NOT correct? Borrowers: (a) Receive cash at the start and promise to pay cash in the future, as set out in the debt contract. (b) Are debtors. (c) Owe money. (d) Are funded by debt. (e) Buy debt. Which of the following statements is NOT correct? Lenders: (a) Are long debt. (b) Invest in debt. (c) Are owed money. (d) Provide debt funding. (e) Have debt liabilities. Question 581 APR, effective rate, effective rate conversion A home loan company advertises an interest rate of 6% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places. (a) The APR compounding monthly is 6.0000% per annum. (b) The effective monthly rate is 0.5000% per month. (c) The effective annual rate is 6.1678% per annum. (d) The effective 6 month rate is 3.0000% per six months. (e) The APR compounding semi-annually is 6.0755% per annum. A semi-annual coupon bond has a yield of 3% pa. Which of the following statements about the yield is NOT correct? All rates are given to four decimal places. (a) The APR compounding semi-annually is 3.0000% per annum. (b) The effective 6 month rate is 1.5000% per six months. (d) The effective monthly rate is 0.2500% per month. (e) The APR compounding monthly is 2.9814% per annum. Question 16 credit card, APR, effective rate A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: ### r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily} ### (a) 0.0072, 0.09, 0.0002. (b) 0.0139, 0.18, 0.0005. (c) 0.0139, 6.2876, 0.0055. (d) 0.015, 0.1956, 0.0005. (e) 0.015, 0.1956, 0.006. 
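The APR and effective-rate conversions asked for in the credit-card questions above (and in Question 131 below) follow one pattern: divide the APR by its compounding frequency to get the effective periodic rate, then compound up or de-compound down to whatever period is needed. A minimal sketch for the 18% pa APR compounding monthly:

```python
# Effective-rate conversions for an 18% pa APR compounding monthly (sketch).
apr = 0.18
eff_monthly = apr / 12                             # APR / compounding periods per year
eff_annual  = (1 + eff_monthly) ** 12 - 1          # compound the monthly rate up to a year
eff_daily   = (1 + eff_annual) ** (1 / 365) - 1    # de-compound the annual rate down to a day

print(round(eff_monthly, 4), round(eff_annual, 4), round(eff_daily, 6))
# 0.015 0.1956 0.00049  (the daily rate rounds to roughly 0.0005)
```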
Question 131 APR, effective rate Calculate the effective annual rates of the following three APR's: A credit card offering an interest rate of 18% pa, compounding monthly. A bond offering a yield of 6% pa, compounding semi-annually. An annual dividend-paying stock offering a return of 10% pa compounding annually. ##r_\text{credit card, eff yrly}##, ##r_\text{bond, eff yrly}##, ##r_\text{stock, eff yrly}## (a) 0.1956, 0.0609, 0.1. (b) 0.015, 0.09, 0.1. (d) 0.1956, 0.0617, 0.1047. (e) 6.2876, 0.1236, 0.1. Question 19 fully amortising loan, APR You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month). (b) 2,700 (c) 2,722.1 (d) 2,843.71 (e) 34,424.99 Question 134 fully amortising loan, APR You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (b) $1,600.00 (e) $23,247.65 You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 246,567.70, 93,351.63 (b) 246,567.70, 235,741.91 (c) 248,563.73, 96,346.75 (d) 248,563.73, 238,323.24 (e) 256,580.38, 245,314.97 How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 184,925.77, 164,313.82 (c) 186,422.80, 166,717.43 You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order. (a) $320,725.47, $284,977.19 (b) $310,704.66, $277,862.39 (c) $310,704.66, $197,354.23 (d) $308,209.62, $273,856.37 (e) $308,209.62, $192,529.73 Question 29 interest only loan You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month). Question 107 interest only loan You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 17,435.74 (b) 1,438.92 (c) 1,414.49 (e) 666.67 An 'interest rate' is the same thing as a 'coupon rate'. True or false? An 'interest rate' is the same thing as a 'yield'. True or false? You deposit cash into your bank account. Have you lent or borrowed your money? You deposit cash into your bank account. Have you bought or issued debt? You deposit cash into your bank account. Does the deposit account represent a debt or to you? You owe money. Are you a debtor or a creditor?
You are owed money. Are you a debtor or a creditor? You own a debt asset. Are you a debtor or a creditor? Which of the following statements is NOT correct? Bond investors: (a) Buy debt. (c) Have debt assets. (e) Are lenders. A credit card company advertises an interest rate of 18% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places. (a) The APR compounding monthly is 18.0000% per annum. (c) The effective annual rate is 19.5618% per annum. (d) The effective quarterly rate is 4.5678% per quarter. (e) The APR compounding quarterly is 13.7035% per annum. You deposit money into a bank. Which of the following statements is NOT correct? You: (a) Are a lender. (b) Issued debt. (c) Bought debt. (d) Are a debt holder. (e) Own a debt asset. Question 509 bond pricing Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid annually. So there's only one coupon per year, paid in arrears every year. Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid semi-annually. So there are two coupons per year, paid in arrears every six months. Question 616 idiom, debt terminology, bond pricing "Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices. Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to: (a) Buy at low yields, sell at high yields. (b) Buy at high yields, sell at low yields. (c) Buy at high yields, sell at high yields. (d) Buy at low yields, sell at low yields. (e) There is no preferable yield to buy or sell fixed-coupon debt. Question 23 bond pricing, premium par and discount bonds Bonds X and Y are issued by the same US company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X and Y's coupon rates are 8 and 12% pa respectively. Which of the following statements is true? (a) Bonds X and Y are premium bonds. (b) Bonds X and Y are discount bonds. (c) Bond X is a discount bond but bond Y is a premium bond. (d) Bond X is a premium bond but bond Y is a discount bond. (e) Bonds X and Y are par bonds. Question 48 IRR, NPV, bond pricing, premium par and discount bonds, market efficiency The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct? (a) The internal rate of return (IRR) of buying a fairly priced bond is equal to the bond's yield. (b) The Present Value of a fairly priced bond's coupons and face value is equal to its price. (c) If a fairly priced bond's required return rises, its price will fall. (d) Fairly priced premium bonds' yields are less than their coupon rates, prices are more than their face values, and the NPV of buying them is therefore positive. (e) The NPV of buying a fairly priced bond is zero. Question 63 bond pricing, NPV, market efficiency (a) The internal rate of return (IRR) of buying a bond is equal to the bond's yield. (c) If the required return of a bond falls, its price will fall.
(d) Fairly priced discount bonds' yield is more than the coupon rate, price is less than face value, and the NPV of buying them is zero. Question 178 bond pricing, premium par and discount bonds Which one of the following bonds is trading at a discount? (a) a ten-year bond with a $4000 face value whose yield to maturity is 6.0% and coupon rate is 6.5% paid semi-annually. (b) a 6-year bond with a principal of $40,000 and a price of $45,000. (c) a 15-year bond with a $10,000 face value whose yield to maturity is 8.0% and coupon rate is 10.0% paid semi-annually. (d) a two-year bond with a $50,000 face value whose yield to maturity is 5.2% and coupon rate is 5.2% paid semi-annually. (e) None of the above bonds are discount bonds. Question 620 bond pricing, income and capital returns Let the 'income return' of a bond be the coupon at the end of the period divided by the market price now at the start of the period ##(C_1/P_0)##. The expected income return of a premium fixed coupon bond is: (a) Less than its expected total return. (b) Equal to its expected total return. (c) More than its expected total return. (d) More than its coupon rate. (e) Equal to its coupon rate. Which one of the following bonds is trading at a premium? (a) a ten-year bond with a $4,000 face value whose yield to maturity is 6.0% and coupon rate is 5.9% paid semi-annually. (b) a fifteen-year bond with a $10,000 face value whose yield to maturity is 8.0% and coupon rate is 7.8% paid semi-annually. (c) a five-year bond with a $2,000 face value whose yield to maturity is 7.0% and coupon rate is 7.2% paid semi-annually. (e) None of the above bonds are premium bonds. An investor bought two fixed-coupon bonds issued by the same company, a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of $1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa. A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price. (a) The zero-coupon bond and the 7% semi-annual coupon bond were both discount bonds but now they are both premium bonds. (b) The zero-coupon bond and the 7% semi-annual coupon bond were both premium bonds but now they are both discount bonds. (c) When yields fell, the investor made a capital loss on both bonds. (d) When yields fell, the investor made a capital gain on both bonds. (e) When yields fell, the investor made a capital gain on the zero coupon bond but a loss on the 7% semi-annual coupon bond. A bond maturing in 10 years has a coupon rate of 4% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value of the bond is $100. What is its price? A three year bond has a fixed coupon rate of 12% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value is $100. What is its price? A 10 year bond has a face value of $100, a yield of 6% pa and a fixed coupon rate of 8% pa, paid semi-annually. What is its price? (d) $126.628 Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive. Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures. (a) A bullet loan has no interest payments but it does have a face value. Therefore it's a discount loan. 
(b) A fully amortising loan has interest payments but does not have a face value. Therefore it's a premium loan. (c) An interest only loan has interest payments and its price and face value are equal. Therefore it's a par loan. (d) A zero coupon bond has no coupon payments but it does have a face value. Therefore it's a premium bond. (e) A balloon loan has interest payments and its face value is more than its price. Therefore it's a discount loan. Question 254 time calculation, APR Your main expense is fuel for your car which costs $100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month). You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly. Interest rates are not expected to change. Assuming that you have no income, in how many months time will you not have enough money to fully refuel your car? (a) In 23 months (t=23 months). (b) In 24 months (t=24 months). (c) In 25 months (t=25 months). (d) In 26 months (t=26 months). (e) In 27 months (t=27 months). Question 32 time calculation, APR You really want to go on a back packing trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount? (a) -3.74 years (b) 1.81 years (c) 3.33 years (d) 3.61 years (e) 3.74 years Question 330 APR, effective rate, debt terminology Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) Effective rates compound once over their time period. So an effective monthly rate compounds once per month. (b) APR's compound more than once per year. So an APR compounding monthly compounds 12 times per year. The exception is an APR that compounds annually (once per year) which is the same thing as an effective annual rate. (c) To convert an effective rate to an APR, multiply the effective rate by the number of time periods in one year. So an effective monthly rate multiplied by 12 is equal to an APR compounding monthly. (d) To convert an APR compounding monthly to an effective monthly rate, divide the APR by the number of months in one year (12). (e) To convert an APR compounding monthly to an effective weekly rate, divide the APR by the number of weeks in one year (approximately 52). You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) $ 1,250.00 (b) $ 2,250.00 (c) $ 2,652.17 (d) $ 2,697.98 (e) $ 32,692.01 You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (e) $1,250.00 Question 204 time calculation, fully amortising loan, APR To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage? (a) 38.87 months, which is 3.24 yrs. (b) 47.91 months, which is 3.99 yrs. (c) 160.72 months, which is 13.39 yrs. (d) 164.65 months, which is 13.72 yrs. (e) None of the above. 
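As a recap of the two $450,000 loan questions a little earlier, the interest-only and fully amortising monthly payments differ only in whether the principal is repaid through the annuity. A minimal sketch:

```python
# Monthly payments on a $450,000 loan at 6% pa APR compounding monthly (sketch).
P, months = 450_000, 30 * 12
i = 0.06 / 12                                          # effective monthly rate = APR / 12

interest_only    = P * i                               # principal never repaid during the term
fully_amortising = P * i / (1 - (1 + i) ** -months)    # ordinary annuity payment that repays P

print(round(interest_only, 2), round(fully_amortising, 2))
# 2250.0 2697.98
```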
A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow (##V_\text{before}##), so: ###\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}} ### Interest rates are expected to be constant over the life of the loan. Loans are interest-only and have a life of 30 years. Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month. (a) 0.055679 Question 56 income and capital returns, bond pricing, premium par and discount bonds Which of the following statements about risk free government bonds is NOT correct? (a) Premium bonds have a positive expected capital return. (b) Discount bonds have a positive expected capital return. (c) Par bonds have a zero expected capital return. (d) Par bonds have a total expected yield equal to their coupon yield. (e) Zero coupon bonds selling at par would have zero expected total, income and capital yields. Hint: Total return can be broken into income and capital returns as follows: ###\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned} ### The capital return is the growth rate of the price. The income return is the periodic cash flow. For a bond this is the coupon payment. Question 38 bond pricing A two year Government bond has a face value of $100, a yield of 0.5% and a fixed coupon rate of 0.5%, paid semi-annually. What is its price? Question 230 bond pricing, capital raising A firm wishes to raise $10 million now. They will issue 6% pa semi-annual coupon bonds that will mature in 8 years and have a face value of $1,000 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue? (a) 9,022.2 bonds (b) 10,000.0 bonds (c) 11,484.5 bonds (d) 12,712.9 bonds (e) 12,767.4 bonds Bonds X and Y are issued by the same US company. Both bonds yield 6% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 8% pa and bond Y pays coupons of 12% pa. Which of the following statements is true? Question 328 bond pricing, APR A 10 year Australian government bond was just issued at par with a yield of 3.9% pa. The fixed coupon payments are semi-annual. The bond has a face value of $1,000. Six months later, just after the first coupon is paid, the yield of the bond decreases to 3.65% pa. What is the bond's new price? (a) $1,000 (b) $1,019.9181 (c) $1,033.8330 (e) $1,060.5226 Question 213 income and capital returns, bond pricing, premium par and discount bonds The coupon rate of a fixed annual-coupon bond is constant (always the same). What can you say about the income return (##r_\text{income}##) of a fixed annual coupon bond? Remember that: ###r_\text{total} = r_\text{income} + r_\text{capital}### ###r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}### Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures. Select the most correct statement. 
From its date of issue until maturity, the income return of a fixed annual coupon: (a) Premium bond will increase. (b) Premium bond will decrease. (c) Premium bond will remain constant. (d) Par bond will increase. (e) Par bond will decrease. Question 35 bond pricing, zero coupon bond, term structure of interest rates, forward interest rate A European company just issued two bonds, a 1 year zero coupon bond at a yield of 8% pa, and a 2 year zero coupon bond at a yield of 10% pa. What is the company's forward rate over the second year (from t=1 to t=2)? Give your answer as an effective annual rate, which is how the above bond yields are quoted. An Australian company just issued two bonds: A 1 year zero coupon bond at a yield of 8% pa, and A 2 year zero coupon bond at a yield of 10% pa. What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. (a) 6.01% (b) 6.02% (c) 9.20% (d) 12.02% (e) 18.40% Question 300 NPV, opportunity cost What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed. Assume the following: The degree takes 3 years to complete and all students pass all subjects. There are 2 semesters per year and 4 subjects per semester. University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years. There are 52 weeks per year. The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19). The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38). The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on. Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week. Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week. The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual. The NPV of costs from undertaking the university degree is: (c) $75,130.29 (d) $54,018.93 Question 176 CFFA Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula? ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### (a) CapEx is added in the Net Income (NI) equation so it needs subtracting in the CFFA equation. (b) CapEx is a financing cash flow that needs to be ignored. Therefore it should be subtracted. (c) CapEx is not a cash flow, it's a non-cash expense made up by accountants that needs to be subtracted. (d) CapEx is subtracted to account for the net cash spent on capital assets. (e) CapEx is subtracted because it's too hard to predict, therefore we exclude it. A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation. 
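The CFFA identity quoted in Question 176 above (and reused in several questions below) can be made concrete with a tiny numeric example before the answer choices. All figures here are made up and are not taken from any of the questions.

```python
# CFFA = NI + Depr - CapEx - change in NWC + IntExp, on made-up figures ($m).
NI, Depr, CapEx, dNWC, IntExp = 50, 20, 30, 5, 10
CFFA = NI + Depr - CapEx - dNWC + IntExp
print(CFFA)   # 45
```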
(a) Buy less land, buildings and trucks than what was planned. Assume that this has no impact on revenue. (b) Pay less cash to creditors by refinancing the firm's existing coupon bonds with zero-coupon bonds that require no interest payments. Assume that there are no transaction costs and that both types of bonds have the same yield to maturity. (c) Change the depreciation method used for tax purposes from diminishing value to straight line, so less depreciation occurs this year and more occurs in later years. Assume that the government's tax department allow this. (d) Buying more inventory than was planned, so there is an increase in net working capital. Assume that there is no increase in sales. (e) Raising new equity through a rights issue. Assume that all of the money raised is spent on new capital assets such as land and trucks, but they will be fitted out and delivered in one year so no new cash will be earned from them. Over the next year, the management of an unlevered company plans to: Make $5m in sales, $1.9m in net income and $2m in equity free cash flow (EFCF). Pay dividends of $1m. Complete a $1.3m share buy-back. All amounts are received and paid at the end of the year so you can ignore the time value of money. The firm has sufficient retained profits to legally pay the dividend and complete the buy back. The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? (a) $2m (b) $1m (c) $0.4m (d) $0.3m (e) No new shares need to be issued, the firm will be sufficiently financed. Question 491 capital budgeting, opportunity cost, sunk cost A man is thinking about taking a day off from his casual painting job to relax. He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work. But he's thinking about the hours that he could work today (in the future) which are: (a) A sunk cost. (b) An opportunity cost. (c) A negative side effect. (d) A capital expense. (e) A depreciation expense. A man has taken a day off from his casual painting job to relax. It's the end of the day and he's thinking about the hours that he could have spent working (in the past) which are now: Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Candys Corp
Income Statement for year ending 30th June 2013
COGS | 50
Operating expense | 10
Depreciation | 20
Interest expense | 10
Income before tax | 110
Tax at 30% | 33
Net income | 77
as at 30th June | 2013 | 2012
 | $m | $m
Current assets | 220 | 180
Cost | 300 | 340
Accumul. depr. | 60 | 40
Carrying amount | 240 | 300
Total assets | 460 | 480
Current liabilities | 175 | 190
Non-current liabilities | 135 | 130
Owners' equity | |
Retained earnings | 50 | 60
Contributed equity | 100 | 100
Total L and OE | 460 | 480
Note: all figures are given in millions of dollars ($m). (e) 52 To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position. (a) Net income, depreciation and interest expense. (b) Depreciation and capital expenditure. (c) Current assets, current liabilities and cost of goods sold (COGS).
(d) Current assets, current liabilities and capital expenditure. (e) Current assets, current liabilities and depreciation expense. Cash Flow From Assets (CFFA) can be defined as: (a) Cash available to distribute to creditors and stockholders. (b) Cash flow to creditors minus cash flow to stockholders. (c) Net income (or earnings) plus depreciation plus interest expense. (d) Net income minus the increase in net working capital. (e) Net income minus net capital spending minus the increase in net working capital. Question 349 CFFA, depreciation tax shield Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? ###NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### (a) An increase in revenue (Rev). (b) An increase in rent expense (part of fixed costs, FC). (c) An increase in depreciation expense (Depr). (d) A decrease in net working capital (ΔNWC). (e) An increase in dividends. Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant? ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - ΔNWC+IntExp### (b) An increase in rent expense (a type of recurring fixed cost, FC). (d) An increase in inventories (a current asset). (e) A decrease in interest expense (IntExp). Question 377 leverage, capital structure Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. True or false? Question 94 leverage, capital structure, real estate Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So ##V=D+E##. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. ### r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0} ### where ##r_{0-1}## is the return (percentage change) of an asset with price ##p_0## initially, ##p_1## one period later, and paying a cash flow of ##c_1## at time ##t=1##. (a) -100% (b) -50% (c) -12.5% (d) -10% (e) -8% Question 406 leverage, WACC, margin loan, portfolio return One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets. The interest rate on the margin loan was 7.84% pa. Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa. What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates. (e) 11.7067% Hint: Remember that wealth in this context is your equity (E) in the share assets (V = D+E) which is funded by the loan (D) and your deposit or equity (E).
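Following that hint, the margin-loan return in Question 406 can be computed directly from the levered-equity cash flows. A minimal sketch:

```python
# Levered return on wealth for the margin-loan example above (sketch).
V, D = 100_000, 70_000            # asset value and margin loan
E = V - D                         # your equity (wealth) = 30,000
dividends    = 0.04 * V
capital_gain = 0.05 * V
interest     = 0.0784 * D

r_equity = (dividends + capital_gain - interest) / E
print(round(r_equity, 6))         # 0.117067, i.e. about 11.7067% pa
```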
Question 206 CFFA, interest expense, interest tax shield Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer: Annual interest expense is equal to: (a) the bond's face value multiplied by its annual yield to maturity. (b) the bond's face value multiplied by its annual coupon rate. (c) the bond's market price at the start of the year multiplied by its annual yield to maturity. (d) the bond's market price at the start of the year multiplied by its annual coupon rate. (e) the future value of the actual cash payments of the bond over the year, grown to the end of the year, and grown by the bond's yield to maturity. Question 506 leverage, accounting ratio A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio? (a) 20% (b) 36% (c) 60% Question 766 CFFA, WACC, interest tax shield, DDM Use the below information to value a levered company with constant annual perpetual cash flows from assets. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flow from assets including and excluding interest tax shields are constant (but not equal to each other). Data on a Levered Firm with Perpetual Cash Flows Item abbreviation Value Item full name ##\text{CFFA}_\text{U}## $100m Cash flow from assets excluding interest tax shields (unlevered) ##\text{CFFA}_\text{L}## $112m Cash flow from assets including interest tax shields (levered) ##g## 0% pa Growth rate of cash flow from assets, levered and unlevered ##\text{WACC}_\text{BeforeTax}## 7% pa Weighted average cost of capital before tax ##\text{WACC}_\text{AfterTax}## 6.25% pa Weighted average cost of capital after tax ##r_\text{D}## 5% pa Cost of debt ##r_\text{EL}## 9% pa Cost of levered equity ##D/V_L## 50% pa Debt to assets ratio, where the asset value includes tax shields ##t_c## 30% Corporate tax rate What is the value of the levered firm including interest tax shields? (a) $431.111m (b) $444.444m (c) $485m (d) $515.464m (e) $1600m Question 113 WACC, CFFA, capital budgeting The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola. Motorola had a 20% after-tax WACC before it merged with Google. Google and Motorola have the same level of gearing. Both companies operate in a classical tax system. You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's: (a) Unlevered CFFA should be discounted by Google's 10% WACC after tax. (b) Unlevered CFFA should be discounted by Motorola's 20% WACC after tax. (c) Levered CFFA should be discounted by Google's 10% WACC after tax. (d) Levered CFFA should be discounted by Motorola's 20% WACC after tax. (e) Unlevered CFFA by 15%, the average of Google and Motorola's WACC after tax. Question 375 interest tax shield, CFFA One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT). 
###\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned}### Does this annual FFCF include or exclude the annual interest tax shield?

Question 115 capital structure, leverage, WACC
A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar risk to the company's existing projects. Assume a classical tax system. Which statement is correct? (a) The debt-to-assets (D/V) ratio will decrease. (b) The debt-to-equity ratio (D/E) will decrease. (c) The firm's cost of equity will decrease. (d) The company's after-tax WACC will decrease. (e) The company's before-tax WACC will decrease.

Question 379 leverage, capital structure, payout policy
Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. True or false?

Question 301 leverage, capital structure, real estate
Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000. In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth? No income (rent) was received from the house during the short time over which house prices fell. Your friend will not declare bankruptcy, he will always pay off his debts. (a) -1,000% (b) -150% (c) -100% (e) -10%

Question 67 CFFA, interest tax shield
Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)### ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and ##r_D## is the cost of debt. (a) ##D(1+r_D)## (b) ##D/(1+r_D)## (c) ##D.r_D## (d) ##D / r_D## (e) ##NI.r_D##

Question 296 CFFA, interest tax shield
(a) An increase in revenue (##Rev##). (b) A decrease in revenue (##Rev##). (c) An increase in rent expense (part of fixed costs, ##FC##). (d) An increase in interest expense (##IntExp##).

(c) 37.5% (e) 6.25%

##\text{CFFA}_\text{U}## $48.5m Cash flow from assets excluding interest tax shields (unlevered)
##\text{CFFA}_\text{L}## $50m Cash flow from assets including interest tax shields (levered)
##\text{WACC}_\text{BeforeTax}## 10% pa Weighted average cost of capital before tax
##\text{WACC}_\text{AfterTax}## 9.7% pa Weighted average cost of capital after tax
##r_\text{EL}## 11.25% pa Cost of levered equity
(b) $500m (e) $431.111m

Question 89 WACC, CFFA, interest tax shield
A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing. Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system. Which method(s) will give the correct valuation of the new furniture-making project?
Select the most correct answer. (a) Discount the project's unlevered CFFA by the furniture manufacturing firms' 30% WACC after tax. (b) Discount the project's unlevered CFFA by the company's 20% WACC after tax. (c) Discount the project's levered CFFA by the company's 20% WACC after tax. (d) Discount the project's levered CFFA by the furniture manufacturing firms' 30% WACC after tax. (e) The methods outlined in answers (a) and (c) will give the same valuations, both are correct.

Question 238 CFFA, leverage, interest tax shield
A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct: (a) The company is increasing its debt-to-assets and debt-to-equity ratios. These are types of 'leverage' or 'gearing' ratios. (b) The company will pay less tax to the government due to the benefit of interest tax shields. (c) The company's net income, also known as earnings or net profit after tax, will fall. (d) The company's expected levered firm free cash flow (FFCF or CFFA) will be higher due to tax shields. (e) The company's expected levered equity free cash flow (EFCF) will not change.

A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following: ###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) \\ &\quad + Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned}###

One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense ##(IntExp)## is zero: ###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC + 0 \\ \end{aligned}### Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield? (See the numerical sketch below, after Question 69.)

Question 121 capital structure, leverage, financial distress, interest tax shield
Fill in the missing words in the following sentence: All things remaining equal, as a firm's amount of debt funding falls, benefits of interest tax shields __________ and the costs of financial distress __________. (a) Fall, fall. (b) Fall, rises. (c) Rise, fall. (d) Rise, rise. (e) Remain unchanged, remain unchanged.

Question 69 interest tax shield, capital structure, leverage, WACC
Which statement about risk, required return and capital structure is the most correct? (a) The before-tax cost of debt is less than the before-tax cost of equity. Therefore debt is a cheaper form of financing than equity so companies should try to finance their projects with debt only. (b) Debt makes a firm's equity more risky. Therefore the higher the amount of debt, the higher the cost of equity. (c) The more debt a firm has, the higher its tax shields. Therefore firms should seek to have as much debt and as little equity as possible. (d) The more debt, the lower the firm's after tax WACC. The after tax WACC is the discount rate that discounts the firm's cash flows, so the lower it is the more the firm is worth. Therefore firms should try to make their after tax WACC as low as possible by using as much debt as possible. (e) The less debt, the lower the chance of bankruptcy. Therefore firms should try to pay off all of their debt so that they are financed by equity only.
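The sketch below is not part of the original question bank; all dollar figures in it are made-up assumptions chosen only for illustration. It contrasts the CFFA equation quoted in Questions 349 and 67 (which adds back the full interest expense) with the two formulas quoted above (which use after-tax interest or ignore interest entirely), and shows that the two approaches differ by the annual interest tax shield ##IntExp \times t_c##:

```python
# Illustrative contrast of the FFCF formulas quoted in the surrounding questions.
# All input figures are assumptions chosen only to make the arithmetic concrete ($m).

rev, cogs, fc, depr = 100.0, 30.0, 10.0, 20.0   # revenue, COGS, fixed costs, depreciation
int_exp, t_c = 10.0, 0.30                       # interest expense, corporate tax rate
capex, delta_nwc = 20.0, 5.0                    # capital spending, increase in NWC

# CFFA built from net income (Questions 349 and 67): the full interest expense is added back.
ni = (rev - cogs - fc - depr - int_exp) * (1 - t_c)
cffa_levered = ni + depr - capex - delta_nwc + int_exp

# "Textbook" method quoted above: only the after-tax interest IntExp*(1-t_c) is added back.
ffcf_textbook = ((rev - cogs - depr - fc - int_exp) * (1 - t_c)
                 + depr - capex - delta_nwc + int_exp * (1 - t_c))

# "Ignore interest" method quoted above: pretend IntExp is zero.
ffcf_zero_interest = (rev - cogs - depr - fc) * (1 - t_c) + depr - capex - delta_nwc

print(cffa_levered)                        # 26.0
print(ffcf_textbook, ffcf_zero_interest)   # 23.0 23.0 (the two agree with each other)
print(cffa_levered - ffcf_textbook)        # 3.0 = int_exp * t_c, the annual interest tax shield
```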
Question 84 WACC, capital structure, capital budgeting
A firm is considering a new project of similar risk to the current risk of the firm. This project will expand its existing business. The cash flows of the project have been calculated assuming that there is no interest expense. In other words, the cash flows assume that the project is all-equity financed. In fact the firm has a target debt-to-equity ratio of 1, so the project will be financed with 50% debt and 50% equity. To find the levered value of the firm's assets, what discount rate should be applied to the project's unlevered cash flows? Assume a classical tax system. (a) The required return on equity, ##r_E## (b) The required return on debt, ##r_D## (c) The after-tax required return on debt, ##r_D.(1-t_c)## (d) The after-tax WACC, ##\text{WACC after tax}=\frac{D}{V_L}.r_D.(1-t_c)+\frac{E_L}{V_L}.r_E## (e) The pre-tax WACC, ##\text{WACC before tax}=\frac{D}{V_L}.r_D+\frac{E_L}{V_L}.r_E##

Question 99 capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure
A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. The firm and individual investors can borrow at the same rate and have the same tax rates. The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium. There are no market frictions relating to debt such as asymmetric information or transaction costs. Shareholders' wealth is measured in terms of utility. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure all shareholders were optimally levered. According to Miller and Modigliani's theory, which statement is correct? (a) The firm's share price and shareholder wealth will both decrease. This is because the firm will have more debt and therefore more risk so the discount rate applied to its cash flows will be higher, decreasing the value of the firm and therefore the value of the firm's equity and share price. (b) The firm's share price and shareholder wealth will both increase. This is because the firm will have more debt which will amplify the returns of equity investors. This will mean that returns on equity can be much higher and investors will pay a premium for this, leading to an increase in the stock price. (c) The firm's share price and shareholder wealth will both increase since it has more debt and therefore more tax shields. (d) The firm's share price will increase due to the higher value of tax shields. But shareholder wealth will remain unchanged because capital structure is irrelevant when investors can use home-made leverage to create tax-shields themselves. (e) The firm's share price and shareholder wealth will both increase. This is because the cost of debt is cheaper than equity, leading to a lower (before and after tax) WACC. This lower WACC will lead to a higher value of the firm and a higher share price.

You bought a house, primarily funded using a home loan from a bank. Which of the following statements is NOT correct? (a) The home loan is a debt liability to the bank. (b) The bank invested in your debt. (c) You are indebted to the bank. (d) You owe the bank principal and interest payments. (e) You sold your promise to the bank to pay the principal and interest.

You're trying to save enough money for a deposit to buy a house.
You want to buy a house worth $400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other $320,000 that you need. You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now. Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the $80,000 deposit? Round your answer up to the nearest month. (a) 27 months (t=27 months). (b) 38 months (t=38 months). (c) 40 months (t=40 months). (d) 43 months (t=43 months). (e) 79 months (t=79 months). Question 490 expected and historical returns, accounting ratio Which of the following is NOT a synonym of 'required return'? (a) total required yield (b) cost of capital (c) discount rate (d) opportunity cost of capital (e) accounting rate of return A stock was bought for $8 and paid a dividend of $0.50 one year later (at t=1 year). Just after the dividend was paid, the stock price was $7 (at t=1 year). What were the total, capital and dividend returns given as effective annual rates? The choices are given in the same order: ##r_\text{total}##, ##r_\text{capital}##, ##r_\text{dividend}##. (a) 0.0625, -0.0625, -0.125. (b) 0.0625, 0.125, -0.0625. (c) -0.0625, 0.0625, -0.125. (d) -0.0625, -0.125, 0.0625. (e) -0.125, -0.1875, 0.0625. Question 21 income and capital returns, bond pricing A fixed coupon bond was bought for $90 and paid its annual coupon of $3 one year later (at t=1 year). Just after the coupon was paid, the bond price was $92 (at t=1 year). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: ## r_\text{total},r_\text{capital},r_\text{income} ##. (a) -0.0556, -0.0222, -0.0333 (b) 0.0222, -0.0111, 0.0333. (e) 0.0556, 0.0333, 0.0222. Question 456 inflation, effective rate In the 'Austin Powers' series of movies, the character Dr. Evil threatens to destroy the world unless the United Nations pays him a ransom (video 1, video 2). Dr. Evil makes the threat on two separate occasions: In 1969 he demands a ransom of $1 million (=10^6), and again; In 1997 he demands a ransom of $100 billion (=10^11). If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997? The answer choices below are given as effective annual rates: (a) 0.5086% pa (b) 1.5086% pa (c) 5.0859% pa (d) 50.8591% pa (e) 150.8591% pa (a) 5.8824%, 0.9804%, 4.902%. Question 461 book and market values, ROE, ROA, market efficiency One year ago a pharmaceutical firm floated by selling its 1 million shares for $100 each. Its book and market values of equity were both $100m. Its debt totalled $50m. The required return on the firm's assets was 15%, equity 20% and debt 5% pa. In the year since then, the firm: Earned net income of $29m. Paid dividends totaling $10m. Discovered a valuable new drug that will lead to a massive 1,000 times increase in the firm's net income in 10 years after the research is commercialised. News of the discovery was publicly announced. The firm's systematic risk remains unchanged. Which of the following statements is NOT correct? All statements are about current figures, not figures one year ago. (a) The book value of equity would be larger than the market value of equity. 
(b) The book ROA from accounting would be larger than the required return on assets from finance. (c) The book ROE from accounting would be larger than the required return on equity from finance. (d) The book ROE would be larger than the book ROA. (e) The required return on equity would be larger than the required return on assets. Hint: Book return on assets (ROA) and book return on equity (ROE) are ratios that accountants like to use to measure a business's past performance. ###\text{ROA}= \dfrac{\text{Net income}}{\text{Book value of assets}}### ###\text{ROE}= \dfrac{\text{Net income}}{\text{Book value of equity}}### The required return on assets ##r_V## is a return that financiers like to use to estimate a business's future required performance which compensates them for the firm's assets' risks. If the business were to achieve realised historical returns equal to its required returns, then investment into the business's assets would have been a zero-NPV decision, which is neither good nor bad but fair. ###r_\text{V, 0 to 1}= \dfrac{\text{Cash flow from assets}_\text{1}}{\text{Market value of assets}_\text{0}} = \dfrac{CFFA_\text{1}}{V_\text{0}}### Similarly for equity and debt. Question 446 working capital decision, corporate financial decision theory The working capital decision primarily affects which part of a business? Question 447 payout policy, corporate financial decision theory Payout policy is most closely related to which part of a business? Question 3 DDM, income and capital returns The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula: ### P_0 = \frac{ C_1 }{ r - g } ### What is ##g##? The value ##g## is the long term expected: (a) Income return of the stock. (b) Capital return of the stock. (c) Total return of the stock. (d) Dividend yield of the stock. (e) Price-earnings ratio of the stock. A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock 10% pa, given as an effective annual rate. What is the price of the share now? The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock? Question 51 DDM A stock pays semi-annual dividends. It just paid a dividend of $10. The growth rate in the dividend is 1% every 6 months, given as an effective 6 month rate. You estimate that the stock's required return is 21% pa, as an effective annual rate. Using the dividend discount model, what will be the share price? Question 270 real estate, DDM, effective rate conversion You own an apartment which you rent out as an investment property. What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation? You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment. The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year. So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months. 
Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)^2), and then they will be constant for the next 12 months until the next year, and so on. The required return of the apartment is 8.732% pa, given as an effective annual rate. Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments. (a) $415,149.4048 (b) $438,252.6707 (c) $441,067.7356 (d) $444,155.5276 (e) $455,263.0844

Question 465 NPV, perpetuity
The boss of WorkingForTheManCorp has a wicked (and unethical) idea. He plans to pay his poor workers one week late so that he can get more interest on his cash in the bank. Every week he is supposed to pay his 1,000 employees $1,000 each. So $1 million is paid to employees every week. The boss was just about to pay his employees today, until he thought of this idea so he will actually pay them one week (7 days) later for the work they did last week and every week in the future, forever. Bank interest rates are 10% pa, given as a real effective annual rate. So ##r_\text{eff annual, real} = 0.1## and the real effective weekly rate is therefore ##r_\text{eff weekly, real} = (1+0.1)^{1/52}-1 = 0.001834569##. All rates and cash flows are real, the inflation rate is 3% pa and there are 52 weeks per year. The boss will always pay wages one week late. The business will operate forever with constant real wages and the same number of employees. What is the net present value (NPV) of the boss's decision to pay later? (e) $1,000,000.00

Question 215 equivalent annual cash flow, effective rate conversion
You're about to buy a car. These are the cash flows of the two different cars that you can buy: You can buy an old car for $5,000 now, for which you will have to buy $90 of fuel at the end of each week from the date of purchase. The old car will last for 3 years, at which point you will sell the old car for $500. Or you can buy a new car for $14,000 now for which you will have to buy $50 of fuel at the end of each week from the date of purchase. The new car will last for 4 years, at which point you will sell the new car for $1,000. Bank interest rates are 10% pa, given as an effective annual rate. Assume that there are exactly 52 weeks in a year. Ignore taxes and environmental and pollution factors. Should you buy the old car or the new car?

Details of two different types of desserts or edible treats are given below: High-sugar treats like candy, chocolate and ice cream make a person very happy. High sugar treats are cheap at only $2 per day. Low-sugar treats like nuts, cheese and fruit make a person equally happy if these foods are of high quality. Low sugar treats are more expensive at $4 per day. The advantage of low-sugar treats is that a person only needs to pay the dentist $2,000 for fillings and root canal therapy once every 15 years. Whereas with high-sugar treats, that treatment needs to be done every 5 years. The real discount rate is 10%, given as an effective annual rate. Assume that there are 365 days in every year and that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the equivalent annual cash flow (EAC) of the high-sugar treats and low-sugar treats, including dental costs. The below choices are listed in that order. Ignore the pain of dental therapy, personal preferences and other factors.
(a) -$332.8709, -$68.2065 (b) -$327.595, -$62.9476 (c) -$828.9808, -$808.5709 (d) -$829.6304, -$810.2613 (e) -$1,093.4153, -$1,594.5881 Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive earnings, disregard firms with negative earnings and therefore negative PE ratios. (a) Illiquid small private companies. (b) High growth technology firms. (c) Firms expected to have temporarily low earnings over the next year, but with higher earnings later. (d) Firms with a very low level of systematic risk. (e) Firms whose assets include a very large proportion of cash. Which of the following investable assets are NOT suitable for valuation using PE multiples techniques? (a) Common equity in a listed public mining extraction company that will cease operations, wind up, and return all capital to shareholders in one year when its sole gold mine becomes depleted. (b) Common equity in a small private owner-operated mining services company whose main asset is its sole tanker truck which delivers fuel to mines. The firm's shares are 100% owned by Bob, the driver of the tanker truck. (c) Common equity in a large listed public company in the banking industry. (d) Residential apartment real estate. (e) Commercial warehouse real estate. Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par. (a) Debt coupon rate. (b) Required return on debt. (c) Total return on debt. (d) Opportunity cost of debt. (e) Cost of debt capital. Question 49 inflation, real and nominal returns and cash flows, APR, effective rate In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months? In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. (a) -1.3529627% (b) -0.4977348% (c) 0.4977348% (d) 1.3529627% (e) 1.3621776% You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 1,500.00 You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. (b) 166,791.61, 90,073.45 (d) 165,177.97, 88,321.04 You want to buy a house priced at $400,000. You have saved a deposit of $40,000. The bank has agreed to lend you $360,000 as a fully amortising loan with a term of 30 years. The interest rate is 8% pa payable monthly and is not expected to change. (c) $2,400 You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month). 
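A minimal Python sketch of the loan arithmetic behind the mortgage questions above, using the standard annuity present-value formula; the figures are taken from the questions (a $360,000 fully amortising loan at 8% pa compounding monthly over 30 years, and a $3,000 per month interest-only loan at 6% pa):

```python
# Sketch of fully amortising vs interest-only loan payments (standard annuity formulas).

def amortising_payment(principal, annual_rate, years, m=12):
    """Monthly payment of a fully amortising loan (rate quoted as an APR compounding monthly)."""
    r = annual_rate / m
    n = years * m
    return principal * r / (1 - (1 + r) ** -n)

def interest_only_principal(payment, annual_rate, m=12):
    """Principal of an interest-only loan: the payment covers interest only, so P = C / r."""
    return payment / (annual_rate / m)

print(amortising_payment(360_000, 0.08, 30))   # ~2,641.55 per month, fully amortising
print(360_000 * 0.08 / 12)                     # 2,400.00: interest-only payment on the same loan
print(interest_only_principal(3_000, 0.06))    # 600,000 borrowed on the interest-only loan
# With an interest-only loan the principal never falls, so after 15 years
# (180 payments) the balance owing is still the full amount borrowed.
```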
Question 459 interest only loan, inflation
In Australia in the 1980s, inflation was around 8% pa, and residential mortgage loan interest rates were around 14%. In 2013, inflation was around 2.5% pa, and residential mortgage loan interest rates were around 4.5%. If a person can afford constant mortgage loan payments of $2,000 per month, how much more can they borrow when interest rates are 4.5% pa compared with 14.0% pa? Give your answer as a proportional increase over the amount you could borrow when interest rates were high ##(V_\text{high rates})##, so: ###\text{Proportional increase} = \dfrac{V_\text{low rates}-V_\text{high rates}}{V_\text{high rates}} ### Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates (APR's) compounding per month.

For a price of $95, Sherylanne will sell you a share which is expected to pay its first dividend of $10 in 7 years (t=7), and will continue to pay the same $10 dividend every year after that forever.

For a price of $95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy the bond or politely decline?

Question 52 IRR, pay back period
A three year project's NPV is negative. The cash flows of the project include a negative cash flow at the very start and positive cash flows over its short life. The required return of the project is 10% pa. Select the most correct statement. (a) The payback period is negative. (b) The project's IRR is negative. (d) The project's IRR is more than its required return. (e) The project's IRR is equal to its required return.

A two year Government bond has a face value of $100, a yield of 2.5% pa and a fixed coupon rate of 0.5% pa, paid semi-annually. What is its price? (a) 90.6421 (b) 95.1524 (c) 95.2055 (d) 96.1219

Bonds A and B are issued by the same Australian company. Both bonds yield 7% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond A pays coupons of 10% pa and bond B pays coupons of 5% pa. Which of the following statements is true about the bonds' prices? (a) The prices of bonds A and B will be more than $100. (b) The prices of bonds A and B will be less than $100. (c) Bond A will have a price more than $100, and bond B will have a price less than $100. (d) Bond A will have a price less than $100, and bond B will have a price more than $100. (e) Bonds A and B will both have a price of $100.

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years). The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true? (e) Bonds X and Y have the same price.

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100), maturity (3 years) and yield (10%) as each other. Which of the following statements is true?

A four year bond has a face value of $100, a yield of 6% and a fixed coupon rate of 12%, paid semi-annually. What is its price?

A five year bond has a face value of $100, a yield of 12% and a fixed coupon rate of 6%, paid semi-annually. What is the bond's price? (e) 87.3629

A firm wishes to raise $8 million now.
They will issue 7% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. (a) 107,441 (b) 98,393 (c) 90,480 (d) 80,000 (e) 64,039

Question 207 income and capital returns, bond pricing, coupon rate, no explanation
For a bond that pays fixed semi-annual coupons, how is the annual coupon rate defined, and how is the bond's annual income yield from time 0 to 1 defined mathematically? Let: ##P_0## be the bond price now, ##F_T## be the bond's face value, ##T## be the bond's maturity in years, ##r_\text{total}## be the bond's total yield, ##r_\text{income}## be the bond's income yield, ##r_\text{capital}## be the bond's capital yield, and ##C_t## be the bond's coupon at time t in years. So ##C_{0.5}## is the coupon in 6 months, ##C_1## is the coupon in 1 year, and so on. (a) coupon rate = ##\dfrac{C_{0.5}+C_{1}}{F_T}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}(1+r_\text{total})^{0.5}+C_{1}}{P_0}## (b) coupon rate = ##\dfrac{2 \times C_{0.5}}{F_T}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}(1+r_\text{capital})^{0.5}+C_{1}}{P_0}## (c) coupon rate = ##\dfrac{2 \times C_{1}}{P_0}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}+C_{1}}{P_0}## (d) coupon rate = ##\dfrac{2 \times C_{1}}{P_0}##, ##r_\text{income, 0 to 1}=\dfrac{2 \times C_{1}}{P_0}## (e) coupon rate = ##\dfrac{2 \times C_{1}}{F_T}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}(1+r_\text{total})^{0.5}+C_{1}}{F_T}##

A four year bond has a face value of $100, a yield of 9% and a fixed coupon rate of 6%, paid semi-annually. What is its price? (c) $72.592

Bonds X and Y are issued by the same company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 6% pa and bond Y pays coupons of 8% pa. Which of the following statements is true?

A 30 year Japanese government bond was just issued at par with a yield of 1.7% pa. The fixed coupon payments are semi-annual. The bond has a face value of $100. Six months later, just after the first coupon is paid, the yield of the bond increases to 2% pa. What is the bond's new price?

Question 1 NPV
Jan asks you for a loan. He wants $100 now and offers to pay you back $120 in 1 year. You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Remember: ### V_0 = \frac{V_t}{(1+r_\text{eff})^t} ### Will you accept or decline Jan's deal?

Read the following financial statements and calculate the firm's free cash flow over the 2014 financial year.
UBar Corp
COGS 200
Rent expense 15
Gas expense 8
EBIT 60
Taxable income 60
Cash 30 29
Accounts receivable 5 7
Pre-paid rent expense 1 0
Inventory 50 46
PPE 290 300
Trade payables 20 18
Accrued gas expense 3 2
Non-current liabilities 0 0
Retained profits 136 150
Asset revaluation reserve 5 0
The firm's free cash flow over the 2014 financial year was: (d) $61 (e) $62

Find Trademark Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Trademark Corp
Operating expense 5
Income before tax 30
Tax at 30% 9
Current assets 120 80
Carrying amount 90 100
Current liabilities 75 65
Non-current liabilities 75 55
Contributed equity 50 50
(a) -19

Find UniBar Corp's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
UniBar Corp Net income 7 Current liabilities 110 60 (c) -8 (d) -18 Find Piano Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Find World Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. World Bar Retained earnings 100 100 Note: all figures above and below are given in millions of dollars ($m). (e) -20 Find Scubar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Scubar Corp Taxes at 30% 27 Trade debtors 19 6 Rent paid in advance 3 2 Trade creditors 10 8 Bond liabilities 200 190 The cash flow from assets was: (a) $40m (b) $41m (c) $54m (d) $60m (e) $74m Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant? (a) An increase in net capital spending. (b) A decrease in the cash flow to creditors. (c) An increase in interest expense. (d) An increase in net working capital. (e) A decrease in dividends paid. Question 761 NPV, annuity due, no explanation The phone company Optus have 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a: 'Bring Your Own' (BYO) mobile service plan, costing $80 per month. There is no phone included in this plan. The other plan is a: 'Bundled' mobile service plan that comes with the latest smart phone, costing $100 per month. This plan includes the latest smart phone. Neither plan has any additional payments at the start or end. Assume that the discount rate is 1% per month given as an effective monthly rate. The only difference between the plans is the phone, so what is the implied cost of the phone as a present value? Given that the latest smart phone actually costs $600 to purchase outright from another retailer, should you commit to the BYO plan or the bundled plan? (a) $480, so the bundled plan is best. (b) $462.94, so the BYO plan is best. (c) $429.12, so the bundled plan is best. (d) $424.87, so the BYO plan is best. (e) $420.66, so the bundled plan is best. Question 737 financial statement, balance sheet, income statement Where can a publicly listed firm's book value of equity be found? It can be sourced from the company's: (a) Income statement (b) Balance sheet (c) Cash flow statement (d) Stock exchange data (e) It's not published, it can only be estimated using multiples or discounted cash flow methods. Question 46 NPV, annuity due The phone company Telstra have 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a: 'Bundled' mobile service plan that comes with the latest smart phone, costing $71 per month. This plan includes the latest smart phone. Neither plan has any additional payments at the start or end. The only difference between the plans is the phone, so what is the implied cost of the phone as a present value? Assume that the discount rate is 2% per month given as an effective monthly rate, the same high interest rate on credit cards. A home loan company advertises an interest rate of 4.5% pa, payable monthly. 
Which of the following statements about the interest rate is NOT correct? (a) The APR compounding monthly is 4.5% pa. (b) The effective monthly rate is 0.3675% pa. (c) The effective annual rate is 4.594% pa. (d) The effective 6 month rate is 2.2712% pa. (e) The APR compounding semi-annually is 4.5424% pa.

Question 742 price gains and returns over time, no explanation
For an asset's price to quintuple every 5 years, what must be its effective annual capital return? Note that a stock's price quintuples when it increases from say $1 to $5. (a) 20% pa. (b) 37.973% pa. (c) 43.0969% pa. (d) 56.9031% pa. (e) 80% pa.

How many years will it take for an asset's price to triple (increase from say $1 to $3) if it grows by 5% pa? (a) 17.7643 years (b) 22.5171 years (c) 29.2722 years (d) 40 years (e) 60 years

If someone says "my shares rose by 10% last year", what do you assume that they mean? (a) The historical nominal effective annual total return was 10%. (b) The historical real effective annual total return was 10%. (c) The expected nominal effective annual total return was 10%. (d) The historical nominal effective annual capital return was 10%. (e) The expected real effective annual dividend return was 10%.

A stock is expected to pay a dividend of $1 in one year. Its future annual dividends are expected to grow by 10% pa. So the first dividend of $1 is in one year, and the year after that the dividend will be $1.1 (=1*(1+0.1)^1), and a year later $1.21 (=1*(1+0.1)^2) and so on forever. Its required total return is 30% pa. The total required return and growth rate of dividends are given as effective annual rates. The stock is fairly priced. Calculate the pay back period of buying the stock and holding onto it forever, assuming that the dividends are received at each time, not smoothly over each year. (a) 1 year. (b) 2 years. (c) 3 years. (d) 4 years. (e) 5 years.

Question 747 DDM, no explanation
A share will pay its next dividend of ##C_1## in one year, and will continue to pay a dividend every year after that forever, growing at a rate of ##g##. So the next dividend will be ##C_2=C_1 (1+g)^1##, then ##C_3=C_2 (1+g)^1##, and so on forever. The current price of the share is ##P_0## and its required return is ##r##. Which of the following is NOT equal to the expected share price in 2 years ##(P_2)## just after the dividend at that time ##(C_2)## has been paid? (a) ##P_0 (1+g)^2## (b) ##P_0 (1+g)^2 - C_1 - C_2## (c) ##(P_0 (1+r)^1 - C_1 ) (1+g)^1## (d) ##(P_0 (1+r)^1 - C_1 ) (1+r)^1 - C_2## (e) ##(P_0 (1+g)^1 ) (1+r)^1 - C_2##

Itau Unibanco is a major listed bank in Brazil with a market capitalisation of equity equal to BRL 85.744 billion, EPS of BRL 3.96 and 2.97 billion shares on issue. Banco Bradesco is another major bank with total earnings of BRL 8.77 billion and 2.52 billion shares on issue. Estimate Banco Bradesco's current share price using a price-earnings multiples approach assuming that Itau Unibanco is a comparable firm. Note that BRL is the Brazilian Real, their currency. Figures sourced from Google Finance on the market close of the BVMF on 24/7/15. (a) BRL 28.87 (b) BRL 25.372 (c) BRL 22.1 (d) BRL 21.653 (e) BRL 21.528

Tesla Motors advertises that its Model S electric car saves $570 per month in fuel costs. Assume that Tesla cars last for 10 years, fuel and electricity costs remain the same, and savings are made at the end of each month with the first saving of $570 in one month from now. The effective annual interest rate is 15.8%, and the effective monthly interest rate is 1.23%.
What is the present value of the savings? (b) $33,306.1775 Question 753 NPV, perpetuity, DDM, no explanation A perpetuity of yearly payments of $30, with the first payment in 5 years (first payment at t=5, which continues every year after that forever). One payment of $100 in 6 years and 3 months (t=6.25). (a) $260.022324 (b) $213.660269 (c) $178.949525 (d) $151.628987 (e) $140.71319 Question 754 fully amortising loan, interest only loan How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 4% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula: ###\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1### (a) 63.1508% (b) 60.0299% (c) 58.3511% (d) 36.8492% Question 755 bond pricing, capital raising, no explanation A firm wishes to raise $50 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 6 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. (b) 453,483 (c) 500,000 (d) 527,541 (e) 666,873 A firm wishes to raise $50 million now. They will issue 5% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. Question 759 time calculation, fully amortising loan, no explanation Five years ago you entered into a fully amortising home loan with a principal of $500,000, an interest rate of 4.5% pa compounding monthly with a term of 25 years. Then interest rates suddenly fall to 3% pa (t=0), but you continue to pay the same monthly home loan payments as you did before. How long will it now take to pay off your home loan? Measure the time taken to pay off the home loan from the current time which is 5 years after the home loan was first entered into. Assume that the lower interest rate was given to you immediately after the loan repayment at the end of year 5, which was the 60th payment since the loan was granted. Also assume that rates were and are expected to remain constant. (a) 240 months, which is 20 years. (b) 202 months, which is 16 years and 10 months. (c) 168 months, which is 14 years. (d) 152 months, which is 12 years and 8 months. (e) 117 months, which is 9 years and 9 months. Question 760 time calculation, interest only loan, no explanation Five years ago (##t=-5## years) you entered into an interest-only home loan with a principal of $500,000, an interest rate of 4.5% pa compounding monthly with a term of 25 years. Then interest rates suddenly fall to 3% pa (##t=0##), but you continue to pay the same monthly home loan payments as you did before. Will your home loan be paid off by the end of its remaining term? If so, in how many years from now? Measure the time taken to pay off the home loan from the current time which is 5 years after the home loan was first entered into. (a) Yes, in 117 months, which is 9 years and 9 months. (b) Yes, in 152 months, which is 12 years and 8 months. (c) Yes, in 202 months, which is 16 years and 10 months. (d) Yes, in 240 months, which is 20 years. (e) No, it will take longer than 20 years to pay off the loan. 
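One possible way to set up the time calculation in Questions 759 and 760 above, sketched in Python with the standard annuity formulas (figures taken from Question 759):

```python
import math

# Sketch of the loan time calculation: original payment, balance after 5 years,
# then the number of months needed to pay that balance off at the lower rate.

def annuity_factor(r, n):
    """Present value of $1 per period for n periods at periodic rate r."""
    return (1 - (1 + r) ** -n) / r

# Step 1: original monthly payment (fully amortising, $500,000, 4.5% pa monthly, 25 years).
r_old = 0.045 / 12
payment = 500_000 / annuity_factor(r_old, 25 * 12)      # ~2,779.16 per month

# Step 2: balance owing after 5 years = PV of the remaining 240 payments.
balance = payment * annuity_factor(r_old, 20 * 12)      # ~439,290

# Step 3: at the new rate of 3% pa, keep paying the same amount and solve the
# annuity formula for the remaining number of payments n:
#   balance = payment * (1 - (1 + r_new)**-n) / r_new
r_new = 0.03 / 12
n = -math.log(1 - balance * r_new / payment) / math.log(1 + r_new)
print(math.ceil(n))   # ~202 months, i.e. about 16 years and 10 months
```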
Question 762 equivalent annual cash flow, no explanation Radio-Rentals.com offers the Apple iphone 5S smart phone for rent at $12.95 per week paid in advance on a 2 year contract. After renting the phone, you must return it to Radio-Rentals. Kogan.com offers the Apple iphone 5S smart phone for sale at $699. You estimate that the phone will last for 3 years before it will break and be worthless. Currently, the effective annual interest rate is 11.351%, the effective monthly interest rate 0.9% and the effective weekly interest rate is 0.207%. Assume that there are exactly 52 weeks per year and 12 months per year. Find the equivalent annual cost of renting the phone and also buying the phone. The answers below are listed in the same order. (a) $362.33, $287.79 (b) $362.33, $506.29 (c) $516.29, $506.29 (d) $673.40, $233.00 (e) $711.67, $287.79 Question 763 multi stage growth model, DDM A stock is expected to pay its first dividend of $20 in 3 years (t=3), which it will continue to pay for the next nine years, so there will be ten $20 payments altogether with the last payment in year 12 (t=12). From the thirteenth year onward, the dividend is expected to be 4% more than the previous year, forever. So the dividend in the thirteenth year (t=13) will be $20.80, then $21.632 in year 14, and so on forever. The required return of the stock is 10% pa. All rates are effective annual rates. Calculate the current (t=0) stock price. (e) $469.558 Question 764 bond pricing, no explanation A 4.5% fixed coupon Australian Government bond was issued at par in mid-April 2009. Coupons are paid semi-annually in arrears in mid-April and mid-October each year. The face value is $1,000. The bond will mature in mid-April 2020, so the bond had an original tenor of 11 years. Today is mid-September 2015 and similar bonds now yield 1.9% pa. What is the bond's new price? Note: there are 10 semi-annual coupon payments remaining from now (mid-September 2015) until maturity (mid-April 2020); both yields are given as APR's compounding semi-annually; assume that the yield curve was flat before the change in yields, and remained flat afterwards as well. (a) $1,108.390216 (b) $1,114.640575 (c) $1,123.457853 (d) $1,127.892613 (e) $1,132.344879 An investor bought a 5 year government bond with a 2% pa coupon rate at par. Coupons are paid semi-annually. The face value is $100. Calculate the bond's new price 8 months later after yields have increased to 3% pa. Note that both yields are given as APR's compounding semi-annually. Assume that the yield curve was flat before the change in yields, and remained flat afterwards as well. (a) $95.345378 (c) $96.138082 (d) $96.296464 (e) $96.775559 Question 706 utility, risk aversion, utility function Mr Blue, Miss Red and Mrs Green are people with different utility functions. Note that a fair gamble is a bet that has an expected value of zero, such as paying $0.50 to win $1 in a coin flip with heads or nothing if it lands tails. Fairly priced insurance is when the expected present value of the insurance premiums is equal to the expected loss from the disaster that the insurance protects against, such as the cost of rebuilding a home after a catastrophic fire. (a) Mr Blue, Miss Red and Mrs Green all prefer more wealth to less. This is rational from an economist's point of view. (b) Mr Blue is risk averse. He will not enjoy a fair gamble and would like to buy fairly priced insurance. (c) Miss Red is risk-neutral. She will not enjoy a fair gamble but wouldn't oppose it either. 
Similarly with fairly priced insurance. (d) Mrs Green is risk-loving. She would enjoy a fair gamble and would dislike fairly priced insurance. (e) Mr Blue would like to buy insurance, but only if it is fairly or under priced. Question 704 utility, risk aversion, utility function, gamble Each person has $256 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $256. Each player can flip a coin and if they flip heads, they receive $256. If they flip tails then they will lose $256. Which of the following statements is NOT correct? (a) All people would appear rational to an economist since they prefer more wealth to less. (b) Mrs Green and Miss Red would appear unusual to an economist since they are not risk averse. (c) Mr Blue's certainty equivalent of the gamble is $64. This is less than his current wealth of $256 which is why he would refuse the gamble. (d) Miss Red's certainty equivalent of the gamble is $256. This is the same as her current wealth of $256 which is why she would be indifferent to playing or not. (e) Mrs Green's certainty equivalent of the gamble is $512. This is more than her current wealth of $256 which is why she would love to play. Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct? (a) All rational people prefer more wealth to less. (b) Mr Blue is risk averse. (c) Miss Red is risk neutral. (d) Mrs Green is risk averse. (e) Mrs Green may enjoy gambling. (a) Mr Blue prefers more wealth to less. Mrs Green enjoys losing wealth. (b) Miss Red has the same utility no matter how much wealth she has. (c) Mr Blue is risk averse. (d) Miss Red is risk averse. (e) Mrs Green is risk loving. (a) Mr Blue and Miss Red prefer more wealth to less. (b) Mrs Green enjoys losing wealth. (d) Miss Red is risk neutral. (e) Mrs Green is risk averse. (a) Neither Miss Red nor Mrs Green would appear rational to an economist. (b) Mrs Green is satiated when she has $25 of wealth. That is her bliss point. (c) Mrs Green is risk loving when she has between zero and $25 of wealth, same as Mr Blue. (d) Mrs Green is risk averse when she has between $25 and $50 of wealth. (e) Mrs Green is risk loving when she has between $50 and $100 of wealth, same as Mr Blue. Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct? (a) Mr Blue's expected utility of wealth from gambling is 5 while refusing is 7.07. So the gamble makes him less happy. (b) Mrs Green's expected utility of wealth from gambling is 7.07 while refusing is 5. So the gamble makes her more happy. (c) Mr Blue's certainty equivalent of the risky gamble is $25. This is less than his current wealth of $50 which is why he would refuse. (d) Miss Red's certainty equivalent of the risky gamble is $50. This is equal to her current wealth of $50 which is why she is indifferent to gambling or not. (e) Mrs Green's certainty equivalent of the risky gamble is $70.71. This is more than her current wealth of $50 which is why she would accept. (a) Mr Blue's expected utility of wealth from gambling is 7.5 while refusing is 6.25. So the gamble makes him more happy. (b) Miss Red's expected utility of wealth from gambling is 7.5 and refusing is also 7.5. This is why she is indifferent. 
(c) Mr Blue's certainty equivalent of the risky gamble is $70.71. So he would pay to take part in the gamble. (d) Miss Red's certainty equivalent of the risky gamble is $50. (e) Mrs Green's certainty equivalent of the risky gamble is $70.71. She would enjoy taking part in the gamble. (a) Mr Blue would enjoy the gamble. (b) Miss Red would be indifferent to gambling or not. (c) Mrs Green would dislike the gamble. (d) Mr Blue's certainty equivalent of the risky gamble is $70.71. This is more than his current wealth which is why he would like to gamble. (e) Miss Red's certainty equivalent of the risky gamble is $50. This is the same as her current wealth which is why she is indifferent to gambling or not. Question 243 fundamental analysis, market efficiency Fundamentalists who analyse company financial reports and news announcements (but who don't have inside information) will make positive abnormal returns if: (a) Markets are weak and semi-strong form efficient but strong-form inefficient. (b) Markets are weak form efficient but semi-strong and strong-form inefficient. (c) Technical traders make positive excess returns. (d) Chartists make negative excess returns. (e) Insiders make negative excess returns. Question 100 market efficiency, technical analysis, joint hypothesis problem A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct? (I) Weak form market efficiency is broken. (II) Semi-strong form market efficiency is broken. (III) Strong form market efficiency is broken. (IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk. Select the most correct response: (a) Only III is true. (b) Only II and III are true. (c) Only I, II and III are true. (d) Only IV is true. (e) Either I, II and III are true, or IV is true, or they are all true. Question 242 technical analysis, market efficiency Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that: (a) Markets are weak-form efficient. (b) Markets are semi-strong-form efficient. (c) Past prices cannot be used to predict future prices. (d) Past returns can be used to predict future returns. (e) Stock prices reflect all publically available information. Question 340 market efficiency, opportunity cost A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that: The fund has no private information. Markets are weak and semi-strong form efficient. The fund's transaction costs are negligible. The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible. 
(b) -$20,000.00 (c) -$48,000.17 (d) -$51,999.83 (e) -$80,000.00 Question 416 real estate, market efficiency, income and capital returns, DDM, CAPM A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns. Assume that: His forecast is true. Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true. Ignore all costs such as taxes, agent fees, maintenance and so on. All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property. The non-monetary benefits of owning real estate and renting remain constant. Which one of the following statements is NOT correct? Over time: (a) The rental yield will fall and approach zero. (b) The total return will fall and approach the capital return (5% pa). (c) One or all of the following must fall: the systematic risk of real estate, the risk free rate or the market risk premium. (d) If the country's nominal wealth growth rate is 4% pa and the nominal real estate growth rate is 5% pa then real estate will approach 100% of the country's wealth over time. (e) If the country's nominal gross domestic production (GDP) growth rate is 4% pa and the nominal real estate rent growth rate is 2% pa then real estate rent will approach 100% of the country's GDP over time. Question 559 variance, standard deviation, covariance, correlation Which of the following statements about standard statistical mathematics notation is NOT correct? (a) The arithmetic average of variable X is represented by ##\bar{X}##. (b) The standard deviation of variable X is represented by ##\sigma_X##. (c) The variance of variable X is represented by ##\sigma_X^2##. (d) The covariance between variables X and Y is represented by ##\sigma_{X,Y}^2##. (e) The correlation between variables X and Y is represented by ##\rho_{X,Y}##. Question 236 diversification, correlation, risk Diversification in a portfolio of two assets works best when the correlation between their returns is: (a) -1 (b) -0.5 Question 111 portfolio risk, correlation All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as: (a) The correlation between the stocks' returns rise. (b) The correlation between the stocks' returns decline. (c) The portfolio standard deviation declines. (d) Both stocks' individual variances decline. (e) Both stocks' individual standard deviations decline. Question 83 portfolio risk, standard deviation Stock Expected return Standard deviation Correlation ##(\rho_{A,B})## Dollars A 0.1 0.4 0.5 60 B 0.2 0.6 140 What is the standard deviation (not variance) of the above portfolio? Question 285 covariance, portfolio risk Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%. If the variance of stock A increases but the: Prices and expected returns of each stock stays the same, Variance of stock B's returns stays the same, Correlation of returns between the stocks stays the same. (a) The variance of the portfolio will increase. (b) The standard deviation of the portfolio will increase. (c) The covariance of returns between stocks A and B will stay the same. (d) The portfolio return will stay the same. (e) The portfolio value will stay the same. 
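A short sketch of the standard two-asset portfolio variance formula used by the portfolio-risk questions above. The reading of Question 83's flattened table (stock A: expected return 0.1, standard deviation 0.4, $60 invested; stock B: expected return 0.2, standard deviation 0.6, $140 invested; correlation 0.5) and the figures used to illustrate Question 285 are assumptions for illustration only:

```python
# Two-asset portfolio risk: var_p = (wA.sdA)^2 + (wB.sdB)^2 + 2.wA.wB.rho.sdA.sdB

def portfolio_sd(w_a, sd_a, w_b, sd_b, rho):
    var_p = (w_a * sd_a) ** 2 + (w_b * sd_b) ** 2 + 2 * w_a * w_b * rho * sd_a * sd_b
    return var_p ** 0.5

# Question 83 (assumed reading of its table): dollar weights 60/200 and 140/200.
w_a, w_b = 60 / 200, 140 / 200
print(portfolio_sd(w_a, 0.4, w_b, 0.6, 0.5))    # ~0.4911, i.e. ~49.1% pa

# Question 285: equal weights, correlation fixed at 0.7. If stock A's standard
# deviation rises (here from 0.20 to 0.30, made-up figures) while B's stays at
# 0.25, both the covariance rho.sdA.sdB and the portfolio variance change.
for sd_a in (0.20, 0.30):
    cov_ab = 0.7 * sd_a * 0.25
    print(sd_a, cov_ab, portfolio_sd(0.5, sd_a, 0.5, 0.25, 0.7))
```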
Question 557 portfolio weights, portfolio return An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa. Stock A has an expected return of 5% pa. Stock B has an expected return of 10% pa. What portfolio weights should the investor have in stocks A and B respectively? (a) 80%, 20% (b) 60%, 40% (c) 40%, 60% (d) 20%, 80% (e) 20%, 20% Question 556 portfolio risk, portfolio return, standard deviation An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 12% pa. Stock A has an expected return of 10% pa and a standard deviation of 20% pa. Stock B has an expected return of 15% pa and a standard deviation of 30% pa. The correlation coefficient between stock A and B's expected returns is 70%. What will be the annual standard deviation of the portfolio with this 12% pa target return? (a) 24.28168% pa (b) 24% pa (c) 22.126907% pa (d) 19.697716% pa (e) 16.970563% pa Question 565 correlation What is the correlation of a variable X with a constant C? The corr(X, C) or ##\rho_{X,C}## equals: (a) var(X) or ##\sigma_X^2## (b) sd(X) or ##\sigma_X## (e) Mathematically undefined Question 306 risk, standard deviation Let the standard deviation of returns for a share per month be ##\sigma_\text{monthly}##. What is the formula for the standard deviation of the share's returns per year ##(\sigma_\text{yearly})##? Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average. (a) ##\sigma_\text{yearly} = \sigma_\text{monthly}## (b) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times 12## (c) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times 144## (d) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times \sqrt{12}## (e) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times {12}^{1/3}## Question 80 CAPM, risk, diversification Diversification is achieved by investing in a large amount of stocks. What type of risk is reduced by diversification? (a) Idiosyncratic risk. (b) Systematic risk. (c) Both idiosyncratic and systematic risk. (d) Market risk. (e) Beta risk. Question 112 CAPM, risk According to the theory of the Capital Asset Pricing Model (CAPM), total risk can be broken into two components, systematic risk and idiosyncratic risk. Which of the following events would be considered a systematic, undiversifiable event according to the theory of the CAPM? (a) A decrease in house prices in one city. (b) An increase in mining industry tax rates. (c) An increase in corporate tax rates. (d) A case of fraud at a major retailer. (e) A poor earnings announcement from a major firm. Question 326 CAPM A fairly priced stock has an expected return equal to the market's. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the stock's beta? Question 110 CAPM, SML, NPV The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot above the SML would have: (a) A positive NPV. (b) A zero NPV. (c) A negative NPV. (d) A large amount of diversifiable risk. (e) Zero diversifiable risk. Question 79 CAPM, risk Which statement is the most correct? (a) The risk free rate has zero systematic risk and zero idiosyncratic risk. (b) The market portfolio has zero idiosyncratic risk. (c) The market portfolio has zero systematic risk. (d) a and b are true. 
(e) a and c are true. Question 93 correlation, CAPM, systematic risk A stock's correlation with the market portfolio increases while its total risk is unchanged. What will happen to the stock's expected return and systematic risk? (a) The stock will have a higher return and higher systematic risk. (b) The stock will have a lower return and higher systematic risk. (c) The stock will have a higher return and lower systematic risk. (d) The stock will have a lower return and lower systematic risk. (e) The stock's return and systematic risk will be unchanged. Question 627 CAPM, SML, NPV, Jensens alpha Assets A, B, M and ##r_f## are shown on the graphs above. Asset M is the market portfolio and ##r_f## is the risk free yield on government bonds. Which of the below statements is NOT correct? (a) Asset A has a Jensen's alpha of 4.5% pa. (b) Asset A is under-priced. (c) The NPV of buying asset A is zero. (d) Asset B has a Jensen's alpha of zero. (e) Asset B is fairly priced. Question 672 CAPM, beta A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. What do you think will be the stock's expected return over the next year, given as an effective annual rate? (a) 5% pa (b) 7.5% pa (c) 10% pa (d) 12.5% pa (e) 20% pa Question 673 CAPM, beta, expected and historical returns In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate? (a) -12.5% (b) -4% (c) -1.5% (d) -1% (e) 12.5% Question 410 CAPM, capital budgeting The CAPM can be used to find a business's expected opportunity cost of capital: ###r_i=r_f+β_i (r_m-r_f)### What should be used as the risk free rate ##r_f##? (a) The current central bank policy rate (RBA overnight money market rate). (b) The current 30 day federal government treasury bill rate. (c) The average historical 30 day federal government treasury bill rate over the last 20 years. (d) The current 30 year federal government treasury bond rate. (e) The average historical 30 year federal government treasury bond rate over the last 20 years. Question 302 WACC, CAPM Which of the following statements about the weighted average cost of capital (WACC) is NOT correct? (a) WACC before tax ##= r_D.\dfrac{D}{V_L} + r_{EL}.\dfrac{E_L}{V_L}## (b) WACC before tax ##= r_f + \beta_{VL}.(r_m - r_f)## (c) WACC after tax ##= r_D.(1-t_c).\dfrac{D}{V_L} + r_{EL}.\dfrac{E_L}{V_L}## (d) WACC after tax ##= r_f + \beta_{VL}.(r_m - r_f) - \dfrac{r_D.D.t_c}{V_L}## (e) WACC after tax ##= r_f + \beta_{VL}.(r_m - r_f)## The 'time value of money' is most closely related to which of the following concepts? (a) Competition: Firms in competitive markets earn zero economic profit. (b) Opportunity cost: The cost of the next best alternative foregone should be subtracted. (c) Separation of the investment and financing decisions. (d) Diversification: Risks can often be reduced by pooling them together. (e) Sunk costs: Costs that cannot be recouped should be ignored. (b) Own debt. (d) Write debt. (e) Have debt assets. Question 657 systematic and idiosyncratic risk, CAPM, no explanation A stock's required total return will decrease when its: (a) Systematic risk increases. (b) Idiosyncratic risk increases. (c) Total risk increases. (d) Systematic risk decreases. (e) Idiosyncratic risk decreases. 
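Several of the CAPM questions above (for example Questions 672 and 673) and the target-return portfolios in Questions 556 and 557 come down to two one-line formulas: the security market line ##r_i = r_f + \beta_i (r_m - r_f)## and solving ##w_A r_A + (1-w_A) r_B = r_\text{target}## for the weights. A hedged Python sketch using the figures quoted in those questions; it illustrates the arithmetic rather than certifying any particular answer option.

# SML and two-asset weight arithmetic: a minimal sketch.

def capm_expected_return(beta, r_f, r_m):
    # Security market line: E[r_i] = r_f + beta*(E[r_m] - r_f).
    return r_f + beta * (r_m - r_f)

def weights_for_target(r_a, r_b, target):
    # Solve w*r_a + (1 - w)*r_b = target for the weight in asset A.
    w_a = (target - r_b) / (r_a - r_b)
    return w_a, 1.0 - w_a

# Question 672: beta 1.5, risk free 5% pa, market 10% pa.
print(capm_expected_return(1.5, 0.05, 0.10))    # 0.125, i.e. 12.5% pa

# Question 673 (rough): over 5 minutes the risk free rate is ~0, so the realised
# return is approximately beta times the market move: 1.5 * (-1%).
print(1.5 * -0.01)                              # -0.015, i.e. -1.5%

# Questions 557 and 556: weights hitting a 6% target with 5%/10% assets,
# and a 12% target with 10%/15% assets.
print(weights_for_target(0.05, 0.10, 0.06))     # approx. (0.8, 0.2)
print(weights_for_target(0.10, 0.15, 0.12))     # approx. (0.6, 0.4)

Feeding the 0.6/0.4 weights into the two-asset variance formula from the sketch after Question 285, together with the 20% and 30% standard deviations and 0.7 correlation quoted in Question 556, gives the portfolio standard deviation that question asks for.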
Question 658 CFFA, income statement, balance sheet, no explanation To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the income statement needed? Note that the income statement is sometimes also called the profit and loss, P&L, or statement of financial performance. Question 659 APR, effective rate, effective rate conversion, no explanation A home loan company advertises an interest rate of 9% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given with an accuracy of 4 decimal places. (a) The APR compounding monthly is 9.000% pa. (c) The effective annual rate is 9.3807% pa. Question 660 fully amortising loan, interest only loan, APR Question 661 systematic and idiosyncratic risk, CAPM A stock's total standard deviation of returns is 20% pa. The market portfolio's total standard deviation of returns is 15% pa. The beta of the stock is 0.8. What is the stock's diversifiable standard deviation? (a) 16% pa (c) 8% pa (d) 5% pa (e) 4% pa Which of the following interest rate labels does NOT make sense? (a) Annualised percentage rate compounding per month. (b) Effective monthly rate compounding per year. (c) Annualised percentage rate compounding per year. (d) Effective annual rate compounding per year. (e) Annualised percentage rate compounding semi-annually. Question 663 leverage, accounting ratio, no explanation A firm has a debt-to-assets ratio of 20%. What is its debt-to-equity ratio? What is the present value of real payments of $100 every year forever, with the first payment in one year? The nominal discount rate is 7% pa and the inflation rate is 4% pa. (a) $3,466.6667 Question 665 stock split A company conducts a 10 for 3 stock split. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order. (a) -76.92%, 333.33% (b) -70%, 333.33% (c) -70%, 233.33% (d) -57.14%, 233.33% (e) 233.33%, -70% Question 666 rights issue, capital raising A company conducts a 2 for 3 rights issue at a subscription price of $8 when the pre-announcement stock price was $9. Assume that all investors use their rights to buy those extra shares. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order. (a) -60%, 150% (b) -42.22%, 150% (c) -40%, 66.67% (d) -22.22%, 66.67% (e) -4.44%, 66.67% Question 617 systematic and idiosyncratic risk, risk, CAPM A stock's required total return will increase when its: Question 618 capital structure, no explanation Who owns a company's shares? The: (a) Company. (b) Debt holders. (c) Equity holders. (d) Chief Executive Officer (CEO). (e) Board of directors. Question 624 franking credit, personal tax on dividends, imputation tax system, no explanation Which of the following statements about Australian franking credits is NOT correct? Franking credits: (a) Refund the corporate tax paid by companies to their individual shareholders. Therefore they prevent the double-taxation of dividends at the corporate and personal level. (b) Are distributed to shareholders together with cash dividends. (c) Are also called imputation credits. (d) Are worthless to individuals who earn less than the tax-free threshold because they have a zero marginal personal tax rate. (e) Are worthless to individual shareholders who are foreigners for tax purposes. 
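Questions 659, 665 and 666 above are mechanical conversions: turning an APR compounding monthly into an effective annual rate, and working out how a stock split or a rights issue changes the share count and the theoretical ex-rights price. A small Python sketch with the quoted inputs (the figures come from the questions; the code itself is only illustrative).

# Rate-conversion and capital-raising arithmetic: a minimal sketch.

def effective_annual_rate(apr, m):
    # Effective annual rate for an APR compounding m times per year.
    return (1 + apr / m) ** m - 1

def split_effects(new, old):
    # A 'new for old' split: the price scales by old/new, the share count by new/old.
    return old / new - 1, new / old - 1

def rights_issue_terp(n_new, n_old, sub_price, cum_price):
    # Theoretical ex-rights price for an 'n_new for n_old' rights issue.
    terp = (n_old * cum_price + n_new * sub_price) / (n_old + n_new)
    return terp, terp / cum_price - 1, n_new / n_old

# Question 659: 9% pa APR compounding monthly.
print(effective_annual_rate(0.09, 12))     # approx. 0.093807, i.e. 9.3807% pa

# Question 665: a 10 for 3 stock split.
print(split_effects(10, 3))                # approx. (-0.70, 2.3333): -70% price, +233.33% shares

# Question 666: a 2 for 3 rights issue at $8 when the cum-rights price is $9.
print(rights_issue_terp(2, 3, 8.0, 9.0))   # approx. (8.60, -0.0444, 0.6667)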
Question 625 dividend re-investment plan, capital raising Which of the following statements about dividend re-investment plans (DRP's) is NOT correct? (a) DRP's are voluntary, shareholders only participate if they choose. (b) DRP's increase the number of shares. (c) The number of shares issued to a shareholder participating in a DRP is usually calculated as their total dividends owed, divided by the allocation share price which is usually close to the current market share price. (d) DRP's do not incur brokerage costs for the shareholder. This is unlike the case where the shareholder uses the cash dividend to buy more shares herself. (e) If all shareholders participated in a company's DRP, the company would not pay any dividends and the firm's share price would not fall due to the cash dividend or the DRP. For a price of $6, Carlos will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to his share or politely ? For a price of $10.20 each, Renee will sell you 100 shares. Each share is expected to pay dividends in perpetuity, growing at a rate of 5% pa. The next dividend is one year away (t=1) and is expected to be $1 per share. Would you like to the shares or politely ? Question 9 DDM, NPV For a price of $129, Joanne will sell you a share which is expected to pay a $30 dividend in one year, and a $10 dividend every year after that forever. So the stock's dividends will be $30 at t=1, $10 at t=2, $10 at t=3, and $10 forever onwards. A three year bond has a face value of $100, a yield of 10% and a fixed coupon rate of 5%, paid semi-annually. What is its price? (a) 87.31 (b) 86.92 (c) 76.37 (d) 74.62 (e) 58.63 What is the discount rate '## r_\text{eff} ##' in this equation? (b) The expected capital return of the stock. (c) The expected dividend return of the stock. (d) The expected income return of the stock. For a price of $100, Carol will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 12% pa. Would you like to her bond or politely ? For a price of $100, Rad will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa. Question 54 NPV, DDM After year 4, the annual dividend will grow in perpetuity at -5% pa. Note that this is a negative growth rate, so the dividend will actually shrink. So, the dividend at t=5 will be ##$1(1-0.05) = $0.95##, the dividend at t=6 will be ##$1(1-0.05)^2 = $0.9025##, and so on. What is the current price of the stock? (a) $7.2968 (b) $7.5018 (c) $7.6667 (d) $7.7522 What will be the price of the stock in four and a half years (t = 4.5)? Question 767 idiom, corporate financial decision theory, no explanation The sayings "Don't cry over spilt milk", "Don't regret the things that you can't change" and "What's done is done" are most closely related to which financial concept? (a) Competition. (b) Opportunity costs. (d) Diversification. (e) Sunk costs. Question 768 accounting terminology, book and market values, no explanation Accountants and finance professionals have lots of names for the same things which can be quite confusing. Which of the following groups of items are NOT synonyms? (a) Revenue, sales, turn over. (b) Paid up capital, contributed equity. (c) Shares, stock, equity. 
(d) Net income, earnings, net profit after tax, the bottom line. (e) Market capitalisation of equity, book value of equity. Question 770 expected and historical returns, income and capital returns, coupon rate, bond pricing, no explanation Which of the following statements is NOT correct? Assume that all things remain equal. So for example, don't assume that just because a company's dividends and profit rise that its required return will also rise, assume the required return stays the same. (a) If a company's dividend (and profit and free cash flow to equity) rises, its share price will rise. (b) If a fixed coupon bond's yield to maturity rises, its price will rise. (c) If a residential property's rent revenue rises, its price will rise. (d) If a patent's royalty revenue rises, its price will also rise. (e) If a software asset's licensing revenue rises, its price will also rise. Question 771 debt terminology, interest expense, interest tax shield, credit risk, no explanation You deposit money into a bank account. Which of the following statements about this deposit is NOT correct? (a) You have a debt asset. (b) The bank sold you its promise to pay back interest and principal payments. (c) The bank is exposed to your credit risk, it's afraid that you'll default on your debt. (d) The interest income you're paid is taxable income for you. (e) The interest expense that the bank pays is tax-deductible for the bank. Question 774 leverage, WACC, real estate One year ago you bought a $1,000,000 house partly funded using a mortgage loan. The loan size was $800,000 and the other $200,000 was your wealth or 'equity' in the house asset. The interest rate on the home loan was 4% pa. Over the year, the house produced a net rental yield of 2% pa and a capital gain of 2.5% pa. Assuming that all cash flows (interest payments and net rental payments) were paid and received at the end of the year, and all rates are given as effective annual rates, what was the total return on your wealth over the past year? (b) 2% pa (c) 4.5% pa (d) 6.5% pa (e) 8.5% pa Question 775 utility, utility function Below is a graph of 3 peoples' utility functions, Mr Blue (U=W^(1/2) ), Miss Red (U=W/10) and Mrs Green (U=W^2/1000). Assume that each of them currently have $50 of wealth. Which of the following statements about them is NOT correct? (a) Mr Blue would prefer to invest his wealth in a well diversified portfolio of stocks rather than a single stock, assuming that all stocks had the same total risk and return. (b) Mrs Green would prefer to invest her wealth in a single stock rather than a well diversified portfolio of stocks, assuming that all stocks had the same total risk and return. (c) The popularity of insurance only makes sense if people are similar to Mr Blue. (d) CAPM theory only makes sense if people are similar to Miss Red. (e) The popularity of casino gambling and lottery tickets only make sense if people are similar to Mrs Green. Question 776 market efficiency, systematic and idiosyncratic risk, beta, income and capital returns Which of the following statements about returns is NOT correct? A stock's: (a) Expected total return will equal its required total return if the stock is fairly priced. (b) Expected total return will be less than its required total return if the stock is over-priced. (c) Required total return should be higher than the risk free government bond yield if the stock has a positive beta. 
(d) Required total return depends on its total variance, required capital return depends on systematic variance and the dividend yield depends on idiosyncratic variance. (e) Expected capital return equals the expected total return less whatever the management decide that they will pay as an income return (which is the dividend yield). The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. A stock has a beta of 0.5. In the last 5 minutes, the federal government unexpectedly raised taxes. Over this time the share market fell by 3%. The risk free rate was unchanged. (a) -1% (b) -1.5% (e) -7.5% Question 779 mean and median returns, return distribution, arithmetic and geometric averages, continuously compounding rate Fred owns some BHP shares. He has calculated BHP's monthly returns for each month in the past 30 years using this formula: ###r_\text{t monthly}=\ln⁡ \left( \dfrac{P_t}{P_{t-1}} \right)### He then took the arithmetic average and found it to be 0.8% per month using this formula: ###\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.008=0.8\% \text{ per month}### He also found the standard deviation of these monthly returns which was 15% per month: ###\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.15=15\%\text{ per month}### Assume that the past historical average return is the true population average of future expected returns and the stock's returns calculated above ##(r_\text{t monthly})## are normally distributed. Which of the below statements about Fred's BHP shares is NOT correct? (a) The returns ##r_\text{t monthly}## are continuously compounded monthly returns. (b) The mean, median and mode of the continuously compounded annual return is expected to equal 0.8% per month. (c) The annualised average continuously compounded return is ##\bar{r}_\text{cc annual}=12×0.008=0.096=9.6\%\text{ pa}##. The annualised standard deviation is ##\sigma_\text{annual} = \sqrt{12}×0.15=0.519615242 = 52\%\text{ pa}##. (d) If the current price of the BHP shares is $20, they're expected to have a median value of $52.2339 ##(= 20×e^{0.008×12×10})## in 10 years. (e) If the current price of the BHP shares is $20, they're expected to have a mean value of $13.5411 ##(= 20×e^{(0.008 - 0.15^2/2)×12×10})## in 10 years. Question 798 idiom, diversification, market efficiency, sunk cost, no explanation The following quotes are most closely related to which financial concept? "Opportunity is missed by most people because it is dressed in overalls and looks like work" -Thomas Edison "The only place where success comes before work is in the dictionary" -Vidal Sassoon "The safest way to double your money is to fold it over and put it in your pocket" - Kin Hubbard (a) The efficient markets hypothesis. (b) Time value of money. Question 799 LVR, leverage, accounting ratio In the home loan market, the acronym LVR stands for Loan to Valuation Ratio. If you bought a house worth one million dollars, partly funded by an $800,000 home loan, then your LVR was 80%. The LVR is equivalent to which of the following ratios? (a) Debt to equity ratio. (b) Debt to assets ratio. (c) Book to market ratio. (d) Price earnings ratio. (e) Interest coverage ratio. 
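Question 779 above treats the monthly figures as continuously compounded returns, so the 10-year log return is normal with mean ##\bar{r} T## and variance ##\sigma^2 T## and the future price is log-normally distributed. Under that standard model the median future price is ##P_0 e^{\bar{r}T}## and the mean future price is ##P_0 e^{(\bar{r}+\sigma^2/2)T}##. The Python sketch below plugs in the quoted figures (0.8% and 15% per month, $20 today, 120 months); it illustrates the formulas rather than certifying any particular answer option.

# Log-normal price projection from iid continuously compounded returns: a sketch.
from math import exp, sqrt

def project_prices(p0, r_bar, sigma, periods):
    # Median and mean future price when per-period cc returns are iid Normal(r_bar, sigma^2).
    mu_t = r_bar * periods            # mean of the summed log return
    var_t = sigma ** 2 * periods      # variance of the summed log return
    median_price = p0 * exp(mu_t)
    mean_price = p0 * exp(mu_t + var_t / 2)   # log-normal mean uses +sigma^2/2
    return median_price, mean_price

print(project_prices(20.0, 0.008, 0.15, 120))   # median approx. 52.23, mean above it

# Annualised moments of the cc return under the iid assumption:
print(0.008 * 12, sqrt(12) * 0.15)              # 0.096 pa and approx. 0.5196 pa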
Question 800 leverage, portfolio return, risk, portfolio risk, capital structure, no explanation Which of the following assets would you expect to have the highest required rate of return? All values are current market values. (a) $1000 of Australian federal government bonds. (b) $2 million of Australian federal government bonds. (c) A $1 million residential house asset. (d) A home loan secured on a $1 million residential house asset with an LVR of 80%. (e) Equity (the deposit) in a $1 million residential house asset with an LVR of 80%. Question 801 negative gearing, leverage, capital structure, no explanation The following steps set out the process of 'negative gearing' an investment property in Australia. Which of these steps or statements is NOT correct? To successfully achieve negative gearing on an investment property: (a) Buy an investment property with low expected future capital gains and high rental income, funded using a low proportion of debt (mortgage loan); (b) Rent the investment property to a tenant; (c) Negative gearing is achieved when the property investment's annual taxable profit is negative, due to the loan's interest expense being greater than the net rent (rental revenue less the renting expenses such as maintenance); (d) Deduct the property's income loss from the investor's separate personal wage from labour. This means that the investor will pay less personal income tax than if she didn't make the levered property investment; (e) The investment property will be a positive-NPV investment if the capital gains are greater than the income losses on an after-tax, risk-adjusted, present-value basis. Which of the following statements about 'negative gearing' is NOT correct? (a) Negative gearing provides benefits by increasing the investor's personal pre-tax income in the current year. (b) Negative gearing works best when the investor has a high personal income so he's in a high personal marginal income tax bracket. (c) Negative gearing works best when the asset makes large capital gains but has low income (such as rent). (d) The more leverage, the bigger the interest tax shield benefit of negative gearing. (e) Real estate and shares can be negatively geared. Question 803 capital raising, rights issue, initial public offering, on market repurchase, no explanation Which one of the following capital raisings or payouts involve the sale of shares to existing shareholders only? (a) Rights issues. (b) Private placements. (c) Initial public offerings. (d) Seasoned equity offerings. (e) On-market buy backs. Use the below information to value a levered company with annual perpetual cash flows from assets that grow. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Note that 'k' means kilo or 1,000. So the $30k is $30,000. ##\text{CFFA}_\text{U}## $30k Cash flow from assets excluding interest tax shields (unlevered) ##g## 1.5% pa Growth rate of cash flow from assets, levered and unlevered ##r_\text{EL}## 16.3% pa Cost of levered equity (a) The WACC before tax is 6.46% pa. (b) The WACC after tax is 5.55% pa. (c) The current value of the firm's levered assets including tax shields is $603.839k. (d) The current value of debt is $600k. (e) The benefit from interest tax shields in the first year is $7.2k. Question 805 short selling Short selling is a way to make money from falling prices. In what order must the following steps be completed to short-sell an asset? Let Tom, Dick and Harry be traders in the share market. 
Step P: Purchase the asset from Harry. Step G: Give the asset to Tom. Step W: Wait and hope that the asset price falls. Step B: Borrow the asset from Tom. Step S: Sell the asset to Dick. Select the statement with the correct order of steps. (a) P, G, W, B, S. (b) B, G, W, P, S. (c) B, W, G, P, S. (d) P, W, S, B, G. (e) B, S, W, P, G. Question 806 stock split, no explanation A firm conducts a two-for-one stock split. Which of the following consequences would NOT be expected? (a) The share price would halve. (b) The number of shares would double. (c) The market capitalisation of equity would be unchanged. (d) The earnings per share (EPS) would halve. (e) The price-earnings (PE) ratio would halve. Question 807 market efficiency, expected and historical returns, CAPM, beta, systematic risk, no explanation You work in Asia and just woke up. It looked like a nice day but then you read the news and found out that last night the American share market fell by 10% while you were asleep due to surprisingly poor macro-economic world news. You own a portfolio of liquid stocks listed in Asia with a beta of 1.6. When the Asian equity markets open, what do you expect to happen to your share portfolio? Assume that the capital asset pricing model (CAPM) is correct and that the market portfolio contains all shares in the world, of which American shares are a big part. Your portfolio beta is measured against this world market portfolio. When the Asian equity market opens for trade, you would expect your portfolio value to: (a) Remain unchanged. (b) Instantaneously fall by 10%. (c) Slowly fall by 10% over the day. (d) Instantaneously fall by 16%. (e) Slowly fall by 16% over the day. Question 808 Markowitz portfolio theory, portfolio return A graph of assets' expected returns ##(\mu)## versus standard deviations ##(\sigma)## is given in the below diagram. Each letter corresponds to a separate coloured area. The portfolios at the boundary of the areas, on the black lines, are excluded from each area. Assume that all assets represented in this graph are fairly priced, and that all risky assets can be short-sold. Which of the following statements about this graph and Markowitz portfolio theory is NOT correct? (a) Area A contains ideal portfolios but unfortunately these portfolios are unattainable. (b) Area B portfolios are only attainable if investors can borrow at the risk free rate, meaning that they can risklessly short sell government bonds and only pay the risk free rate to do so. (c) Area C contains all individual risky assets and portfolios of those risky assets. (d) Area D portfolios are only attainable if investors can borrow at the risk free rate, meaning that they can risklessly short sell government bonds and only pay the risk free rate to do so, similarly to Area B. (e) Area E contains unattainable portfolios. Question 809 Markowitz portfolio theory, CAPM, Jensens alpha, CML, systematic and idiosyncratic risk A graph of assets' expected returns ##(\mu)## versus standard deviations ##(\sigma)## is given in the graph below. The CML is the capital market line. Which of the following statements about this graph, Markowitz portfolio theory and the Capital Asset Pricing Model (CAPM) theory is NOT correct? (a) The market portfolio M has systematic risk only. It's a fully diversified portfolio comprised of all individual risky assets. The market portfolio is usually assumed to be the equity index, such as the ASX200 in Australia or the S&P500 in the US. (b) The risk free security has no risk at all. 
Government bonds are usually assumed to be the risk-free security. (c) Portfolio combinations of the market portfolio and risk free security will plot on the CML and will have systematic risk only. They will have no diversifiable risk. (d) The portfolios on the CML with a return above ##r_f## have maximum return for any given level of risk. (e) The individual assets and portfolios with returns less than the risk free rate are over-priced, have a negative Jensen's alpha and should be sold. Question 810 CAPM, systematic and idiosyncratic risk, market efficiency Examine the graphs below. Assume that asset A is a single stock. Which of the following statements is NOT correct? Asset A: (a) Has a beta of 0.5. (b) Is fairly priced. (c) Has systematic standard deviation equal to 10% pa. (d) Has diversifiable standard deviation equal to 20% pa. (e) Has total standard deviation equal to 40% pa. Question 812 rights issue, no explanation A firm is about to conduct a 2-for-7 rights issue with a subscription price of $10 per share. They haven't announced the capital raising to the market yet and the share price is currently $13 per share. Assume that every shareholder will exercise their rights, the cash raised will simply be put in the bank, and the rights issue is completed so quickly that the time value of money can be ignored. Disregard signalling, taxes and agency-related effects. Which of the following statements about the rights issue is NOT correct? After the rights issue is completed: (a) The number of shares will rise by 2/7 or around 28.57%. (b) The share price will fall by 2/7 or around 28.57%. (c) The firm's market capitalisation of equity will rise by 26/70 or around 37.14%. (d) The market share price is expected to be $10.67. (e) The company will have more cash. Question 811 log-normal distribution, mean and median returns, return distribution, arithmetic and geometric averages Which of the following statements about probability distributions is NOT correct? (a) A normally distributed variable's mean, median and mode are all expected to be equal. (b) A log-normally distributed variable's mean, median and mode are all expected to be different. Mean > median > mode. (c) Future stock prices ##(P_t )## are often assumed to be log-normally distributed. (d) Stocks' annual net discrete returns ##(P_t/P_0-1)## must be normally distributed when stock prices ##(P_t)## are log-normally distributed. (e) The total area under a probability density function (pdf) is always one. Question 66 CAPM, SML Government bonds currently have a return of 5% pa. A stock has an expected return of 6% pa and the market return is 7% pa. What is the beta of the stock? (a) -0.5 Question 70 payout policy Due to floods overseas, there is a cut in the supply of the mineral iron ore and its price increases dramatically. An Australian iron ore mining company therefore expects a large but temporary increase in its profit and cash flows. The mining company does not have any positive NPV projects to begin, so what should it do? Select the most correct answer. (a) Pay out the excess cash by increasing the regular dividend, and cutting it later. (b) Pay out a special dividend. (c) Conduct an on or off-market share repurchase. (d) Conduct a share dividend (also called a 'bonus issue'). (e) Either b or c. Question 72 CAPM, portfolio beta, portfolio risk deviation Correlation Beta Dollars A 0.2 0.4 0.12 0.5 40 B 0.3 0.8 1.5 80 What is the beta of the above portfolio? 
(b) 0.833333333 (d) 1.166666667 (e) 1.4 deviation Covariance ##(\sigma_{A,B})## Beta Dollars What is the standard deviation (not variance) of the above portfolio? Note that the stocks' covariance is given, not correlation. Question 74 WACC, capital structure, CAPM A firm's weighted average cost of capital before tax (##r_\text{WACC before tax}##) would increase due to: (a) The firm issuing more debt and using the proceeds to repurchase stock. (b) The firm issuing more equity and using the proceeds to pay off debt holders. (c) The firm's industry becoming more systematically risky, for example if it was a mining company whose performance became more sensitive to countries' GDP growth, so the correlation of the firm's returns with the market was higher. (d) The firm's industry becoming less systematically risky, for example if it was a child care centre and the government announced higher subsidies for parents using child care centres, so the correlation of the firm's returns with the market was lower. Question 707 continuously compounding rate, continuously compounding rate conversion Convert a 10% effective annual rate ##(r_\text{eff annual})## into a continuously compounded annual rate ##(r_\text{cc annual})##. The equivalent continuously compounded annual rate is: (a) 230.258509% pa (b) 10.536052% pa (e) 9.531018% pa A continuously compounded semi-annual return of 5% ##(r_\text{cc 6mth})## is equivalent to a continuously compounded annual return ##(r_\text{cc annual})## of: (d) 10.25% pa Question 713 effective rate conversion An effective semi-annual return of 5% ##(r_\text{eff 6mth})## is equivalent to an effective annual return ##(r_\text{eff annual})## of: Convert a 10% continuously compounded annual rate ##(r_\text{cc annual})## into an effective annual rate ##(r_\text{eff annual})##. The equivalent effective annual rate is: Question 709 continuously compounding rate, APR Which of the following interest rate quotes is NOT equivalent to a 10% effective annual rate of return? Assume that each year has 12 months, each month has 30 days, each day has 24 hours, each hour has 60 minutes and each minute has 60 seconds. APR stands for Annualised Percentage Rate. (a) 9.7617696% is the APR compounding semi-annually ##(r_\text{apr comp 6mth})## (b) 9.5689685% is the APR compounding monthly ##(r_\text{apr comp monthly})## (c) 9.6454756% is the APR compounding daily ##(r_\text{apr comp daily})## (d) 9.5310182% is the APR compounding per second ##(r_\text{apr comp per second})## (e) 9.5310180% is the continuously compounded rate per annum ##(r_\text{cc annual})## A continuously compounded monthly return of 1% ##(r_\text{cc monthly})## is equivalent to a continuously compounded annual return ##(r_\text{cc annual})## of: (a) 12.682503% pa An effective monthly return of 1% ##(r_\text{eff monthly})## is equivalent to an effective annual return ##(r_\text{eff annual})## of: Question 714 return distribution, no explanation Which of the following quantities is commonly assumed to be normally distributed? (a) Prices, ##P_1##. (b) Gross discrete returns per annum, ##r_{\text{gdr 0 }\rightarrow \text{ 1}} = \dfrac{P_1}{P_0} ##. (c) Effective annual returns per annum also known as net discrete returns, ##r_{\text{eff 0 }\rightarrow \text{ 1}} = \dfrac{P_1 - P_0}{P_0} = \dfrac{P_1}{P_0}-1##. (d) Continuously compounded returns per annum, ##r_{\text{cc 0 }\rightarrow \text{ 1}} = \ln \left( \dfrac{P_1}{P_0} \right)##. 
(e) Annualised percentage rates compounding per month, ##r_{\text{apr comp monthly 0 }\rightarrow \text{ 1 mth}} = \left( \dfrac{P_1 - P_0}{P_0} \right) \times 12##. Question 716 return distribution The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Which of the below statements is NOT correct? (a)##-1 < \text{Red} < \infty## if Red is log-normally distributed. (b) ##-2 < \text{Green} < 2## if Green is normally distributed. (c) ##0 < \text{Blue} < \infty## if Blue is log-normally distributed. (d) If the Green distribution is normal, then the mode = median = mean. (e) If the Red and Blue distributions are log-normal, then the mode < median < mean. Question 725 return distribution, mean and median returns If a stock's future expected effective annual returns are log-normally distributed, what will be bigger, the stock's or effective annual return? Or would you expect them to be ? Fred owns some Commonwealth Bank (CBA) shares. He has calculated CBA's monthly returns for each month in the past 20 years using this formula: He then took the arithmetic average and found it to be 1% per month using this formula: ###\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.01=1\% \text{ per month}### He also found the standard deviation of these monthly returns which was 5% per month: ###\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.05=5\%\text{ per month}### Which of the below statements about Fred's CBA shares is NOT correct? Assume that the past historical average return is the true population average of future expected returns. (a) The returns ##r_\text{t monthly}## are continuously compounded monthly returns and are likely to be normally distributed, so the mean, median and mode of these continuously compounded returns are all equal to 1% per month. (b) The annualised average continuously compounded return is ##\bar{r}_\text{cc annual}=12×0.01=0.1=12\%\text{ pa}##. The annualised standard deviation is ##\sigma_\text{annual} = \sqrt{12}×0.05=0.173205081=17.32\%\text{ pa}##. (c) Over the next 10 years the mean, median and mode continuously compounded 10 year returns will all equal ##0.01 \times 12 \times 10 = 120\%##. (d) Over the next 10 years the expected mean gross discrete 10 year return is ##e^{(0.01 - 0.05^2/2)×12×10} = 2.857651118## and the median gross discrete 10 year return is ##e^{(0.01 + 0.05^2/2)×12×10}=3.857425531##. (e) If the current price of the CBA shares is $75, then in 10 years they're expected to have a mean value of ##75×e^{(0.01 + 0.05^2/2)×12×10} = 289.3069148## and a median value of ##75×e^{0.01×12×10}=249.0087692##. Here is a table of stock prices and returns. Which of the statements below the table is NOT correct? Price and Return Population Statistics Time Prices LGDR GDR NDR 1 50 -0.6931 0.5 -0.5 2 100 0.6931 2 1 Arithmetic average 0 1.25 0.25 Arithmetic standard deviation -0.6931 0.75 0.75 (a) The geometric average of the gross discrete returns (GAGDR) equals 1 which is 100%. (b) ##\text{GAGDR} = \exp \left( \text{AALGDR} \right)##. The GAGDR is equal to the natural exponent of the arithmetic average of the logarithms of the gross discrete returns (AALGDR). (c) ##\text{GAGDR} = \left( P_T/P_0 \right)^{1/T}##. The GAGDR equals the ratio of the last and first prices raised to the power of the inverse of the number of time periods between them. 
(d) ##\text{LGAGDR} = \text{AALGDR}##. This is always true, regardless of the distribution of the prices or returns and the number of return observations. The logarithm of the geometric average of the gross discrete returns (LGAGDR) is always equal to the arithmetic average of the logarithms of the gross discrete returns (AALGDR).
(e) ##\text{LAAGDR} = \text{AALGDR} + \text{SDLGDR}^2/2##. This is always true, regardless of the distribution of the prices or returns and the number of return observations. The logarithm of the arithmetic average of the gross discrete returns (LAAGDR) equals the arithmetic average of the logarithms of the gross discrete returns (AALGDR) plus half the variance of the LGDR's.

Question 671 future, forward, hedging
It's possible for both parties in a futures or forward contract to be hedging, so neither are speculating. True or false?
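The price/return table a few questions above (prices moving 100, 50, 100, with their log gross discrete returns, gross discrete returns and net discrete returns) is easy to reproduce in code, which also makes statements (a) to (c) concrete: the arithmetic average of the LGDRs is zero, its exponential is the geometric average gross discrete return, and that equals ##(P_T/P_0)^{1/T}##. A minimal Python sketch; the starting price of 100 is inferred from the table's first log return of -0.6931.

# Reproducing the price/return statistics from the table above: a minimal sketch.
# The time-0 price of 100 is inferred from LGDR_1 = ln(50/P_0) = -0.6931.
from math import log, exp, sqrt

prices = [100.0, 50.0, 100.0]   # t = 0, 1, 2

lgdr = [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]   # log gross discrete returns
gdr = [p1 / p0 for p0, p1 in zip(prices, prices[1:])]         # gross discrete returns
ndr = [g - 1 for g in gdr]                                    # net discrete returns

aalgdr = sum(lgdr) / len(lgdr)     # arithmetic average LGDR -> 0
aagdr = sum(gdr) / len(gdr)        # arithmetic average GDR -> 1.25
aandr = sum(ndr) / len(ndr)        # arithmetic average NDR -> 0.25
sdlgdr = sqrt(sum((x - aalgdr) ** 2 for x in lgdr) / len(lgdr))   # population sd -> approx. 0.6931

gagdr_from_logs = exp(aalgdr)                                    # exp(AALGDR) -> 1.0
gagdr_from_prices = (prices[-1] / prices[0]) ** (1 / len(lgdr))  # (P_T/P_0)^(1/T) -> 1.0

print(aalgdr, aagdr, aandr, sdlgdr)
print(gagdr_from_logs, gagdr_from_prices)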
CommonCrawl
\begin{document} \begin{abstract} Let $E$ be an elliptic curve defined over $\mathbf{Q}$ of conductor $N$, let $M$ be the Manin constant of $E$, and $C$ be the product of local Tamagawa numbers of $E$ at prime divisors of $N$. Let $K$ be an imaginary quadratic field in which each prime divisor of $N$ splits, $P_K$ be the Heegner point in $E(K)$, and $\Sh(E/K)$ be the Tate--Shafarevich group of $E$ over $K$. Also, let $2u_K$ be the number of roots of unity contained in $K$. In \cite{GZ}, Gross and Zagier conjectured that if $P_K$ has infinite order in $E(K)$, then the integer $u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2}$ is divisible by $\#E(\QQ)_\mathrm{tors}$. In this paper, we show that this conjecture is true. \end{abstract} \title{On a conjecture of Gross and Zagier} \tableofcontents \section{Introduction} The goal of this paper is to prove a conjecture made by Gross and Zagier in \cite{GZ} concerning certain divisibility among arithmetic invariants of elliptic curves. This gives a theoretical evidence to the ``strong form'' of Birch and Swinnerton-Dyer conjecture, predicting that the leading coefficient of the Hasse--Weil $L$-function of an elliptic curve encodes some precise arithmetic invariants of the curve. In \cite{GZ}, Gross and Zagier gave a formula for the first derivative at $s=1$ of $L$-series of certain modular forms. In particular, they transferred the formula to the realm of $L$-functions of elliptic curves. So let $E$ be an elliptic curve defined over $\mathbf{Q}$ with conductor $N$. For a negative square-free integer $d$, we consider the quadratic twist $E_d$ of $E$ which is in general \emph{not} isomorphic to $E$ over $\mathbf{Q}$ but becomes isomorphic over the imaginary quadratic field $K = \mathbf{Q}(\sqrt{d})$. We denote the discriminant of $K$ over $\mathbf{Q}$ by $\disc(K)$ which is equal to $d$ when $d \equiv 1 \pmod{4}$ and to $4d$ otherwise. We also assume a close relation between $E$ and $K$ in such a way that each prime number dividing $N$ splits completely in $K$. This is called the \emph{Heegner condition} or \emph{Heegner hypothesis} in the literature, which we assume throughout this paper. The corresponding $L$-functions are also strongly related: we have $L(E/K, s) = L(E/\mathbf{Q},s) \cdot L(E_d/\mathbf{Q},s)$. By computing root numbers, the Heegner condition forces that $L(E/K,1)=0$. Throughout this paper, we use the following notations. \begin{itemize} \item $N$ is the conductor of $E$. \item $\omega$ is the \emph{Néron differential} of $E$ over $\mathbf{Q}$ and $\| \omega \|^2 := \int_{E(\mathbf{C})} | \omega \wedge \bar{\omega} |$ is the complex period. \item $\hat{h}$ is the \emph{Néron--Tate height} attached to $E$. \item $M$ is the \emph{Manin constant} of $E$, i.e., if $f$ is the newform attached to $E$ and $\pi: X_0(N) \to E$ is a modular parametrisation, then $M$ is the ratio satisfying $\pi^* \omega = M \cdot 2 \pi i f(\tau) d\tau$. We have $M \in \mathbf{Q}^\times$ and a famous conjecture of Y. Manin is that $M = 1$ for all \emph{strong Weil curves} $E$. For general discussions on the constant and current status about the conjecture, see \cite{ARS}. \item $P_K \in E(K)$ is the \emph{Heegner point} over $K$. This depends on the elliptic curve and its modular parametrisation chosen. \item $2u_K$ is the number of roots of unity contained in the field $K$. 
$u_K = 1$ for all imaginary quadratic fields $K$ except when $K = \mathbf{Q}(\sqrt{-1})$ and $K = \mathbf{Q}(\sqrt{-3})$, in these cases we have $u_K = 2$ and $u_K = 3$ respectively. \item $C$ is the \emph{Tamagawa number} of $E$ over $\mathbf{Q}$ which is defined by the product $C = \prod_{p \mid N} C_p$ of all local Tamagawa numbers. \end{itemize} Now the main theorem of Gross and Zagier (\cite{GZ}, Theorem I.6.3) has the following consequence. \begin{theorem}[\cite{GZ}, Theorem V.2.1] \begin{equation}\label{eq:GZ_formula} L'(E/K, 1) = \frac{\| \omega \|^2 \cdot \hat{h}(P_K)}{M^2 \cdot u_K^2 \cdot |\disc(K)|^{1/2}}. \end{equation} \end{theorem} Now the Birch and Swinnerton-Dyer conjecture comes into the picture. We assume here and thereafter that the Heegner point $P_K$ has infinite order, so that $L'(E/K,1) \neq 0$. For more details for the following conjecture, we refer \cite{AEC}, appendix C.16. \begin{conjecture}[Birch and Swinnerton-Dyer]\label{conj:BSD} If $\ord_{s=1} L(E/K, s) = 1$, then the Tate--Shafarevich group $\Sh(E/K)$ of $E$ over $K$ is finite, and $L'(E/K, s) = \mathrm{BSD}_{E/K}$, where \begin{equation}\label{eq:BSD_formula} \mathrm{BSD}_{E/K} = \frac{\| \omega \|^2 \cdot C^2 \cdot \hat{h}(P_K) \cdot \# \Sh(E/K) }{ | \disc(K) |^{1/2} \cdot \left[ E(K) : \mathbf{Z} P_K \right]^2}. \end{equation} \end{conjecture} \begin{remark} In the literature, the factor $C^2$ in the right hand side of the equation \eqref{eq:BSD_formula} is replaced by the Tamagawa number of $E$ over the extension $K$. However, by the Heegner hypothesis, any prime $p$ dividing $N$ splits in $K$ like $p = \mathfrak{p} \overline{\mathfrak{p}}$, and thus the number is equal to the square $C^2$ of the Tamagawa number of $E$ over $\mathbf{Q}$. \end{remark} \begin{remark} The Tate--Shafarevich group $\Sh(E/K)$ is in fact finite in this case (cf. Theorem 5 in \cite{Kol}). \end{remark} Equating the above two formulae \eqref{eq:GZ_formula} and \eqref{eq:BSD_formula}, Gross and Zagier obtained the following conjecture. \begin{conjecture}[\cite{GZ}, Conjecture V.2.2, Strong Gross--Zagier Conjecture]\label{conj:GZ_strong_conjecture} If $P_K$ has infinite order in $E(K)$, then $\mathbf{Z} P_K$ has finite index in $E(K)$ and we have \begin{equation} [E(K): \mathbf{Z} P_K ] = u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2}. \end{equation} \end{conjecture} As the order of the rational torsion subgroup $E(\QQ)_\mathrm{tors}$ clearly divides the index $\left[ E(K) : \mathbf{Z} P_K \right]$, they also obtained a weaker version of the conjecture, which we call ``the Gross--Zagier conjecture'' throughout this paper. \begin{conjecture}[\cite{GZ}, Conjecture V.2.3, Weak Gross--Zagier Conjecture]\label{conj:GZ_conjecture} If $E(K)$ has analytic rank 1, then the integer $u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2}$ is divisible by $\# E(\QQ)_\mathrm{tors}$. \end{conjecture} Rational torsion subgroups of elliptic curves $E$ over $\mathbf{Q}$ are completely classified by Mazur \cite{Ma78}: $E(\QQ)_\mathrm{tors}$ is isomorphic to one of the following groups: \begin{equation*} \begin{cases} \mathbf{Z}/n\mathbf{Z} & \text{ for } 1 \le n \le 10,\; n=12, \\ \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/n\mathbf{Z}& \text{ for } n = 2, 4, 6, 8. \end{cases} \end{equation*} In \cite{Lo}, Lorenzini obtained the following theorem. \begin{theorem} [\cite{Lo}, Proposition 1.1] Let $E$ be an elliptic curve defined over $\mathbf{Q}$ with a $\mathbf{Q}$-rational point of order $k$. 
Then the following statements hold with at most five explicit exceptions for a given $k$. The exceptions are given by their labels in Cremona's table \cite{Cr}. \begin{enumerate} \item If $k=4$, then $2 \mid C$, except for `15a7', `15a8', and `17a4'. \item If $k=5,6$, or $12$, then $k \mid C$, except for `11a3', `14a4', `14a6', and `20a2'. \item If $k =7,8$, or $9$, then $k^2 \mid C$, except for `15a4', `21a3', `26b1', `42a1', `48a6', `54b3', and `102b1'. \item If $k=10$, then $50 \mid C$. \end{enumerate} Without exception, $k \mid C$ if $k=7,8,9,10$ or $12$. \end{theorem} For the exceptions of above proposition, we can check that $\# E(\QQ)_\mathrm{tors} $ divides $C \cdot M$, except for `15a7', which is considered in \S \ref{section:torgp_type_4}. So the only remaining cases for the validity of the conjecture are those when $E(\QQ)_\mathrm{tors}$ is isomorphic to the following 6 groups: $\mathbf{Z}/2\mathbf{Z}$, $\mathbf{Z}/3\mathbf{Z}$, $\mathbf{Z}/4\mathbf{Z}$, $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$, $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$, and $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$. Our goal here is to prove these remaining cases, thus to complete the proof of the conjecture. \begin{maintheorem} Let $E$ be an elliptic curve defined over $\mathbf{Q}$ such that the rational torsion subgroup $E(\QQ)_\mathrm{tors}$ is isomorphic to one of the 6 groups: $\mathbf{Z}/2\mathbf{Z}$, $\mathbf{Z}/3\mathbf{Z}$, $\mathbf{Z}/4\mathbf{Z}$, $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$, $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$, and $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$. Let $K$ be an imaginary quadratic field such that $E(K)$ is of (analytic) rank 1 and that $K$ satisfies the Heegner condition. Then the conjecture \ref{conj:GZ_conjecture} is true, i.e., $\#E(\QQ)_\mathrm{tors}$ divides $C \cdot M \cdot u_k \cdot \left( \# \Sh(E/K) \right)^{1/2}$. \end{maintheorem} From now on, $E$ always denotes an elliptic curve defined over $\mathbf{Q}$ having torsion subgroup isomorphic to one of the above 6 groups, and $K$ is always an imaginary quadratic field such that $\ord_{s=1} L(E/K, s) = 1$ and that $K$ satisfies the Heegner hypothesis. Let us briefly explain how to prove the Main Theorem. The present article is divided into two parts. The first part (\S \ref{section:preliminaries_part_1} $\sim$ \S \ref{section:torgp_type_2}) is dealing with the case that $E(\QQ)_\mathrm{tors}$ has order a power of 2. When $E(\QQ)_\mathrm{tors}$ contains full 2-torsion subgroup $E[2]$, i.e., when $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$, or $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$, the situations are a lot easier than the other cases, and we can prove the Main Theorem by computing Tamagawa numbers using Tate's algorithm (\S \ref{section:torgp_type_2_4} and \S \ref{section:torgp_type_2_2}). For the other cases, i.e., when $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z}$ or $\mathbf{Z}/4\mathbf{Z}$ , there are curves having Tamagawa numbers not divisible by $\# E(\QQ)_\mathrm{tors}$, so we need to compute the size of the 2-torsion part of the Tate--Shafarevich groups over $K$ using Kramer's formula. There are some `exceptional families' for which $C \cdot \left( \# \Sh(E/K) \right)^{1/2}$ does not have enough power of 2. For these cases, we avoid difficulties by considering isogeny invariance of the Gross--Zagier conjecture. 
Kramer's formula and the isogeny invariance are located at the heart of the techniques in the proof, so in the preliminary section \S \ref{section:preliminaries_part_1} we give sufficient background on these techniques.
The second part (\S \ref{section:preliminaries_part_2} $\sim$ \S \ref{section:torgp_type_3}) is devoted to the case in which $E(\QQ)_\mathrm{tors}$ has a rational torsion point of order 3. When $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$, we can prove the Main Theorem by computing only Tamagawa numbers (\S \ref{section:torgp_type_2_6}). But when $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/3\mathbf{Z}$, there are also curves having Tamagawa numbers not divisible by $\# E(\QQ)_\mathrm{tors}$, so we need to compute a lower bound for the size of the 3-torsion part of the Tate--Shafarevich groups over $K$ using Cassels' formula, or need to compute the Manin constants using the phenomenon that optimal curves differ by a 3-isogeny (\S \ref{section:torgp_type_3}). As in Part 1, the preliminaries are summarised in \S \ref{section:preliminaries_part_2}.
All explicit computations in this paper were done using Sage Mathematics Software \cite{sagemath}.
When we do computations with Weierstrass equations, we frequently change the variables of an equation to obtain another. In particular, when we use the clause ``make a change of variables via $[u,r,s,t]$'', it should be understood to take the change of variables formula given by
\begin{equation*} x = u^2 x' + r \quad \text{and} \quad y = u^3 y' + u^2 s x'+ t. \end{equation*}
For the details, we refer to \cite{AEC}, \S III.1.
\part{$E(\mathbf{Q})_\mathrm{tors}$ has order a power of 2}
\section{Preliminaries for Part 1} \label{section:preliminaries_part_1}
\subsection{Kramer's formula} \label{subsection:Kramer}
In this subsection we introduce a formula of Kramer \cite{Kr}, and discuss how to measure the size of the Tate--Shafarevich group of an elliptic curve using it. Of course the purpose of this section is to provide a tool to show the Main Theorem for the cases $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z}$ or $\mathbf{Z}/4\mathbf{Z}$. Thus, throughout this subsection, we assume $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z}$ or $\mathbf{Z}/4\mathbf{Z}$, and consequently $E(\mathbf{Q})[2] \simeq \mathbf{Z}/2\mathbf{Z}$.
Since the Tate--Shafarevich group $\Sh(E/K)$ is finite (Theorem 5 in \cite{Kol}), its 2-primary part $\Sh(E/K)[2^\infty]$ has perfect square order. So if we find a non-trivial element in $\Sh(E/K)[2]$ (or equivalently $\dim_{\mathbf{F}_2} \Sh(E/K)[2] \ge 1$), we can immediately see that $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$. So in this subsection we concentrate on how to find such a non-trivial element.
Let $p$ be a prime number. We use the following notations.
\begin{itemize}
\item $i_p = \dim_{\mathbf{F}_2} \Coker N = \dim_{\mathbf{F}_2} E(\mathbf{Q}_p)/NE(K_\mathfrak{p})$, where $N: E(K_\mathfrak{p}) \to E(\mathbf{Q}_p)$ is the \emph{norm map}. This quantity is called the \emph{local norm index} of $E$ at $p$.
\item Let
\begin{equation*} \Phi = \left\{ \xi \in \Sel^2(E/\mathbf{Q}) : \xi \in N_p\left( \prod_{\mathfrak{p} \mid p} \Sel^2(E/K_\mathfrak{p}) \right) \right\}. \end{equation*}
This group is called the \emph{everywhere-local norm group}.
\item $NS'$ is the image of the norm map $\Sel^2(E/K) \to \Sel^2(E/\mathbf{Q})$, which we do not need in this paper.
\end{itemize} \begin{theorem}[\cite{Kr}, Theorem 1] \label{th:Kr} The dimension of $\Sh (E/K)[2]$ (over $\mathbf{F}_2$) is equal to \begin{equation*} \sum i_\ell + \dim_{\mathbf{F}_2} \Phi + \dim_{\mathbf{F}_2} NS' - \rk E(K) - 2 \dim_{\mathbf{F}_2} E(\mathbf{Q})[2], \end{equation*} where the sum is taken over all primes (including infinity) of $\mathbf{Q}$. \end{theorem} Back to our case. Because $\rk E(K)=1$ and $E(\mathbf{Q})[2] \simeq \mathbf{Z}/2\mathbf{Z}$, by Theorem \ref{th:Kr}, $\dim_{\mathbf{F}_2} \Sh(E/K)[2] \ge 1$ if and only if the quantity \begin{equation*} \sum i_\ell + \dim_{\mathbf{F}_2} \Phi + \dim_{\mathbf{F}_2} NS' \end{equation*} is greater than or equal to 4. \subsubsection{Local norm indices} For general introduction and useful facts about the numbers $i_p$, we refer \S 4 of \cite{Ma72} and \S 2 of \cite{Kr}. We only concern those numbers relevant to our situation. The proof of the following proposition can be found in \S 2 of \cite{Kr}. \begin{proposition}\label{prop:computing_local_norm_indices} Let $E$ be an elliptic curve over $\mathbf{Q}$ with $E(\mathbf{Q})[2] \simeq \mathbf{Z}/2\mathbf{Z}$ and let $K =\mathbf{Q}(\sqrt{d})$ be an imaginary quadratic field satisfying the Heegner hypothesis. The local norm indices $i_\ell$ for various primes $\ell$ are given as follows. \begin{enumerate} \item One has $i_\infty = i(\mathbf{C}\mid\mathbf{R}) = \begin{cases} 0 & \text{ if } \Delta_\mathrm{min} < 0, \\ 1 & \text{ if } \Delta_\mathrm{min} > 0. \end{cases}$ \item Let $p$ be an odd prime. If $p$ is a good prime for $E$ and is ramified in $K$, then one has $i_p = \dim_{\mathbf{F}_2} E[2](k)$, where $k$ is the residue field of $\mathbf{Q}_p$. Otherwise one has $i_p = 0$. \item If $2$ is a good prime for $E$ and is ramified in $K$, then one has $i_2 = \begin{cases} 2 & \text{if $(\Delta_\mathrm{min},d)_{\mathbf{Q}_2} = +1$,} \\ 1 & \text{if $(\Delta_\mathrm{min},d)_{\mathbf{Q}_2} = -1$,} \end{cases}$ where $(-,-)_{\mathbf{Q}_2}$ denotes the Hilbert norm-residue symbol. Otherwise, one has $i_2 =0$. \end{enumerate} \end{proposition} \subsubsection{Everywhere-local norm group} Now we provide a way to compute the everywhere-local norm group $\Phi$. The following is the key. \begin{proposition}[\cite{Kr}, Proposition 7]\label{prop:str_of_Phi} The everywhere-local norm group $\Phi$ is the intersection of $\Sel^2(E/\mathbf{Q})$ and $\Sel^2(E_d/\mathbf{Q})$ inside $H^1(\mathbf{Q}, E[2]) \simeq H^1(\mathbf{Q}, E_d[2])$, where $E_d$ is the quadratic twist of $E$ by $d$. \end{proposition} Let $E_d$ be the quadratic twist of $E$ by $d$. In particular, suppose $E$ is defined by the Weierstrass equation \begin{equation}\label{eq:Weierstrass_having_2_torsion} y^2 = x^3 + Ax^2 + Bx, \end{equation} which has discriminant $\Delta = 2^4 B^2 ( A^2 - 4B)$. Then $E_d$ has the Weierstrass equation of the form \begin{equation}\label{eq:Weierstrass_quadratic_twist} y^2 = x^3 + Ad x^2 + Bd^2x. \end{equation} The discriminant of the above equation \eqref{eq:Weierstrass_quadratic_twist} is given by $\Delta_d = 16d^6 B^2(A^2 - 4B)$. \begin{proposition}\label{prop:canonical_isomorphism_of_torsion_groups} The 2-torsion subgroups $E[2]$ and $E_d[2]$ are canonically isomorphic as $\Gal(\overline{\mathbf{Q}}|\mathbf{Q})$-modules. Consequently, the Galois cohomology groups $H^\bullet (\mathbf{Q}, E[2])$ and $H^\bullet(\mathbf{Q}, E_d[2])$ are isomorphic. In particular, we identify $H^1(\mathbf{Q}, E[2]) = H^1(\mathbf{Q}, E_d[2])$ in the sequel. 
\end{proposition} \begin{proof} The Galois-equivariant isomorphism $E[2] \to E_d[2]$ is given by $(t,0) \mapsto (dt,0)$. \end{proof} Denote by $P$ (resp. $P_d$) the rational torsion point of order 2 in $E$ (resp. $E_d$) corresponding to $(0,0)$ in the equation \eqref{eq:Weierstrass_having_2_torsion} (resp. $(0,0)$ in the equation \eqref{eq:Weierstrass_quadratic_twist}). Let $E'$ (resp. $E'_d$) be the elliptic curve $E/\langle P \rangle$ (resp. $E_d / \langle P_d \rangle$) and let $\phi$ (resp. $\phi_d$) be the canonical quotient 2-isogeny $E \to E'$ (resp. $E_d \to E'_d$). \begin{proposition} There are canonical homomorphisms \begin{equation*} H^1(\mathbf{Q}, E[\phi]) \to H^1(\mathbf{Q}, E[2]), \qquad \text{and} \qquad H^1(\mathbf{Q}, E_d[\phi_d]) \to H^1(\mathbf{Q}, E_d[2]), \end{equation*} and they induce \begin{equation*} \Sel^\phi (E/\mathbf{Q}) \to \Sel^2 (E/\mathbf{Q}), \qquad \text{and} \qquad \Sel^{\phi_d} (E_d/\mathbf{Q}) \to \Sel^2 (E_d/\mathbf{Q}). \end{equation*} \end{proposition} \begin{proof} If we denote the unique dual rational 2-isogeny of $\phi$ by $\phi'$, then we have a canonical exact sequence \begin{equation}\label{eq:isogeny_SES} 0 \longrightarrow E[\phi] \longrightarrow E[2] \longrightarrow E'[\phi'] \longrightarrow 0. \end{equation} This defines a canonical map $H^1(\mathbf{Q}, E[\phi]) \to H^1(\mathbf{Q}, E[2])$ on cohomology groups, and it restricts to the map $\Sel^\phi (E/\mathbf{Q}) \to \Sel^2 (E/\mathbf{Q})$ of subgroups. For $E_d$ and $\phi_d$ the proof is \textit{mutatis mutandis} the same. \end{proof} \begin{proposition} There are canoncial isomorphisms $H^1(\mathbf{Q}, E[\phi]) \simeq \sqfr{\mathbf{Q}}$ and $H^1(\mathbf{Q}, E_d[\phi_d]) \simeq \sqfr{\mathbf{Q}}$. Moreover, the isomorphisms are compatible in the sense that the following diagram is commutative: \begin{equation*}\begin{gathered} \xymatrix{ & H^1(\mathbf{Q}, E[\phi]) \ar[dd] \ar[r] & H^1(\mathbf{Q},E[2]) \ar[dd]^= \\ \sqfr{\mathbf{Q}} \simeq H^1(\mathbf{Q},\mu_2) \ar[ur]^\sim \ar[dr]_\sim && \\ & H^1(\mathbf{Q}, E_d[\phi_d]) \ar[r] & H^1(\mathbf{Q},E_d[2]) \\ } \end{gathered}\end{equation*} where the vertical map in the middle is induced by the canonical isomorphism in the Proposition \ref{prop:canonical_isomorphism_of_torsion_groups}. \end{proposition} \begin{proof} Clearly the isomorphisms $\mu_2 \to E[\phi]$ and $\mu_2 \to E_d[\phi_d]$ are compatible in the sense the left triangle commutes. By Kummer theory we know $H^1(\mathbf{Q}, \mu_2) =\sqfr{\mathbf{Q}}$, whence the result follows. \end{proof} \begin{proposition}\label{prop:kernel_of_selmer} Let $G$ be the subgroup of $\sqfr{\mathbf{Q}}$ generated by the class of $A^2 - 4B$. Then $G$ is the kernel of the homomorphisms $H^1(\mathbf{Q}, E[\phi]) \to H^1(\mathbf{Q}, E[2])$ and $H^1(\mathbf{Q}, E_d[\phi_d]) \to H^1(\mathbf{Q}, E_d[2])$. Thus, \begin{equation*} \Ker \left( \Sel^\phi (E/\mathbf{Q}) \to \Sel^2 (E/\mathbf{Q}) \right) = G \cap \Sel^\phi(E/\mathbf{Q}) \subset \Sel^\phi (E/\mathbf{Q}). \end{equation*} Similarly, \begin{equation*} \Ker \left( \Sel^{\phi_d} (E_d/\mathbf{Q}) \to \Sel^2 (E_d/\mathbf{Q}) \right) = G \cap \Sel^{\phi_d}(E_d/\mathbf{Q}) \subset \Sel^\phi (E_d/\mathbf{Q}). \end{equation*} \end{proposition} \begin{proof} We only give a proof for $E$ and $\phi$. For $E_d$ and $\phi_d$, everything is the same under making certain notational change. 
From the short exact sequence \eqref{eq:isogeny_SES}, we have the long exact sequence of cohomology groups: \begin{equation*} 0 \to E(\mathbf{Q})[\phi] \to E(\mathbf{Q})[2] \to E'(\mathbf{Q})[\phi'] \xrightarrow{\eta} H^1(\mathbf{Q},E[\phi]) \to H^1(\mathbf{Q}, E[2]) \to H^1(\mathbf{Q},E'[\phi']) \to \cdots \end{equation*} Because we only consider those elliptic curves with $E(\mathbf{Q})[\phi] = E(\mathbf{Q})[2]$, the map $E(\mathbf{Q})[2] \to E'(\mathbf{Q})[\phi']$ is the zero map, and this again forces us that $\eta : E'(\mathbf{Q})[\phi'] \to H^1(\mathbf{Q},E[\phi])$ is injective. The image $\eta \left( E'(\mathbf{Q})[\phi'] \right)$ is the kernel of $H^1(\mathbf{Q}, E[\phi]) \to H^1(\mathbf{Q}, E[2])$. We claim that this kernel is equal to $G$. Write $E(\overline{\mathbf{Q}})[2] = \lbrace O, P, Q, P+Q \rbrace$, where $O$ is the identity of $E$ and $P \in E(\mathbf{Q})$, and similarly write $E'(\overline{\mathbf{Q}})[\phi'] = \lbrace O', T \rbrace$, where $O'$ is the identity of $E'$. Clearly $T \in E'(\mathbf{Q})$. Since $E(\overline{\mathbf{Q}})[2] \to E'(\overline{\mathbf{Q}})[\phi']$ is surjective but $E(\mathbf{Q})[2] \to E'(\mathbf{Q})[\phi']$ is the zero map, the point $Q$ is mapped onto $T$ under $E(\overline{\mathbf{Q}})[2] \to E'(\overline{\mathbf{Q}})[\phi']$. Then, $\eta(T) \in H^1(\mathbf{Q},E[\phi])$ is defined by the 1-cocyle \begin{equation*} \sigma \mapsto \sigma(Q) - Q = \begin{cases} P & \text{ if $\sigma(Q) = P+Q \neq Q$, } \\ 0 & \text{ if $\sigma(Q) = Q$.} \end{cases} \end{equation*} However, this 1-cocycle corresponds to the 1-cocycle $\sigma \mapsto \sigma(\sqrt{b})/\sqrt{b}$ defining an element $H^1(\mathbf{Q}, \mu_2)$, where $b=A^2 - 4B$, since in the Weierstrass equation \eqref{eq:Weierstrass_having_2_torsion}, $Q$ corresponds to the point $\displaystyle \left( \frac{-A \pm \sqrt{A^2 - 4B}}{2}, 0 \right)$ and thus $\sigma(Q) = Q$ if and only if $\sigma \left( \sqrt{A^2 - 4B} \right) = \sqrt{A^2 - 4B}$. Clearly the 1-cocycle $\sigma \mapsto \dfrac{\sigma(\sqrt{A^2-4B})}{\sqrt{A^2 - 4B}}$ defining an element $H^1(\mathbf{Q}, \mu_2)$ corresponds to $A^2 - 4B$ in $\sqfr{\mathbf{Q}}$. \end{proof} Recall (Proposition \ref{prop:str_of_Phi}) that the everywhere-local norm group $\Phi$ is the intersection of two Selmer groups $\Sel^2(E/\mathbf{Q})$ and $\Sel^2(E_d/\mathbf{Q})$ inside $H^1(\mathbf{Q}, E[2]) = H^1(\mathbf{Q}, E_d[2])$. In order to identify elements in the intersection, we need to find $b \in \sqfr{\mathbf{Q}}$ such that $b \in \Sel^\phi(E/\mathbf{Q}) \cap \Sel^{\phi_d}(E/\mathbf{Q})$ by descent arguments (cf. \cite{AEC}, chapter X). In order to ensure this is not the identity element in $\Phi$, we should check $b \not\in G$. This will be done when we deal with $E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/4\mathbf{Z}$ or $\mathbf{Z}/2\mathbf{Z}$. \subsection{Isogeny invariance of the Gross--Zagier conjecture} Let $E$ and $E'$ be isogenous elliptic curves defined over $\mathbf{Q}$, and $K$ be an imaginary quadratic field satisfying the Heegner hypothesis. We consider those curves with fixed modular parametrisations $\pi: X_0(N) \to E$ and $\pi': X_0(N) \to E'$. \begin{proposition}\label{prop:isoginv} Let $\theta: E \to E'$ be a rational isogeny. \begin{enumerate} \item If the strong Gross--Zagier conjecture (Conjecture \ref{conj:GZ_strong_conjecture}) is true for $E$ then it is also true for $E'$. \item Suppose that $\theta$ respects modular parametrisations of $E$ and $E'$, i.e., $\pi' = \theta \circ \pi$. 
Then we have \begin{equation}\label{eq:isoginv} \frac{M^2 \cdot C^2 \cdot \# \Sh(E/K)}{[E(K):\mathbf{Z} P_K]^2} = \frac{M'^2 \cdot C'^2 \cdot \# \Sh(E'/K)}{[E'(K):\mathbf{Z} P_K']^2}. \end{equation} \item Let $p$ be a prime. If \begin{enumerate}[label=(\roman*)] \item $\ord_p \# E(K)_\mathrm{tors} = \ord_p \# E(\mathbf{Q})_\mathrm{tors}$, and \item $\ord_p \# E(\mathbf{Q})_\mathrm{tors} \le \ord_p \left( u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2} \right)$, \end{enumerate} then \begin{equation*} \ord_p \# E'(\mathbf{Q})_\mathrm{tors} \le \ord_p \left( u_K \cdot C' \cdot M' \cdot \left( \# \Sh(E'/K) \right)^{1/2} \right). \end{equation*} In particular, if $E(K)_\mathrm{tors} = E(\mathbf{Q})_\mathrm{tors}$, and if the weak Gross--Zagier conjecture (Conjecture \ref{conj:GZ_conjecture}) for $E$ is true, then it is also true for $E'$. \end{enumerate} \end{proposition} \begin{proof} (a) Isogenous curves $E$ and $E'$ have the same $L$-functions and the same BSD formulae, i.e., $L(E/K, s) = L(E'/K, s)$ and $\mathrm{BSD}_{E/K} = \mathrm{BSD}_{E'/K}$ (cf. Conjecture \ref{conj:BSD}). The latter is a theorem of Cassels \cite{Ca1}. As the strong Gross--Zagier conjecture is obtained by simply equating these formulae, it is clearly isogeny invariant. (b) Let $P_K'$ be the Heegner point for $E'$ defined by $P_K' = \theta (P_K)$. Since $L'(E/K, s) = L'(E'/K,s)$, we have \begin{equation*} \frac{\| \omega \|^2 \cdot \hat{h}(P_K)}{\| \omega' \|^2 \cdot \hat{h}(P_K')} = \frac{M^2}{M'^2}. \end{equation*} Similarly, from $\mathrm{BSD}_{E/K} = \mathrm{BSD}_{E'/K}$, we get \begin{equation*} \frac{\| \omega \|^2 \cdot \hat{h}(P_K)}{\| \omega' \|^2 \cdot \hat{h}(P_K')} = \frac{\# \Sh(E'/K) \cdot C'^2 \cdot [E(K):\mathbf{Z} P_K]^2}{\# \Sh(E/K) \cdot C^2 \cdot [E'(K):\mathbf{Z} P_K']^2}. \end{equation*} Equating, we obtain the equation \ref{eq:isoginv}. (c) Let $P$ (resp. $P'$) be a generator of the group $E(K)/E(K)_\mathrm{tors}$ (resp. $E'(K)/E'(K)_\mathrm{tors}$), and let $P_K = \nu P$ (resp. $P_K' = \nu' P'$). As $P_K' = \theta(P_K) = \nu \theta(P)$, the index $\nu'$ is divisible by $\nu$. The assumption (i) $\ord_p \# E(K)_\mathrm{tors} = \ord_p \# E(\mathbf{Q})_\mathrm{tors}$ implies that $\ord_p [E(K)_\mathrm{tors}: E(\mathbf{Q})_\mathrm{tors}] = 0$. By the equation \ref{eq:isoginv}, we have \begin{equation*} \frac{u_K^2 \cdot M^2 \cdot C^2 \cdot \# \Sh(E/K)}{\left( \# E(\mathbf{Q})_\mathrm{tors} \right)^2 \cdot [E(K)_\mathrm{tors}:E(\mathbf{Q})_\mathrm{tors}]^2} = \frac{u_K^2 \cdot M'^2 \cdot C'^2 \cdot \# \Sh(E'/K)}{\left( \frac{\nu'}{\nu} \right)^2 \cdot \left( \# E'(\mathbf{Q})_\mathrm{tors} \right)^2 \cdot [E'(K)_\mathrm{tors}:E'(\mathbf{Q})_\mathrm{tors}]^2}, \end{equation*} and by the assumption (ii) the left hand side of the above equation is a $p$-adic integer. Thus, \begin{align*} \ord_p \left( u_K \cdot C' \cdot M' \cdot \left( \# \Sh(E'/K) \right)^{1/2} \right) & \ge \ord_p \left( \frac{\nu'}{\nu} \cdot \left( \# E'(\mathbf{Q})_\mathrm{tors} \right) \cdot [E'(K)_\mathrm{tors}:E'(\mathbf{Q})_\mathrm{tors}] \right) \\ & \ge \ord_p \# E'(\mathbf{Q})_\mathrm{tors}. \end{align*} \end{proof} \begin{remark} By \cite{GJT} Corollary 4 or \cite{Naj}, Theorem 2, for a given elliptic curve $E$ defined over $\mathbf{Q}$, there are at most 4 quadratic fields $K$ such that $E(K)_\mathrm{tors} \neq E(\mathbf{Q})_\mathrm{tors}$. 
\end{remark} \section{$E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$} \label{section:torgp_type_2_4} In this section, we prove the Main Theorem for the cases when $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$.
\begin{theorem}\label{th:torgp_type_2_4} Suppose that $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$. Then the order $8 = \# E(\QQ)_\mathrm{tors}$ divides the Tamagawa number $C$ of $E$, except for the curve `15a3', in which case $C \cdot M = 8$. \end{theorem}
From \cite{Ku}, Table 3, such elliptic curves can be parametrised by one parameter $\lambda \in \mathbf{Q}$ by \begin{equation}\label{eq:Weierstrass_basic_for_type_2_4} y^2 + xy - \lambda y = x^3 - \lambda x^2, \end{equation} where $\displaystyle \lambda = \left( \frac{\alpha}{\beta} \right)^2 - \frac{1}{16} = \frac{16\alpha^2 - \beta^2}{16\beta^2}$, with positive integers $\alpha, \beta$ having no common prime divisor, and $\alpha/\beta \neq 1/4$. The discriminant of the equation is $\Delta = \lambda^4(1+16\lambda) \neq 0$. Note that since $\alpha$ and $\beta$ are relatively prime, the only possible common prime divisor of $16\alpha^2 - \beta^2$ and $16\beta^2$ is 2.
\begin{proposition} \label{prop:something_good_happened_for_eq_lambda} Let $p$ be a prime. \begin{enumerate} \item If $m:=\ord_p \lambda >0$, then the reduction of $E$ modulo $p$ is (split) multiplicative of type $\mathrm{I}_{4m}$. Consequently the Tamagawa number of $E$ at $p$ is $C_p = 4m$. \item Suppose that $p \neq 2$. If $m := \ord_p \lambda < 0$, then $m$ is always even, and the minimal Weierstrass equation at $p$ is given by \begin{equation}\label{eq:m-even} y^2 + p^z xy -up^z y = x^3 -ux^2, \end{equation} where $u \in \mathbf{Z}_p^\times$ satisfies $\lambda = up^m$ in $\mathbf{Z}_p$, and where $z = -m/2$ is a positive integer. The reduction type of the equation modulo $p$ is $\mathrm{I}_n$ with $n = 2z$, whence $C_p = 2z$. \end{enumerate} \end{proposition}
\begin{proof} (a) This can be shown by directly applying Tate's algorithm (see \cite{advAEC}, \S IV.9) to the Weierstrass equation \eqref{eq:Weierstrass_basic_for_type_2_4}. (b) Since $\gcd \left( 16\alpha^2 - \beta^2, 16\beta^2 \right)$ is a power of $2$, if $m = \ord_p \lambda = \ord_p \left( (16\alpha^2 - \beta^2) / 16\beta^2 \right) <0$ then the exponent $m$ is always even. Changing the Weierstrass equation (cf. \cite{Lo}, proof of Proposition 2.4), we get the equation \eqref{eq:m-even}. We use Tate's algorithm again for this equation to obtain the minimality and reduction type. \end{proof}
Let \begin{equation*} S = \lbr p \text{ primes}: \ord_p \lambda > 0 \rbr, \qquad T = \lbr p \text{ primes}: p \neq 2,\, \ord_p \lambda < 0 \rbr. \end{equation*} Proposition \ref{prop:something_good_happened_for_eq_lambda} says that Theorem \ref{th:torgp_type_2_4} is true when (i) $\# S \ge 2$; or (ii) $\# S = 1$ and $\# T \ge 1$. Thus the following proposition shows Theorem \ref{th:torgp_type_2_4}.
\begin{proposition}\label{prop:relations_of_S_and_T} With finitely many possible exceptions, we have $\# S \ge 1$; moreover, if $T = \emptyset$, then $\# S \ge 2$. The exceptions are exactly the following curves: `15a1', `15a3', `21a1', `24a1', `48a3', `120a2', `240a3', `240d5', and `336e4'. In any case, including these exceptions, we have $8 \mid C \cdot M$. \end{proposition}
\begin{proof} Write $\beta = 2^n \beta'$ with $n \ge 0$ and $\beta'$ odd.
The condition $T = \emptyset$ is equivalent to the condition $\beta' = 1$. We divide the proof according to the value of $n$.
Suppose $n=0$. In this case $16\alpha^2 - \beta^2$ is odd and so $\gcd \left( 16\alpha^2 - \beta^2, 16\beta^2 \right) =1$. Suppose that there is no prime dividing $16\alpha^2 - \beta^2$. We then have $16\alpha^2 - \beta^2 = \pm 1$. This is possible only if $\alpha = 0$, a contradiction. For the second statement, assume $\beta = 1$. In this case, we get $\lambda = (16\alpha^2 - 1)/16$. If there is only one odd prime $p$ dividing $16\alpha^2 - 1 = (4\alpha - 1)(4\alpha + 1)$, then, since these two factors are coprime, we must have $4\alpha - 1 =1$, a contradiction ($\alpha \in \mathbf{Z}_{>0}$).
Suppose $n=1$. We have $\displaystyle \lambda = \frac{16\alpha^2 - 4\beta'^2}{16\cdot 4 \beta'^2} = \frac{4\alpha^2 - \beta'^2}{16\beta'^2}$, and $\gcd \left( 4\alpha^2 - \beta'^2, 16\beta'^2 \right) =1$. If there were no odd prime dividing $4\alpha^2 - \beta'^2$, we would have $4\alpha^2 - \beta'^2 = \pm 1$, whence $\alpha = 0$, a contradiction. If $\beta=2$ (equivalently $\beta' = 1$), and if there were only one prime dividing $4\alpha^2 - \beta'^2$, then either one of the relations $2\alpha - 1 =1$ or $2\alpha + 1 = -1$ would hold. Thus we must have $\alpha = 1$. In this case we get the curve `48a3', having $C_2 = C_3 = 4$.
Suppose $n \ge 5$. In this case we can take another Weierstrass equation (cf. Equation \eqref{eq:m-even}) of $E$ of the following form: \begin{equation}\label{eq:at2} y^2 + 2^n xy - 2^n u y = x^3 - ux^2, \end{equation} where $u = \alpha^2 - 2^{2n-4}$. This equation has discriminant $\Delta = (2^{2n} + 16u) 2^{2n} u^4$ and $c_4 = 2^{2n + 4} u + 16 u^2 + 2^{4n}$, so $\ord_2(\Delta) = 2n+4$ and $\ord_2(c_4) = 4$. Moreover, by \cite{AEC}, Proposition VII.5.5, since its $j$-invariant has order $8-2n < 0$, $E$ has potentially multiplicative reduction modulo $2$. If this equation \eqref{eq:at2} is minimal at the prime 2, then the curve has additive reduction modulo 2 ($\ord_2(c_4)>0$). Tate's algorithm says that $E$ has reduction of type $\mathrm{I}_k^\ast$ for some $k$, with Tamagawa number 2 or 4. Suppose that the equation \eqref{eq:at2} is not minimal modulo $2$. Then we can transform \eqref{eq:at2} into a minimal model modulo $2$, which has discriminant of order $2n+4 - 12 = 2n - 8$ at 2 and $c_4$ of order 0. Since the order of the minimal discriminant is even and $>0$, and since $E$ has multiplicative reduction ($\ord_2 c_4 = 0$), we see that $C_2$ is even by Tate's algorithm. As $C_2$ is even and $4 \mid C_p$ for some odd $p \in S$, the proof of this case is complete.
The remaining cases ($n = 2, 3$, and $4$) can be shown similarly. \end{proof}
\section{$E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$} \label{section:torgp_type_2_2} In this section, we prove the Main Theorem for the cases when $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$.
\begin{theorem}\label{th:torgp_type_2_2} Suppose that $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$. Then the order $4 = \# E(\QQ)_\mathrm{tors}$ divides the Tamagawa number $C$ of $E$, except for two curves `17a2' and `32a2'. For these two cases we have $4 = C \cdot M$. \end{theorem}
Following \cite{Ku}, we can take a Weierstrass model of the form \begin{equation}\label{eq:Weierstrass_basic_for_type_2_2} y^2 = x(x+a)(x+b), \end{equation} where $a, b \in \mathbf{Z}$ are nonzero with $a \neq b$. Note that $a$ and $b$ are in general not relatively prime.
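The claims in this section are also easy to explore numerically with Sage \cite{sagemath}. The following sketch is an illustration only and is not used in the proofs: it builds the model \eqref{eq:Weierstrass_basic_for_type_2_2} from a sample pair $(a,b)$ chosen for illustration, lists the local reduction data at the bad primes, and looks up the two exceptional curves of Theorem \ref{th:torgp_type_2_2} (for which, as shown below, $C = M = 2$); the last line assumes a Sage version providing \texttt{manin\_constant()}.
\begin{verbatim}
# Sage sketch (illustration only): y^2 = x(x+a)(x+b) from a pair (a, b).
def curve_2x2(a, b):
    return EllipticCurve([0, a + b, 0, a*b, 0])   # y^2 = x^3 + (a+b)x^2 + ab*x

E = curve_2x2(1, -1)                 # sample pair; full 2-torsion is rational
print(E.torsion_subgroup().invariants())
print([(p, E.kodaira_symbol(p), E.tamagawa_number(p))
       for p, _ in factor(E.conductor())])

# The two exceptional curves of the theorem; the text asserts C = M = 2 for both.
for lab in ['17a2', '32a2']:
    F = EllipticCurve(lab)
    print(lab, F.tamagawa_product(), F.manin_constant())
\end{verbatim}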
The discriminant of the equation \eqref{eq:Weierstrass_basic_for_type_2_2} is $\Delta = 16 (a - b)^2 a^2 b^2$ and $c_4 = 16a^2 - 16 ab + 16 b^2$. Let $c = a-b \neq 0$. If there is a prime $p$ dividing both $a$ and $b$, then by changing the equation via $[p, 0, 0, 0]$ if necessary, we assume $\min \left( \ord_p a, \ord_p b \right) = 1$. We first investigate the Tamagawa number $C_p$ for primes $p$ dividing $abc$.
\begin{proposition}\label{prop:prime_dividing_a_xor_b} Let $p$ be a prime. Assume that either (i) $p \mid a$ and $p \nmid bc$; or (ii) $p \mid b$ and $p \nmid ac$. Then we have the following. \begin{enumerate} \item If $p$ is odd, then $E$ has reduction of type $\mathrm{I}_{\ord_p \Delta} = \mathrm{I}_{2\ord_p(a)}$ modulo $p$, with even Tamagawa number at $p$. \item Suppose that $p = 2$. If $m:=\ord_2 a = 4$ and if $b \equiv 1 \pmod{4}$, then $E$ has good reduction modulo $2$, whence $C_2 = 1$. Otherwise, $C_2$ is even. \end{enumerate} \end{proposition}
\begin{proof} We only give the proof for the case (i). By the symmetry of the roles of $a$ and $b$ in the equation, the case (ii) follows immediately. (a) This is immediate from Tate's algorithm. (b) Suppose that $2 \mid a$ and $2 \nmid bc$. We do a case-by-case study. To help the reader reconstruct the proofs of the results in the following table, we remark that we mostly apply Tate's algorithm to the Weierstrass equation \eqref{eq:Weierstrass_basic_for_type_2_2}, while for the case $m=4$ with $b \equiv 1 \pmod{4}$, and for $m \ge 5$, we apply the algorithm to another Weierstrass equation $\displaystyle y^2 + xy = x^3 + \frac{a+b-1}{4}x^2 + \frac{ab}{2^4} x$.
\begin{center} \begin{tabular}{|c|c|c|c|} \hline $m$ & $b \mod{4}$ & Reduction Type of $E$ at $p=2$ & $C_2$ \\ \hline\hline $1$ & $1$ or $3$ & $\mathrm{III}$ & 2 \\ \hline \multirow{2}{*}{$2$} & $1$ & $\mathrm{I}_n^\ast$ & 2 or 4 \\ \cline{2-4} & $3$ & $\mathrm{I}_0^\ast$ & 2 \\ \hline \multirow{2}{*}{$3$} & $1$ & $\mathrm{III}^\ast$ & 2 \\ \cline{2-4} & $3$ & $\mathrm{I}_n^\ast$ & 2 or 4 \\ \hline \multirow{2}{*}{$4$} & $1$ & $\mathrm{I}_0$ (good) & 1 \\ \cline{2-4} & $3$ & $\mathrm{I}_n^\ast$ & 2 or 4 \\ \hline $ \ge 5$ & $1$ or $3$ & $\mathrm{I}_{2m - 8}$ & even \\ \hline \end{tabular} \end{center} \end{proof}
\begin{proposition}\label{prop:prime_dividing_c} Let $p$ be a prime such that $p \mid c$ and $p \nmid ab$. \begin{enumerate} \item If $p$ is odd, then $E$ has reduction of type $\mathrm{I}_{\ord_p \Delta} = \mathrm{I}_{2\ord_p(c)}$ modulo $p$, with even Tamagawa number at $p$. \item Suppose that $p = 2$. If $m:=\ord_2 c = 4$ and if $a \equiv b \equiv 3 \pmod{4}$, then $E$ has good reduction modulo $2$, whence $C_2 = 1$. Otherwise, $C_2$ is even. \end{enumerate} \end{proposition}
\begin{proof} We make a change of variables via $[1,-a,0,0]$ to get another equation \begin{equation}\label{eq:for_looking_c} y^2 = x^3 + (-2a+b)x^2 + a(a-b)x. \end{equation} (a) Immediate from Tate's algorithm applied to equation \eqref{eq:for_looking_c}. (b) Let $p=2$. As in the previous proposition, the results of Tate's algorithm applied to the equation \eqref{eq:for_looking_c} are summarised as follows. In particular, when dealing with the cases $a \equiv b \equiv 3 \pmod{4}$ and $m \ge 4$, we use the equation $\displaystyle y^2 + xy = x^3 + \frac{-2c -b -1}{4} x^2 + \frac{ac}{16} x$ instead.
\begin{center} \begin{tabular}{|c|c|c|c|} \hline $m$ & $a$ and $b \mod{4}$ & Reduction Type of $E$ at $p=2$ & $C_2$ \\ \hline \hline 1 & any & $\mathrm{III}$ & 2 \\ \hline $\ge 2$ & $a \equiv 1 \pmod{4}$ or $b \equiv 1 \pmod{4}$ (or both) & $\mathrm{I}_k^\ast$ for some $k$ & 2 or 4 \\ \hline $2$ or $3$ & \multirow{3}{*}{$a \equiv b \equiv 3 \pmod{4}$} & $\mathrm{III}^\ast$ & 2 \\ \cline{1-1} \cline{3-4} $4$ & & $\mathrm{I}_0$ (good) & 1 \\ \cline{1-1} \cline{3-4} $\ge 5$ & & $\mathrm{I}_{2m-8}$ & even \\ \hline \end{tabular} \end{center} \end{proof}
\begin{proposition}\label{prop:commonprime_dividing_a_and_b} Let $p$ be a prime dividing two of $a$, $b$, or $c$. Then clearly it divides the third. By changing variables in the equation \eqref{eq:Weierstrass_basic_for_type_2_2} via $[p,0,0,0]$ if necessary, we assume $\min \left( \ord_p a, \ord_p b \right) = 1$. Then $E$ has reduction of type $\mathrm{I}_k^\ast$ for some $k$, with even Tamagawa number. \end{proposition}
\begin{proof} Write $m := \ord_p a$ and $n := \ord_p b$. If $m \neq n$, then we may assume $m > n = 1$ without any loss of generality. By Tate's algorithm, in this case $E$ has reduction of type $\mathrm{I}_k^\ast$ with Tamagawa number 2 or 4. If $m = n = 1$, then we can write $a= pa'$ and $b=pb'$ with $(a',p) = (b',p) = 1$. Hence, \begin{itemize} \item if $a' \not\equiv b' \pmod{p}$, then $E$ has reduction of type $\mathrm{I}_0^\ast$ modulo $p$ with Tamagawa number 4; \item if $a' \equiv b' \pmod{p}$, then $E$ has reduction of type $\mathrm{I}_k^\ast$ modulo $p$ with Tamagawa number 2 or 4. \end{itemize} \end{proof}
Recall that $E$ is an elliptic curve defined by the equation $y^2 = x(x+a)(x+b)$ with discriminant $\Delta = 16a^2 b^2 c^2 \neq 0$, where $a, b \in \mathbf{Z}$ and $c:=a-b$. We also have assumed that $\min \left( \ord_p a, \ord_p b \right) \le 1$ for all primes $p$. Let \begin{equation*} S : = \lbr p \text{ primes}: \ord_p a >0,\, \ord_p b>0 \rbr. \end{equation*} If $\# S \ge 2$, then by Proposition \ref{prop:commonprime_dividing_a_and_b} the Tamagawa number $C$ of $E$ is divisible by 4. Thus the following proposition shows Theorem \ref{th:torgp_type_2_2}.
\begin{proposition} Suppose that $\# S \le 1$. Then $4 \mid C$ with only two exceptions: `17a2' and `32a2'. But in both exceptions, we have $C = M = 2$. \end{proposition} \begin{proof} The proof is similar to that of Proposition \ref{prop:relations_of_S_and_T}. \end{proof}
\section{$E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/4\mathbf{Z}$} \label{section:torgp_type_4} \begin{theorem}\label{th:torgp_type_4} If $E$ is an elliptic curve defined over $\mathbf{Q}$, having rational torsion subgroup $E(\QQ)_\mathrm{tors}$ isomorphic to $\mathbf{Z}/4\mathbf{Z}$, then the order $4 = \# E(\QQ)_\mathrm{tors}$ divides $u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2}$. \end{theorem}
\subsection{Tamagawa numbers} In order to prove Theorem \ref{th:torgp_type_4}, we first consider Tamagawa numbers of $E$. From \cite{Ku}, Table 3, such elliptic curves can be parametrised by one parameter $\lambda$ by \begin{equation*} y^2 + xy - \lambda y = x^3 - \lambda x^2, \end{equation*} where the discriminant of the equation is $\lambda^4(1+16\lambda) \neq 0$. This is the same as in section \ref{section:torgp_type_2_4}, but without further restriction on $\lambda$. Let $\lambda = \alpha / \beta$, with $\alpha, \beta \in \mathbf{Z}$ and $\gcd(\alpha, \beta)=1$. By Proposition \ref{prop:something_good_happened_for_eq_lambda} (a), we may assume $\alpha = 1$.
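As an aside, Proposition \ref{prop:something_good_happened_for_eq_lambda} (a) is easy to check numerically with Sage \cite{sagemath}. The sketch below is an illustration only, for the sample value $\lambda = 3$ (so $m = \ord_3 \lambda = 1$), where the proposition predicts reduction type $\mathrm{I}_4$ and $C_3 = 4$.
\begin{verbatim}
# Sage sketch (illustration only): the curve y^2 + xy - lam*y = x^3 - lam*x^2
# at a prime p with m = ord_p(lam) > 0.
lam = 3                                    # sample value, ord_3(lam) = 1
E = EllipticCurve([1, -lam, -lam, 0, 0])   # a1 = 1, a2 = a3 = -lam, a4 = a6 = 0
p = 3
print(E.kodaira_symbol(p), E.tamagawa_number(p))   # the proposition predicts I4 and 4
\end{verbatim}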
So we begin with the following Weierstrass equation \begin{equation}\label{eq:type_4_only_beta} y^2 + \beta xy - \beta^2 y = x^3 - \beta x^2, \end{equation} with $\beta \in \mathbf{Z}$. Note that this curve has discriminant $\Delta = (16+\beta)\beta^7$ and $c_4 = (16+16\beta+\beta^2)\beta^2$. If $\beta = \pm 1$, then we have either `15a8' or `17a4', both of which have $M=4$. So we may assume that there is at least one prime dividing $\beta$.
Let $p$ be a prime dividing $\beta$, and let $m : = \ord_p \beta > 0$. Write $\beta = p^m u$, for some $u \in \mathbf{Z}$ with $\gcd(u,p)=1$. Using Tate's algorithm applied to the Weierstrass equations $\displaystyle y^2 + p^{z+1} xy - p^{z+2}u^{-1}y = x^3 -pu^{-1}x^2$ (when $m = 2z+1$ is odd) or $\displaystyle y^2 + p^{z} xy - p^{z}u^{-1}y = x^3 - u^{-1}x^2$ (when $m=2z$ is even), we can determine the reduction types and Tamagawa numbers of $E$ at primes $p \mid \beta$.
\begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $m$ & $p$ & additional conditions & Reduction Type of $E$ at $p$ & $C_p$ \\ \hline \hline $m = 2z + 1$ for $z \in \mathbf{Z}_{\ge 0}$ & any & & $\mathrm{I}_1^\ast$ & 4 \\ \hline \multirow{3}{*}{$m = 2z$ for $z \in \mathbf{Z}_{>0}$} & $p \neq 2$ & & $\mathrm{I}_{2z}$ & even \\ \cline{2-5} & \multirow{2}{*}{$p = 2$} & $u \equiv 3 \pmod{4}$ and $m = 8$ & $\mathrm{I}_0$ (good) & 1 \\ \cline{3-5} & & otherwise & bad & even \\ \hline \end{tabular} \end{center}
So, in the sequel, we assume \begin{itemize} \item $\ord_p \beta$ is even for every prime $p$; \item the number of odd primes dividing $\beta$ is $\le 1$. \end{itemize} Moreover, if $\ell$ is an odd prime dividing $\beta + 16$, then $E$ has reduction of type $\mathrm{I}_{\ord_\ell (\beta+16)}$ at $\ell$.\footnote{This can also be shown by Tate's algorithm, applied to the equation $\displaystyle y^2 + \beta x y -(\beta+16)^2 y = x^3 - (\beta+ 96) x^2 + 192(\beta+16) x - 128(\beta+24)(\beta+16)$ for $E$.} We furthermore assume throughout this section that \begin{itemize} \item if $\ell$ is an odd prime dividing $\beta + 16$, then $\ord_\ell \left( \beta + 16 \right)$ is odd. \end{itemize}
Suppose that there is no odd prime $p$ dividing $\beta$, i.e., $\beta = \pm 2^m$ for some positive integer $m$. As we can see in the above table, in order to avoid $4 \mid C$, we may assume $m = 2z$ is even. Applying Tate's algorithm to the Weierstrass equation \eqref{eq:type_4_only_beta}, we have the following results.
\begin{center} \begin{tabular}{|c|c|c|c|} \hline $\beta$ & Curve & Tamagawa Number & Manin Constant \\ \hline \hline $2^2$ & `40a3' & $C_2 \cdot C_5 = 2 \cdot 1$ & 2 \\ \hline $2^4$ & `32a4' & $C_2 = 2$ & 2 \\ \hline $2^{2z}$ with $z \ge 3$ & & $C_2=4$ & \\ \hline $-2^2$ & `24a4' & $C_2 \cdot C_3 = 2 \cdot 1$ & 2 \\ \hline $-2^4$ & singular curve & & \\ \hline $-2^6$ & `24a3' & $C_2 \cdot C_3 = 2 \cdot 1$ & 1 \\ \hline $-2^8$ & `15a7' & $C_3 \cdot C_5 = 1 \cdot 1$ & 2 \\ \hline $-2^{2z}$ with $z \ge 5$ even & & $C_2 = 2(z-4)$ & \\ \hline $-2^{2z}$ with $z \ge 5$ odd & & $C_2 = 2(z-4)$ & \\ \hline \end{tabular} \end{center}
So when $|\beta|$ is a power of 2, we only need to deal with the cases $\beta = -2^{2z}$ with (i) $z = 4$ or (ii) $z \ge 3$ being odd.
\subsection{$\left( \# \Sh(E/K) \right)^{1/2}$} In this subsection, we shall see $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$ for the various cases remaining after the Tamagawa number considerations above. Our main job is to show $\sum i_\ell + \dim \Phi \ge 4$ (notations from subsection \ref{subsection:Kramer}).
Then, \begin{equation*} \boxed{\sum i_\ell + \dim \Phi \ge 4} \Longrightarrow \boxed{ \dim_{\mathbf{F}_2} \Sh(E/K)[2] \ge 1} \Longrightarrow \boxed{2 \mid \left( \# \Sh(E/K) \right)^{1/2}}. \end{equation*} The first implication follows from Kramer's theorem (see subsection \ref{subsection:Kramer}), and the last implication is due to Kolyvagin's theorem \cite{Kol}. From the above subsection, we only need to deal with the cases when $\beta$ has at most one odd prime divisor. First, we consider the case where $\beta$ is actually a power of an odd prime. \begin{proposition} Suppose that $\beta = p^m$ for some $m >0$. By the previous subsection, we assume \begin{itemize} \item $m = 2z$ for some positive integer $z$; and \item for all odd prime $\ell$ dividing $\beta + 16$, $\ord_\ell \Delta_\mathrm{min} = \ord_\ell \left( \beta + 16 \right)$ is odd. \end{itemize} Then we have $\dim_{\mathbf{F}_2} \Sh(E/K)[2] \ge 1$, i.e., Theorem \ref{th:torgp_type_4} is true, except for a family of curves defined by the equation \begin{equation*} y^2 + p^{z} xy - p^{z} y = x^3 - x^2, \end{equation*} with $p^{2z} + 16 = \ell^k$ being prime powers. \end{proposition} \begin{remark} For the exceptional family, Theorem \ref{th:torgp_type_4} is also true. This will be shown in the following subsection \ref{subsection:exceptionalZmod4}. \end{remark} \begin{proof} We begin with the following equation: $y^2 + p^{2z} xy - p^{4z} y = x^3 - p^{2z} x^2.$ By a change of variables via $[(1/2)p^{z},0,0,0]$, we get $y^2 + 2p^{z} xy - 8p^{z} y = x^3 - 4x^2.$ Making another change of variables via $[1,4,-p^{z},0]$, we get $y^2 = x^3 + (p^{2z}+8)x^2 + 16x.$ The last equation has discriminant $\Delta = 2^{12}(p^{2z} + 16)p^{2z}$ and $c_4 = 16p^{4z} + 256p^{2z} + 256$. Note that the minimal discriminant of $E$ is given by $\Delta_\mathrm{min} = (p^{2z} + 16)p^{2z}$; in particular, $E$ has good reduction modulo $2$. Let $\phi$ be the isogeny $E \to E':= E/E(\mathbf{Q})[2]$. (Note that $E(\mathbf{Q})[2] \simeq \mathbf{Z}/2\mathbf{Z}$.) Following \cite{Go}, we compute the Selmer group $\Sel^\phi (E/\mathbf{Q})$. For each prime $\ell$ (including $\infty$), we denote by $\delta_\ell$ the map $E'(\mathbf{Q}_\ell)/ \phi \left( E(\mathbf{Q}_\ell) \right) \to H^1(\mathbf{Q}_\ell, E[\phi])$. Since $\Sel^\phi(E/\mathbf{Q}) \subset H^1(\mathbf{Q}, E[\phi]) \simeq \sqfr{\mathbf{Q}}$, the elements of $\Sel^\phi(E/\mathbf{Q})$ are those classes of $b \in \mathbf{Q}^\times$ such that their restrictions $b \in H^1(\mathbf{Q}_\ell, E[\phi]) \simeq \sqfr{\mathbf{Q}_\ell}$ are contained in the image $\Ima \delta_\ell$. So by considering the images $\Ima \delta_\ell$, we can figure out which classes are in the Selmer group. For more details of this paragraph, see subsection \ref{subsection:Kramer}. These local images are given as follows. \begin{itemize} \item $\Ima\delta_\infty = \lbrace 1 \rbrace$. \item $\Ima\delta_\ell = \mathbf{Z}_\ell^\times \mathbf{Q}_\ell^{\times 2} / \mathbf{Q}_\ell^{\times 2}$ for odd primes $\ell \nmid \Delta$. \item $\Ima\delta _\ell = \sqfr{\mathbf{Q}_\ell}$ for odd prime $\ell \mid \Delta$, and $\ell \neq p$. \item $\Ima \delta_p = \begin{cases} \sqfr{\mathbf{Q}_p} & \text{ if } p \equiv 1 \pmod{4},\\ \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2} & \text{ if } p \equiv 3 \pmod{4}. \end{cases}$ \item $\Ima\delta_2 = \lbrace 1, 5 \rbrace \subset \sqfr{\mathbf{Q}_2}$. \end{itemize} Here are some remarks on the odd primes dividing $\Delta$. 
Since $p^{2z} + 16$ is a sum of two squares, by the classical theorem on sums of two squares, if $\ell \equiv 3 \pmod{4}$ divides $\left( p^{2z} + 16 \right)$, then $\ord_\ell \left( p^{2z} + 16 \right)$ must be even. However, we assumed that the exponent $\ord_\ell \left( p^{2z} + 16 \right)$ is always odd. Hence any prime divisor $\ell$ of $\left( p^{2z} + 16 \right)$ must satisfy $\ell \equiv 1 \pmod{4}$.
Let $d$ be a negative, squarefree integer. We now compute the sum of local norm indices $\sum i_\ell$. Note that $i_\infty = 1$. After excluding obvious cases giving $\sum i_\ell \ge 4$, we have the following four cases: \begin{itemize} \item $d = -2$; \item $d = -q$ for an odd prime $q$; \item $d = -2q$ for an odd prime $q$; \item $d = -qq'$ for odd primes $q$, $q'$. \end{itemize}
Suppose first that $d = -2$. As $\left( \Delta_\mathrm{min}, d \right)_{\mathbf{Q}_2} = \left( (p^{2z} + 16)p^{2z}, -2 \right)_{\mathbf{Q}_2} = \left( 1, -2 \right)_{\mathbf{Q}_2} = 1$, we have $\sum i_\ell = 3$. Now we compute the Selmer group $\Sel^{\phi_d} (E_d/\mathbf{Q})$, where $E_d$ is the quadratic twist of $E$ by $d$, and $\phi_d: E_d \to E_d'$ is the corresponding 2-cyclic isogeny. We denote by $\delta^d_\ell$ the corresponding homomorphism $E_d'(\mathbf{Q}_\ell) / \phi_d \left( E_d(\mathbf{Q}_\ell) \right) \to H^1(\mathbf{Q}_\ell, E_d[\phi_d])$. The local images $\Ima \delta^d_\ell$ are given as follows. \begin{itemize} \item $\Ima\delta_\infty^d = \sqfr{\mathbf{R}}$. \item $\Ima\delta_\ell^d = \mathbf{Z}_\ell^\times \mathbf{Q}_\ell^{\times 2} / \mathbf{Q}_\ell^{\times 2}$ for any odd prime $\ell \nmid \Delta$. \item $\Ima\delta_\ell^d = \sqfr{\mathbf{Q}_\ell}$ for any odd prime $\ell \mid \Delta$, with $\ell \neq p$. \item $ \Ima \delta_p^d = \begin{cases} \sqfr{\mathbf{Q}_p} & \text{ if } p \equiv \pm 1 \pmod{8},\\ \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2} & \text{ if } p \equiv \pm 5 \pmod{8}. \end{cases}$ \item $\Ima\delta_2^d = \lbrace 1, -2 \rbrace$. \end{itemize}
By the Heegner hypothesis, for any prime $\ell \mid \Delta_\mathrm{min}$, we have $\quadsym{-2}{\ell} = \quadsym{-1}{\ell} \quadsym{2}{\ell} = 1$. This implies that such an $\ell$ is congruent to either $1$ or $-5$ modulo $8$. However, by the sum of two squares theorem mentioned above, we must have $\ell \equiv 1 \pmod{8}$ if $\ell \mid \Delta_\mathrm{min}$ and if $\ell \neq p$. If $p \equiv 1 \pmod{8}$, then the image of $p$ is contained in $\Phi$ and is non-trivial, by Proposition \ref{prop:kernel_of_selmer} and the assumptions made in the statement of the current proposition. So suppose that $p \equiv -5 \pmod{8}$. If there are two distinct odd primes $\ell$ and $\ell'$ dividing $p^{2z} + 16$, then the image of $\ell$, or equivalently of $\ell'$, is contained in $\Phi$ and is non-trivial. So for these cases, we have $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$. If there is only one odd prime dividing $p^{2z} + 16$, this will be covered in the following subsection \ref{subsection:exceptionalZmod4}.
Suppose that $d = -q$ for some odd prime $q$. Suppose first that $q \equiv 1 \pmod{4}$, i.e., $d \equiv 3 \pmod{4}$. As $\disc(\mathbf{Q}(\sqrt{d}) |\mathbf{Q}) = 4d = -4q$, the prime $2$ is ramified in $K = \mathbf{Q}(\sqrt{d})$. Since $\left( \Delta_\mathrm{min}, d \right)_{\mathbf{Q}_2} = \left( 1, -q \right)_{\mathbf{Q}_2} = 1$ as $-q \equiv -1$ or $-5 \pmod{8}$, we have $i_2 = 2$. Since $i_\infty = 1$ and $i_q \ge 1$, we always have $\sum i_\ell \ge 4$.
Now assume that $d = -q$ with a prime $q \equiv 3 \pmod{4}$; then $d \equiv 1 \pmod{4}$. In this case the prime $2$ is unramified in $K$. So we have $i_2 = 0$. Let us consider the Selmer group $\Sel^{\phi_d}(E_d/\mathbf{Q})$. \begin{itemize} \item $\Ima \delta_\infty^d = \sqfr{\mathbf{R}}$. \item $\Ima \delta_\ell^d = \mathbf{Z}_\ell^\times \mathbf{Q}_\ell^{\times 2} / \mathbf{Q}_\ell^{\times 2}$ for any odd prime $\ell \nmid \Delta$, $\ell \neq q$. \item $\Ima \delta_\ell^d = \sqfr{\mathbf{Q}_\ell}$ for any odd prime $\ell \mid \Delta$, and $\ell \neq p$. \item $\Ima \delta_p^d = \begin{cases} \sqfr{\mathbf{Q}_p} & \text{ if } p \equiv 1 \pmod{4}, \\ \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2} & \text{ if } p \equiv 3 \pmod{4}. \end{cases}$ \item $\Ima \delta_q^d = \begin{cases} \lbrace 1, qu \rbrace \subset \sqfr{\mathbf{Q}_q} & \text{ if } q \nmid \left( p^{2z} + 8 \right) \text{ and } \left( \dfrac{p^{2z} + 16}{q} \right) = 1, \\ \sqfr{\mathbf{Q}_q} & \text{ otherwise,} \end{cases}$ for some $u \in \mathbf{Z}_q^\times$. \item $\Ima \delta_2^d = \lbrace 1, 5 \rbrace$. \end{itemize}
Note that for any odd prime $\ell \mid \Delta$, we have $1 = \left( \dfrac{-q}{\ell} \right) = \left( \dfrac{-1}{\ell} \right) \left( -1 \right)^{\frac{q-1}{2}\frac{\ell-1}{2}} \left( \dfrac{\ell}{q} \right) = \left( \dfrac{\ell}{q} \right)$. If $p \equiv 1 \pmod{4}$, then the image of $p$ is contained in $\Phi$ and is non-trivial. Even if $p \equiv 3 \pmod{4}$, if there are at least two odd prime divisors of $\Delta_\mathrm{min}$ apart from $p$, then we also have $\dim_{\mathbf{F}_2} \Phi \ge 1$, i.e., $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$. If there is only one odd prime dividing $p^{2z} + 16$, this will be covered in the `exceptional case' subsection \ref{subsection:exceptionalZmod4}.
Assume $d=-2q$. We have $i_\infty = 1$ always. Note that $\left( \Delta_\mathrm{min}, d \right)_{\mathbf{Q}_2} = \left( p^{2z} \left( p^{2z} + 16 \right), -2q \right)_{\mathbf{Q}_2} = \left( 1, -2q \right)_{\mathbf{Q}_2} = 1$, whence $i_2 = 2$. Since $i_q \ge 1$, we always have $\sum i_\ell \ge 4$.
Finally, assume $d = -qq'$. If the prime $2$ is ramified in $K = \mathbf{Q}(\sqrt{d})$, then we certainly have $\sum i_\ell \ge 4$. Hence we may assume that $2$ is unramified, which means that $d \equiv 1 \pmod{4}$. Without loss of generality, we then assume $q \equiv 1 \pmod{4}$ and $q' \equiv 3 \pmod{4}$. We may further assume $i_q = i_{q'} = 1$, i.e., $ \left( \dfrac{ p^{2z} + 16 }{q} \right) = \left( \dfrac{ p^{2z} + 16 }{q'} \right)= -1$. Now consider the local images of $\Sel^{\phi_d}(E_d / \mathbf{Q})$ as follows. \begin{itemize} \item $\Ima \delta_\infty^d = \sqfr{\mathbf{R}}$. \item $\Ima \delta_\ell^d = \mathbf{Z}_\ell^\times \mathbf{Q}_\ell^{\times 2} / \mathbf{Q}_\ell^{\times 2}$ for any odd prime $\ell \nmid \Delta$, $\ell \neq q, q'$. \item $\Ima \delta_\ell^d = \sqfr{\mathbf{Q}_\ell}$ for any odd prime $\ell \mid \Delta$ and $\ell \neq p$. \item $\Ima \delta_p^d = \begin{cases} \sqfr{\mathbf{Q}_p} & \text{ if } p \equiv 1 \pmod{4}, \\ \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2} & \text{ if } p \equiv 3 \pmod{4}. \end{cases}$ \item $ \Ima \delta_q^d = \begin{cases} \sqfr{\mathbf{Q}_q} & \text{ if } q \nmid \left( p^{2z} + 8 \right), \\ \lbrace 1, qu \rbrace \subset \sqfr{\mathbf{Q}_q} & \text{ if } q \mid \left( p^{2z} + 8 \right), \end{cases}$ for some $u \in \mathbf{Z}_q^\times$. \item $\Ima \delta_{q'}^d = \sqfr{\mathbf{Q}_{q'}}$.
\item $\Ima \delta_2^d = \lbrace 1, 5 \rbrace$. \end{itemize} Note that for any odd prime $\ell$ dividing $\Delta$, we get \begin{equation*} 1 = \left( \dfrac{-qq'}{\ell} \right) = \left( \dfrac{-1}{\ell} \right) \left( -1 \right)^{\frac{\ell-1}{2}\frac{q-1}{2}} \left( \dfrac{\ell}{q} \right)\left( -1 \right)^{\frac{\ell-1}{2}\frac{q'-1}{2}} \left( \dfrac{\ell}{q'} \right) = \left( \dfrac{\ell}{q} \right) \left( \dfrac{\ell}{q'} \right) \end{equation*} and thus we have either $\left( \dfrac{\ell}{q} \right) = \left(\dfrac{\ell}{q'} \right) = 1$ or $\left( \dfrac{\ell}{q} \right) = \left(\dfrac{\ell}{q'} \right) = -1$.
Suppose first that $p \equiv 1 \pmod{4}$. If $\left( \dfrac{p}{q} \right) = 1$, then we are done, since the image of $p$ in $\Phi$ is non-trivial. If $\left( \dfrac{p}{q} \right) = -1$, then the image of either $p$ or $pq$ in $\Phi$ is non-trivial. Now, suppose that $p \equiv 3 \pmod{4}$. If there are at least two distinct prime divisors of $p^{2z} + 16$, then among those divisors, at least one $\ell$ must have $\left( \dfrac{\ell}{q} \right) = 1$, since $\left( \dfrac{ p^{2z} + 16 }{q} \right) = -1$. For these cases we have $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$. If there is only one odd prime dividing $p^{2z} + 16$, this will be covered in the `exceptional case' subsection \ref{subsection:exceptionalZmod4}.
So far, we have shown that in every case of $d$ we obtain $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$, up to a family of exceptions. Thus by Kramer's formula, we have $4 \mid C \cdot \left( \# \Sh(E/K) \right)^{1/2}$ for the curves not in the exceptional family. \end{proof}
\begin{proposition} Suppose that $\beta = (-1)^s2^m p^{m'}$ for some $s \in \lbr 0, 1 \rbr$, an odd prime $p$, and $m, m' \geq 0$. By the considerations of the above subsection and by the above proposition, the remaining cases are divided into the following three cases. \begin{itemize} \item $s=1$, $m=2z$ for some $z=3,4$ or odd $z \geq 5$ and $m'=0$, \item $s=1$, $m=0$ and $m'=2z$ for some $z \in \mathbf{Z}_{>0}$, or \item $s=1$, $m = 8$ and $m' = 2z$ for some odd $z \in \mathbf{Z}_{>0}$. \end{itemize} Furthermore, by the above subsection, we assume that for every odd prime $\ell$, $\ord_\ell \left( \beta + 16 \right)$ is either zero or odd. For each such case, we have $\dim_{\mathbf{F}_2} \Sh(E/K)[2] \ge 1$, i.e., Theorem \ref{th:torgp_type_4} is true. \end{proposition} \begin{proof} The proofs are similar to that of Proposition 5.2. \end{proof}
\subsection{Exceptional case} \label{subsection:exceptionalZmod4} This family is parametrised by the following Weierstrass equation: \begin{equation*} E: y^2 + p^{z} xy - p^{z} y = x^3 - x^2, \end{equation*} where $p$ is an odd prime congruent to $3$ modulo $4$. We consider the cases when $p^{2z} + 16$ is a power of a prime, in other words, $p^{2z} + 16 = q^k$ for some odd prime $q$ and positive integer $k$. When $k > 1$, this Diophantine equation has only the integer solution $p^{z} = 3$, $q = 5$, and $k = 2$; cf. \cite{CCS}, Lemma 5.5. But this case corresponds to the curve `15a3', having torsion subgroup $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/4\mathbf{Z}$. So we can exclude it from our consideration, and we may assume $k = 1$, i.e., $p^{2z} + 16$ is a prime.
Here the discriminant $\Delta = p^{2z} \left( p^{2z} + 16 \right) = p^{2z}q$, which is the minimal discriminant. The conductor of the curve $E$ is $pq$, and $E(\mathbf{Q})_\mathrm{tors} \simeq \mathbf{Z}/4\mathbf{Z}$. Let $G$ be the unique subgroup of $E(\mathbf{Q})_\mathrm{tors}$ of order 2, and let $E'$ be the curve $E/G$.
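The quotient construction used here is easy to reproduce with Sage \cite{sagemath}. The sketch below is an illustration only, for the sample value $p^z = 11$ (so that $p \equiv 3 \pmod{4}$ and $p^{2z} + 16 = 137$ is prime): it builds $E$, forms the quotient by its unique rational subgroup of order $2$, and prints the torsion subgroup of the quotient, anticipating the explicit equation for $E'$ given below.
\begin{verbatim}
# Sage sketch (illustration only) for the exceptional family, with p^z = 11.
pz = 11                                       # 11^2 + 16 = 137 is prime, 11 = 3 mod 4
E = EllipticCurve([pz, -1, -pz, 0, 0])        # y^2 + p^z*x*y - p^z*y = x^3 - x^2
assert E.torsion_subgroup().order() == 4      # text: E(Q)_tors = Z/4Z
print(E.discriminant().factor())              # text: Delta = p^(2z) * (p^(2z) + 16)
phi = E.isogenies_prime_degree(2)[0]          # quotient by the order-2 subgroup G
Eprime = phi.codomain()                       # a model of E' = E/G
print(Eprime.torsion_subgroup().invariants()) # text: E' has full rational 2-torsion
\end{verbatim}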
We can find a Weierstrass equation for $E'$ thanks to Vélu's formulae (cf. \cite{MMR}). The Weierstrass equation for $E'$ is given as follows: \begin{equation*}\label{eq:1-Epr} y^2 + p^z xy - p^z y = x^3 - x^2 -5x - (p^{2z} + 3), \end{equation*} with discriminant $\Delta' = p^{4z}(p^{2z} + 16)^2$. Factoring the 2-torsion polynomial, we see that $E'(\mathbf{Q})$ contains the full 2-torsion subgroup; the $x$-coordinates of the nontrivial 2-torsion points are $3$, $-1$, and $-\left( p^{2z} + 1 \right)/4$. In particular, the weak Gross--Zagier conjecture is true for $E'$ (cf. \S \ref{section:torgp_type_2_4} and \S \ref{section:torgp_type_2_2}). Now the next corollary follows from the isogeny invariance of the Gross--Zagier conjecture.
\begin{corollary}[to Proposition \ref{prop:isoginv}] The weak Gross--Zagier conjecture is true for any elliptic curve $E$ in this family and any quadratic field $K$ satisfying the Heegner hypothesis. \end{corollary}
\begin{proof} Let $\theta: E' \to E$ be the isogeny dual to $E \to E/G$, and take modular parametrisations respecting $\theta$; i.e., we first choose a modular parametrisation $\pi'$ of $E'$ and let $\pi = \theta \circ \pi'$ be the modular parametrisation for $E$. By Proposition \ref{prop:isoginv} (c), and by the remark just below the proposition, for a fixed $E$ in the family, the weak Gross--Zagier conjecture is true except possibly for at most 4 quadratic fields. Since we are only concerned with 2-divisibility, let us determine the quadratic fields satisfying $2 = \ord_2 \# E'(\mathbf{Q})_\mathrm{tors} < \ord_2 \# E'(K)_\mathrm{tors}$. If this inequality is satisfied, then $E'(K)_\mathrm{tors}$ must contain a point of exact order 4.
The $4$-torsion polynomial for $E'$ (i.e., the polynomial whose roots are the $x$-coordinates of the points in $E'[4](\overline{\mathbf{Q}})$) is given as follows: \begin{equation*} f_1(x) f_2(x) f_3(x) g(x) \end{equation*} where \begin{itemize} \item $f_1(x) = 2x^2 + (p^{2z} + 4) x + (-p^{2z} + 2)$, \item $f_2(x) = x^2 - 6x - (p^{2z} + 7)$, \item $f_3(x) = x^2 + 2x + (p^{2z} + 1)$, \item $g(x) = \left( 4x + p^{2z} + 4 \right) \left( x + 1 \right) \left( x - 3 \right)$. \end{itemize} Evidently, the roots of $g(x)$ correspond to points in $E'[2]$. The discriminants $d_i$ of the $f_i(x)$ are as follows: \begin{itemize} \item $d_1 = p^{2z} \left( p^{2z} + 16 \right) = p^{2z} q$, \item $d_2 = 4 \left( p^{2z} + 16 \right) = 4q$, \item $d_3 = -4p^{2z}$. \end{itemize} Thus if $K = \mathbf{Q}(\sqrt{d})$ is a quadratic field, the polynomials $f_i(x)$ do not have roots in $K$ unless $K = \mathbf{Q}(\sqrt{-1})$ or $K = \mathbf{Q}(\sqrt{q})$. Note that $\mathbf{Q}(\sqrt{q})$ is a real quadratic field, which does not concern us here. If $K = \mathbf{Q}(\sqrt{-1})$, then we have $u_K = 2$. As we already know $2 \mid C_E \cdot \left( \# \Sh(E/K) \right)^{1/2}$, we have $4 \mid u_K \cdot C_E \cdot \left( \# \Sh(E/K) \right)^{1/2}$, and the weak Gross--Zagier conjecture is also true for this case. \end{proof}
\section{$E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z}$} \label{section:torgp_type_2} \begin{theorem}\label{th:torgp_type_2} If $E$ is an elliptic curve defined over $\mathbf{Q}$, having rational torsion subgroup $E(\QQ)_\mathrm{tors}$ isomorphic to $\mathbf{Z}/2\mathbf{Z}$, then the order $2 = \# E(\QQ)_\mathrm{tors}$ divides $u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2}$.
\end{theorem} For such elliptic curves, we can find a Weierstrass model following Kubert \cite{Ku}: \begin{equation}\label{eq:Weierstrass_basic_type_2} y^2 = x^3 + Ax^2 + Bx, \end{equation} where $A, B \in \mathbf{Z}$. Note that $A$ and $B$ are not necessarily relatively prime. This elliptic curve has discriminant $\Delta = 16B^2 (A^2 - 4B)$ and $c_4 = 16(A^2 - 3B)$. Let $N$ be the conductor of $E$, and $\Delta_\mathrm{min}$ be the minimal discriminant of $E$.
\subsection{Tamagawa numbers} The purpose of this subsection is to compute Tamagawa numbers of $E$ at various primes, in order to reduce the number of cases. The remaining cases will be dealt with in the subsequent subsections. More precisely, we show the following.
\begin{proposition}\label{prop:final_reduction_about_A_and_B} Let $E$ be an elliptic curve defined by the equation \eqref{eq:Weierstrass_basic_type_2}. \begin{enumerate} \item If $\gcd(A,B) \neq 1$, i.e., if there is a common prime dividing both $A$ and $B$, then $2 \mid C$. \item If $B \not\in \lbr 1, -1, 16, -16 \rbr$, then $2 \mid C$. \item If $p$ is an odd prime such that $\ord_p \left( A^2 - 4B \right)$ is a positive even integer, then $2 \mid C_p$. \end{enumerate} \end{proposition}
\begin{proof} (a) Let $p$ be a prime dividing both $A$ and $B$. First of all, if both $\ord_p A \ge 2$ and $\ord_p B \ge 4$ are true, then we can make a change of variables via $[p,0,0,0]$ to get another equation of the same form as equation \eqref{eq:Weierstrass_basic_type_2} with $(A,B)$ replaced by $(A/p^2, B/p^4)$. Consequently, we assume either $\ord_p A <2$ or $\ord_p B <4$. Using Tate's algorithm, we find reduction types for $E$ modulo $p$ as summarised in the following table.
\begin{center} \begin{tabular}{|c|c|c|c|} \hline $\ord_p B$ & $\ord_p A$ & Reduction Types of $E$ modulo $p$ & Tamagawa Number $C_p$ \\ \hline\hline 1 & & $\mathrm{III}$ & 2 \\ \hline 2 & & $\mathrm{I}_k^\ast$ for some $k$ & even \\ \hline \multirow{2}{*}{3} & 1 & $\mathrm{I}_k^\ast$ for some $k$ & even \\ \cline{2-4} & $\ge 2$ & $\mathrm{III}^\ast$ & 2 \\ \hline $\ge 4$ & 1 (necessarily) & $\mathrm{I}_k^\ast$ for some $k$ & even \\ \hline \end{tabular} \end{center}
(b) Let $p$ be a prime such that $p \mid B$ but $p \nmid A$. Using Tate's algorithm, we have the following results.
\begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $\ord_p B$ & $A \mod{4}$ & Reduction Types of $E$ modulo $p$ & Tamagawa Number $C_p$ \\ \hline \hline $p \neq 2$ & & & $\mathrm{I}_n$ with $n = \ord_p \Delta = 2 \ord_p B$ & even \\ \hline \multirow{6}{*}{$p = 2$} & 1 & & $\mathrm{III}$ & 2 \\ \cline{2-5} & 2 & & $\mathrm{I}_k$ for some $k$ & even \\ \cline{2-5} & $\ge 3$ & $3$ & $\mathrm{I}_k$ for some $k$ & even \\ \cline{2-5} & 3 & \multirow{3}{*}{$1$} & $\mathrm{III}^\ast$ & even \\ \cline{2-2} \cline{4-5} & 4 & & $\mathrm{I}_0$ (good) & 1 \\ \cline{2-2} \cline{4-5} & $\ge 5$ & & $\mathrm{I}_n$ with $n= \ord_p \Delta = 2\ord_p B - 8$ & even \\ \hline \end{tabular} \end{center}
In particular, if $B \not\in \lbr 1, -1, 16, -16 \rbr$, we always have $2 \mid C$.
(c) Let $p$ be an odd prime such that $\ord_p \left( A^2 - 4B \right)$ is an even positive integer. By (a), we assume $p \nmid AB$. Tate's algorithm tells us that in this case, $E$ has reduction of type $\mathrm{I}_{\ord_p \left( A^2 -4B \right)}$, and we have $2 \mid C_p$. \end{proof}
\begin{proposition}\label{prop:final_reduction_about_A_and_B_2} Let $E$ be an elliptic curve defined by the equation \eqref{eq:Weierstrass_basic_type_2}. \begin{enumerate} \item Suppose that $B = 1$.
If $A \equiv 0$ or $1 \pmod{4}$, then $C_2= 1$. If $A \equiv 3 \pmod{4}$, then $C_2 = 2$. When $A \equiv 2 \pmod{4}$, the situation is more complicated, and we summarise the value $C_2$ modulo $2$ according to $A \pmod{128}$ as follows.
\begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $A \mod{128}$ & 2 & 6 & 10 & 14 & 18 & 22 & 26 & 30 & 34 & 38 & 42 & 46 & 50 & 54 & 58 & 62 \\ \hline $C_2 \mod{2}$ & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\ \hline\hline $A \mod{128}$ & 66 & 70 & 74 & 78 & 82 & 86 & 90 & 94 & 98 & 102 & 106 & 110 & 114 & 118 & 122 & 126 \\ \hline $C_2 \mod{2}$ & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & \\ \hline \end{tabular} \end{center}
If $A \equiv 126 \pmod{128}$, then the parity of $C_2$ is the same as the parity of $\ord_2 \left( A+2 \right)$. Moreover, $E$ has good reduction modulo 2 if and only if $A \equiv 62 \pmod{128}$. In particular, $C_2$ is odd if and only if $A \equiv 0 \pmod{4}$; $A \equiv 1 \pmod{4}$; $A \equiv 10 \pmod{16}$; $A \equiv 62 \pmod{128}$; or $A \equiv 126 \pmod{128}$ and $\ord_2 \left( A + 2 \right)$ is odd. \item Suppose that $B = -1$. Then $C_2$ is even if and only if $A \equiv 0 \pmod{4}$. \end{enumerate} \end{proposition} \begin{proof} Tate's algorithm. \end{proof}
\begin{remark} By Propositions \ref{prop:final_reduction_about_A_and_B} and \ref{prop:final_reduction_about_A_and_B_2}, we assume the following throughout this section: $E$ is an elliptic curve defined by the equation \eqref{eq:Weierstrass_basic_type_2} for relatively prime $A \in \mathbf{Z}$ and $B \in \lbr 1, -1, 16, -16 \rbr$ with discriminant $\Delta = 2^4 B^2 (A^2 - 4B)$, such that every odd prime divisor of $A^2 - 4B$ has odd exponent. Moreover, \begin{itemize} \item when $B = 1$, we assume $A \equiv 0 \pmod{4}$, $A \equiv 1 \pmod{4}$, $A \equiv 10 \pmod{16}$, $A \equiv 62 \pmod{128}$, or $A \equiv 126 \pmod{128}$ and $\ord_2 \left( A + 2 \right)$ is odd; \item when $B = -1$, we assume $A \not\equiv 0 \pmod{4}$; \item when $B = \pm 16$, we assume $A \equiv 1 \pmod{4}$. \end{itemize} \end{remark}
\begin{remark} We furthermore assume $\Delta >0$, after removing finitely many exceptional cases by explicit computation. As $\Delta = 16 B^2 (A^2 - 4B)$, we need to check the cases $(A,B) = (0,1)$, $(\pm 1, 1)$ and $(\pm n, 16)$ for $n = 0, 1, \cdots, 7$. This is easy with Sage Mathematics Software \cite{sagemath}. \end{remark}
\subsection{$\left( \# \Sh(E/K) \right)^{1/2}$} In this subsection, we shall see $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$ for the various cases remaining after the Tamagawa number considerations above. Our main job is to show $\sum i_\ell + \dim \Phi \ge 4$ (notations from subsection \ref{subsection:Kramer}). Then, \begin{equation*} \boxed{\sum i_\ell + \dim \Phi \ge 4} \Longrightarrow \boxed{ \dim_{\mathbf{F}_2} \Sh(E/K)[2] \ge 1} \Longrightarrow \boxed{2 \mid \left( \# \Sh(E/K) \right)^{1/2}}. \end{equation*} The first implication follows from Kramer's theorem (see subsection \ref{subsection:Kramer}), and the last implication is due to Kolyvagin's theorem \cite{Kol}.
\begin{proposition} Suppose that $B = -1$. By the considerations of the subsection above, we assume \begin{itemize} \item $A \not\equiv 0 \pmod{4}$, and \item if $\ell$ is an odd prime dividing $A^2 - 4B = A^2 + 4$, then it has odd exponent.
\end{itemize} Then we have $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$, i.e., $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$, except for \begin{itemize} \item `128b2' and `128d2', for which $2 \mid M$; \item a family of curves for which $A^2 + 4$ is a power of a prime number. \end{itemize} \end{proposition} \begin{remark} The exceptional family will be dealt with in the next subsection \ref{subsection:exceptionalZmod2}. \end{remark}
\begin{proof} Our elliptic curve $E$ is given by \begin{equation}\label{eq:Weierstrass_B=-1} y^2 = x^3 + Ax^2 - x, \end{equation} with $A \equiv 1, 2, 3 \pmod{4}$. In all cases, the minimal discriminant of the curve is $\Delta_\mathrm{min} = \Delta = 2^4(A^2+4)$. In particular, the prime $2$ is always a prime of bad reduction. Since $2$ must split completely in $K$, we must have $d \equiv 1 \pmod{8}$. Since $i_q \ge 1$ for each prime divisor $q$ of $d$, we may assume that there are at most 2 prime divisors in $d$, as $i_\infty =1$ always. Combining this with the fact that $d \equiv 1 \pmod{8}$, we have either $d = -q$ for an odd prime $q$ such that $q \equiv -1 \pmod{8}$ or $d = -qq'$, for distinct odd primes $q$ and $q'$ such that $q \equiv 1 \pmod{4}$ and $q' \equiv 3 \pmod{4}$ with either $(q,q') \equiv (1,-1) \pmod{8}$ or $(q,q') \equiv (5,-5) \pmod{8}$. If $\ell$ is an odd prime dividing $\Delta$, i.e., $\ell \mid \left( A^2 + 4 \right)$, then by the ``sum of two squares'' theorem, we must have $\ell \equiv 1 \pmod{4}$, i.e., $\ell \equiv 1$ or $5 \pmod{8}$.
Now we compute the group $\Sel^\phi(E/\mathbf{Q})$. Note the following local images. The definitions of $\delta_\ell$ are the same as in \S \ref{section:torgp_type_4}. \begin{itemize} \item $\Ima \delta_\infty = \lbrace 1 \rbrace$. \item $\Ima \delta_p = \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2}$, for odd primes $p \nmid \Delta$. \item $\Ima \delta_p = \sqfr{\mathbf{Q}_p}$ for odd primes $p \mid \Delta$. \item $\Ima \delta_2 = \begin{cases} \lbrace 1, 5 \rbrace & \text{ if } A \equiv 1, 3 \pmod{4}, \\ \lbrace 1, 2, 5, 10 \rbrace & \text{ if } A \equiv 2 \pmod{4}. \end{cases}$ \end{itemize}
Suppose that $d = -q$ with $q \equiv -1 \pmod{8}$. Since $\displaystyle \left( \frac{-q}{p} \right) = \left( -1 \right)^\frac{p-1}{2} \left( -1 \right)^{\frac{p-1}{2}\frac{q-1}{2}} \left( \frac{p}{q} \right)$, we have $\displaystyle \left( \frac{p}{q} \right) = 1$ for any odd prime $p \mid (A^2 +4)$. If $A \equiv 1, 3 \pmod{4}$, then $A^2 +4$ is odd, and as every prime divisor of $A^2 +4$ has odd exponent, we can conclude that $\displaystyle \left( \frac{A^2 + 4}{q} \right) = 1$ because the left-hand side is the product of $\displaystyle \left( \frac{p}{q} \right)$ taken over all primes $p \mid (A^2+4)$. Secondly, suppose that $A \equiv 2 \pmod{4}$. This means that there is an integer $k$ such that $A = 2+4k$, whence $A^2 + 4 = 16k^2 + 16k +8 = 2^3 (2k^2 + 2k +1)$, so $\ord_2 (A^2 + 4) = 3$. In this case, $\displaystyle \left( \frac{A^2+4}{q} \right) = \left( \frac{2}{q} \right) \prod_{\substack{p \text{ odd primes, } \\ p \mid A^2+4}} \left( \frac{p}{q} \right) = 1$, since $q \equiv -1 \pmod{8}$. Therefore, we always have $\displaystyle \left( \frac{A^2 + 4}{q} \right) = 1$, i.e., $i_q = 2$, whence $\sum i_\ell = 3$.
Now consider the Selmer group $\Sel^{\phi_d}(E_d/\mathbf{Q})$. The local images are given as follows. \begin{itemize} \item $\Ima \delta_\infty^d = \lbrace 1 \rbrace$.
\item $\Ima \delta_p^d = \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2}$, for odd primes $p \nmid \Delta q$. \item $\Ima \delta_p^d = \sqfr{\mathbf{Q}_p}$ for odd primes $p \mid \Delta$. \item $\Ima \delta_q^d = \lbrace 1 \rbrace$. \item $\Ima \delta_2^d = \begin{cases} \lbrace 1, 5 \rbrace & \text{ if } A \equiv 1, 3 \pmod{4}, \\ \lbrace 1, 2, 5, 10 \rbrace & \text{ if } A \equiv 2 \pmod{4}. \end{cases}$ \end{itemize}
If $A \equiv 2 \pmod{4}$, then $\ord_2 (A^2 +4) = 3$. As $2$ is a quadratic residue modulo $q$, and since $A^2 + 4$ must have at least one odd prime divisor except for the cases $A = \pm 2$, the image of $2$ gives a nontrivial element in $\Phi$. When $A = \pm 2$, the curve is equal to `128b2' or `128d2'. In these cases $M = 2$. If $A \equiv 1$ or $3 \pmod{4}$ and if there are at least two prime divisors of $A^2 + 4$, then we can also find a nontrivial element in $\Phi$. If $A^2 +4$ is a power of a prime, then this will be dealt with as an exceptional case; see subsection \ref{subsection:exceptionalZmod2}.
Suppose that $d = -qq'$ with $q \equiv 1 \pmod{4}$ and $q' \equiv 3 \pmod{4}$. First note that we are reduced to the case that $\displaystyle \left( \frac{A^2 + 4}{q} \right) = \left( \frac{A^2+4}{q'} \right) = -1$, since otherwise we have $\sum i_\ell \ge 4$. Moreover, if $\ell \mid A$ for $\ell = q$ or $q'$, then we have $\quadsym{A^2 + 4}{\ell} = \quadsym{4}{\ell} = 1$, a contradiction. Now we impose the Heegner hypothesis. First, since the prime $2$ must split completely in $K$, we have $d \equiv 1 \pmod{8}$, and thus $(q, q') \equiv (1, -1)$ or $(5,-5) \pmod{8}$. For odd primes $p$ dividing $\Delta$, we must have $p \equiv 1 \pmod{4}$ by the sum of two squares theorem, and thus either $\quadsym{p}{q} = \quadsym{p}{q'} = 1$ or $\quadsym{p}{q} = \quadsym{p}{q'} = -1$.
The local images for the Selmer group $\Sel^{\phi_d}(E_d/\mathbf{Q})$ are given as follows: \begin{itemize} \item $\Ima \delta_\infty^d = \lbrace 1 \rbrace$; \item $\Ima \delta_p^d = \mathbf{Z}_p^\times \mathbf{Q}_p^{\times 2} / \mathbf{Q}_p^{\times 2}$, for odd primes $p \nmid \Delta q$ (including $p = q'$); \item $\Ima \delta_p^d = \sqfr{\mathbf{Q}_p}$, for odd primes $p \mid \Delta q$; \item $\Ima \delta_2^d = \begin{cases} \lbrace 1, 5 \rbrace & \text{ when } A \equiv 1, 3 \pmod{4}, \\ \lbrace 1, 2, 5, 10 \rbrace & \text{ when } A \equiv 2 \pmod{4}. \end{cases}$ \end{itemize}
As above, if $A \equiv 2 \pmod{4}$, then the image of $2$ in $\Phi$ is a non-trivial element, so that $\dim_{\mathbf{F}_2} \Phi \ge 1$. Now assume $A$ is odd. If $A^2 + 4$ has at least two distinct odd prime divisors, then the image of either one of them gives a non-trivial element in $\Phi$. If $A^2 +4$ is a power of a prime, then this will be dealt with as an exceptional case; see subsection \ref{subsection:exceptionalZmod2}. \end{proof}
\begin{proposition} Suppose that $B = 1$. By the considerations of the subsection above, we assume \begin{itemize} \item $A \equiv 0 \pmod{4}$, $A \equiv 1 \pmod{4}$, $A \equiv 10 \pmod{16}$, $A \equiv 62 \pmod{128}$, or $A \equiv 126 \pmod{128}$ (in the last case we also assume $\ord_2 (A +2 )$ is odd); and \item if $\ell$ is an odd prime dividing $A^2 - 4B = A^2 - 4$, then it has odd exponent. \end{itemize} Then we have $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$, i.e., $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$, except for `17a3', `32a3', and `80a2', for which $2 \mid M$.
\end{proposition} \begin{proof} The main difficulty in proving this proposition is the large number of cases for $A$. The key is to group the various cases into three categories: (i) $E$ has good reduction modulo $2$, i.e., $A \equiv 62 \pmod{128}$; (ii) $A \equiv 126 \pmod{128}$; and (iii) the remaining cases. After this, the proofs are routine. \end{proof}
\begin{proposition} Suppose that $B = 16$. By the considerations of the subsection above, we assume \begin{itemize} \item $A \equiv 1 \pmod{4}$, and \item if $\ell$ is an odd prime dividing $A^2 - 4B = A^2 - 64$, then it has odd exponent. \end{itemize} Then we have $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$, i.e., $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$, except for `17a4', for which $2 \mid M$. \end{proposition} \begin{proof} The proofs are similar to the above. \end{proof}
\begin{proposition} Suppose that $B = -16$. By the considerations of the subsection above, we assume \begin{itemize} \item $A \equiv 1 \pmod{4}$, and \item if $\ell$ is an odd prime dividing $A^2 - 4B = A^2 + 64$, then it has odd exponent. \end{itemize} Then we have $\sum i_\ell + \dim_{\mathbf{F}_2} \Phi \ge 4$, i.e., $2 \mid \left( \# \Sh(E/K) \right)^{1/2}$, except for \begin{itemize} \item $A = 15$, in this case the curve is `272b2' having $C_2 = 2$; \item the family characterised by the condition that $A^2 + 64$ is a prime, having $M=2$ for any curve in this family. \end{itemize} \end{proposition} \begin{proof} The proofs are similar. The exceptional family here is the Neumann--Setzer family, and $M = 2$ can be found in \cite{SW04}. \end{proof}
\subsection{Exceptional case} \label{subsection:exceptionalZmod2} In this subsection we deal with the cases where $A^2 + 4$ is a power of a prime. By Lemma 5.4 of Cao--Chu--Shiu \cite{CCS}, $A^2 + 4$ is a prime unless either $A = 2$ or $A = 11$. For those two non-prime cases, the corresponding curves are `128d2' and `80b4', and both of them have Manin constant 2. Excluding these cases, we assume $A^2 + 4$ is a prime.
This family is parametrised by the following Weierstrass equation: $y^2 = x^3 + Ax^2 - x$, where $A$ is an integer not divisible by $4$, and $A^2 + 4 = p$ is an odd prime. It has discriminant $\Delta=16(A^2+4) = 16p$, which is the minimal discriminant. The conductor of the curve $E$ is $4p$ and $E(\mathbf{Q})_\mathrm{tors} = \mathbf{Z}/2\mathbf{Z}$. Let $G = E(\mathbf{Q})_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z}$, and consider the curve $E' := E/G$. By Vélu's formula, we can find a Weierstrass equation for $E'$. This is given as follows: \begin{equation*} y^2 = x^3 + Ax^2 + 4x + 4A \end{equation*} with discriminant $\Delta' = -2^8 (A^2 + 4)^2 = -2^8 p^2$. As the $2$-torsion polynomial for $E'$ is given by \begin{equation*} 4(x^2 + 4)(x+A), \end{equation*} we must have a rational 2-torsion point $P = (-A, 0) \in E'(\mathbf{Q})$, i.e., $\mathbf{Z}/2\mathbf{Z} \subseteq E'(\mathbf{Q})_\mathrm{tors}$. If $E'(\mathbf{Q})_\mathrm{tors} \supsetneq \mathbf{Z}/2\mathbf{Z}$, the weak Gross--Zagier conjecture is proved in the above sections. So we assume $E'(\mathbf{Q})_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z}$.
Making a change of variables $x \mapsto x' = x+ A$, we get another Weierstrass equation \begin{equation*} y^2 = x^3 - 2Ax^2 + (A^2 + 4)x. \end{equation*} By Tate's algorithm, we know that $C_p = 2$. Thus the weak Gross--Zagier conjecture is true for $E'$ unconditionally. Now we consider $E'(K)_\mathrm{tors}$ for a quadratic field $K$ satisfying the Heegner hypothesis.
If $\ord_2 \# E'(K)_\mathrm{tors} > \ord_2 \# E'(\mathbf{Q})_\mathrm{tors}$, then $E'(K)$ must contain either $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$ or $\mathbf{Z}/4\mathbf{Z}$. For the first case, the 2-torsion polynomial of $E'$ must split into linear factors in $K$. As the polynomial is $4(x^2 + 4)(x+A)$, this happens if and only if $K=\mathbf{Q}(\sqrt{-1})$. But in this case $u_K = 2$, and the weak Gross--Zagier conjecture is also true for $E$. Now suppose that $E'(K)_\mathrm{tors}$ contains $\mathbf{Z}/4\mathbf{Z}$. By Lemma 13 in \cite{GJT}, we must have $A^2 + 4 = s^2$ for some $s \in \mathbf{Q}$. But since $A^2 + 4 = p$ is a prime, this is impossible. Therefore, we have the following corollary to Proposition \ref{prop:isoginv}.
\begin{corollary} The weak Gross--Zagier conjecture is true for $E$ in this family and for any quadratic field $K$ satisfying the Heegner hypothesis. \end{corollary}
\part{$E(\mathbf{Q})_\mathrm{tors}$ has a point of order 3} \section{Preliminaries for Part 2} \label{section:preliminaries_part_2} \subsection{Optimal curves} \label{subsection:optimalcurves} For a positive integer $N$, let $X_1(N)$ and $X_0(N)$ denote the usual modular curves defined over $\mathbf{Q}$. Let $\mathcal{C}$ denote an isogeny class of elliptic curves defined over $\mathbf{Q}$ of conductor $N$. For $i=0,1$, there is a unique curve $E_i \in \mathcal{C}$ and a parametrisation $\pi_i: X_i(N) \to E_i$ such that for any $E \in \mathcal{C}$ and parametrisation $\pi_i': X_i(N) \to E$, there is an isogeny $\phi_i:E_i \to E$ such that $\phi_i \circ \pi_i= \pi_i'$. For $i=0,1$, the curve $E_i$ is called the $X_i(N)$-\emph{optimal curve}. In \cite{BY}, the authors proved the following theorem, which was conjectured by Stein and Watkins \cite{SW02}.
\begin{theorem}[\cite{BY}, Theorem 1.1]\label{th:Byeon--Yhee} For $i=0,1$, let $E_i$ be the $X_i(N)$-optimal curve of an isogeny class $\mathcal{C}$ of elliptic curves defined over $\mathbf{Q}$ of conductor $N$. If there is an elliptic curve $E \in \mathcal{C}$ given by $y^2+axy+y=x^3$ with discriminant $\Delta = a^3-27=(a-3)(a^2+3a+9)$, where $a$ is an integer such that no prime factor of $a-3$ is congruent to $1$ modulo $6$ and $a^2+3a+9$ is a power of a prime number, then $E_0$ and $E_1$ differ by a 3-isogeny, which means that there is an isogeny $\pi: E_0 \to E_1$ with $3 \mid \deg(\pi)$. \end{theorem}
For any $E \in \mathcal{C}$, we let $E_{\mathbf{Z}}$ be the Néron model over $\mathbf{Z}$ and $\omega$ be a Néron differential on $E$. Let $\phi:E \to E'$ be an isogeny. We say that $\phi$ is \emph{étale} if the extension $E_{\mathbf{Z}} \to E'_{\mathbf{Z}}$ to Néron models is étale. If $\phi:E \to E'$ is an isogeny over $\mathbf{Q}$, then we have $\phi^*(\omega')=n\omega$ for some non-zero integer $n=n_{\phi}$, where $\omega'$ is a Néron differential on $E'$. The isogeny $\phi$ is étale if and only if $n_{\phi}=\pm 1$. If $\phi:E \to E$ is the multiplication by an integer $m$, then $\phi^*(\omega')=m\omega$. Thus if $\phi$ is any isogeny of degree $p$ for a prime number $p$, we must have $n_{\phi}=1$ or $n_{\phi}=p$. If $\phi'$ denotes the dual isogeny, then $\phi' \circ \phi=[p]$ is the multiplication by $p$ mapping. So precisely one of $\phi$ and $\phi'$ is étale. In \cite{St}, Stevens proved that in every isogeny class $\mathcal{C}$ of elliptic curves defined over $\mathbf{Q}$, there exists a unique curve $E_\mathrm{min} \in \mathcal{C}$ such that for every $E \in \mathcal{C}$, there is an étale isogeny $\phi: E_\mathrm{min} \to E$.
The curve $E_\mathrm{min}$ is called the \emph{(étale) minimal curve} in $\mathcal{C}$. Stevens conjectured that $E_\mathrm{min}=E_1$ and recently Vatsal proved the following theorem. \begin{theorem}[\cite{Va}, Theorem 1.10]\label{th:Vatsal} Suppose that the isogeny class $\mathcal{C}$ consists of semi-stable curves. The étale isogeny $\phi: E_\mathrm{min} \to E_1$ has degree a power of two. \end{theorem} \subsection{Cassels' theorem} Let $F$ be a number field with absolute Galois group $G_F$, $E$ and $E'$ be elliptic curves defined over $F$, and $\phi: E \to E'$ be an isogeny defined over $F$ with dual isogeny $\phi': E' \to E$. For various places $v$ of $F$, let $F_v$ denote the completion of $F$ with respect to the place $v$. The sizes of the $\phi$-Selmer group and the $\phi'$-Selmer group are related by the following theorem of Cassels in \cite{Ca2}. \begin{theorem}[\cite{Ca2} or \cite{KS}, Theorem 1]\label{th:Cassels} Suppose $\phi$ is an isogeny from $E$ to $E'$ over $F$. Let $C_{\mathfrak{q}}$ and $C'_{\mathfrak{q}}$ be Tamagawa numbers of $E$ and $E'$ at a finite place $\mathfrak{q}$ of $F$, respectively. Then we have \begin{equation} \frac{\# \Sel^{\phi}(E/F)}{\# \Sel^{\phi'}(E'/F) }= \frac{\# E(F)[\phi] \cdot \prod_{v} \int_{E'(F_{v})} \left|\omega' \right|_{v} \cdot \prod_{\mathfrak{q}} C'_{\mathfrak{q}}} {\# E'(F)[\phi'] \cdot \prod_{v} \int_{E(F_{v})} \left|\omega \right|_{v} \cdot \prod_{\mathfrak{q}} C_{\mathfrak{q}}}, \end{equation} where $v$ runs through the infinite places, and $\mathfrak{q}$ runs through the finite places. \end{theorem} \section{$E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$} \label{section:torgp_type_2_6} In this section, we prove the Main Theorem for the cases when $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$. \begin{lemma}\label{lem:torsionpoints_generating_component_group} Let $E$ be an elliptic curve defined over $\mathbf{Q}$ given by $y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$ with $a_i \in \mathbf{Z}$, and $P$ be a torsion point of $E(\mathbf{Q})$ of a prime order $\ell$. Suppose that $E$ has bad reduction at $p$, having Weierstrass equation of the form $y^2+\overline{a_1} xy =x^3+\overline{a_2}{x}^2$ over $\mathbf{F}_p$, where $\overline{a_i} = a_i \pmod{p}$. If the point $P$ goes to $(0,0)$ in the reduced curve, then $\ell$ divides $C_p$. \end{lemma} \begin{proof} Let $E_0(\mathbf{Q}_p)$ be the group of $\mathbf{Q}_p$-rational points of $E$ which become non-singular points in the reduced curve modulo $p$. Since $P$ becomes singular, the class $P + E_0(\mathbf{Q}_p) \in E(\mathbf{Q}_p)/E_0(\mathbf{Q}_p)$ is non-trivial. Since $[\ell] P = O$, the identity element in $E(\mathbf{Q})$, the order of the element $P + E_0(\mathbf{Q}_p)$ is exactly $\ell$ in $E(\mathbf{Q}_p)/E_0(\mathbf{Q}_p)$. Thus $\ell \mid C_p = \left[ E(\mathbf{Q}_p) : E_0(\mathbf{Q}_p) \right]$. \end{proof} \begin{theorem} Suppose that $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$. Then the order $12 = \# E(\QQ)_\mathrm{tors}$ divides the Tamagawa number $C$ of $E$. 
\end{theorem} \begin{proof} From \cite{Ku}, Table 3, elliptic curves defined over $\mathbf{Q}$ having torsion subgroup $\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/6\mathbf{Z}$ are parametrised as follows: \begin{equation} y^2+ (u-v) xy - uv(v+u) y=x^3 - v(v+u)x^2, \end{equation} with $u,v \in \mathbf{Z}$, $\gcd(u,v)=1$ and $\displaystyle{\frac{u}{v}=\frac{(T-3S)(T+3S)}{2S(5S-T)}}$ for a pair of relatively prime integers $S,T \in \mathbf{Z}$, with $S > 0$. In the expression of $\dfrac{u}{v}$, the numerator $(T-3S)(T+3S)$ and the denominator $2S(5S-T)$ of the right hand side are relatively prime outside $2$. Kubert originally used a parametrisation that looks different from the current one, but a routine computation shows that the two are in fact equivalent. The discriminant $\Delta$ of the above equation is given by \begin{eqnarray*} \Delta &=& v^6 (v+u)^3u^2(9v+u)\\ &=&2^6S^6(5S-T)^6 (S-T)^6 (T-3S)^2 (T+3S)^2 (9S-T)^2. \end{eqnarray*} Let $P=(0,0)$ be a torsion point of $E(\mathbf{Q})$ of order $6$. One can easily check that $\Delta$ is minimal at every prime $p \mid v$ because $p$ cannot divide $c_4=(u + 3 v)(u^3 + 9 u^2 v + 3 u v^2 + 3 v^3)$. Similarly, $\Delta$ is minimal at every prime $q \mid (v+u)$ and $r \mid u$, except possibly for $q=2$ and $r=3$. Suppose either $S$ or $T$ is even. As $\gcd(S,T) = 1$, the other one is then odd. In this case $(T-3S)(T+3S)$ and $2S(5S-T)$ are relatively prime, and thus $u = (T-3S)(T+3S)$ and $v = 2S(5S-T)$. Suppose now that $S-T \neq \pm 1$. In this case there are two distinct primes $p \mid 2S(5S-T)$ (in fact, $p \mid v$) and $q \mid (S-T)$ (in fact, $q \mid (v+u)$ and $q$ is odd). Modulo these primes $p$ and $q$, the curve $E$ has split multiplicative reduction. By \cite{AEC}, Appendix C, Corollary 15.2.1, we have $6 \mid C_p$ and $6 \mid C_q$. So $36 \mid C$. Now assume $S-T = \pm 1$. In this case we can find two distinct primes $p = 2$ and $q$ dividing $v = 2S(5S-T)$. As above, modulo these primes $p=2$ and $q$, the curve $E$ has split multiplicative reduction. By \cite{AEC}, Appendix C, Corollary 15.2.1, we have $6 \mid C_p$ and $6 \mid C_q$. So $36 \mid C$. Now we assume that both $S$ and $T$ are odd. If $S=1$, then with the condition $\Delta \neq 0$, there is an odd prime $p \mid (5-T)(1-T)$ (in fact, $p \mid v(v+u)$) and a prime $r \mid (T-3)(T+3)$ (in fact, $r \mid u$ and $r \neq 3$, $p$). Since $E$ has split multiplicative reduction modulo $p$, by \cite{AEC} Appendix C, Corollary 15.2.1, we have $6 \mid C_p$. Since $E$ has bad reduction modulo $r$, with reduced equation of the form $y^2+\overline{a_1}xy=x^3+\overline{a_2}x^2$, by applying Lemma \ref{lem:torsionpoints_generating_component_group} to the point $[3]P=(uv, uv^2)$ of order 2, we have $2 \mid C_r$. So $12 \mid C$. If $S \neq 1$, then there is an odd prime $p \mid S$ (in fact, $p \mid v$) at which $E$ has split multiplicative reduction. By \cite{AEC}, Appendix C, Corollary 15.2.1, we have $6 \mid C_p$. When $(5S-T)(S-T)$ has an odd prime factor $q$ (in fact, $q|v(v+u)$ and $q \neq p$), $E$ has split multiplicative reduction modulo $q$. Similarly, we have $6 \mid C_q$. So $36 \mid C$. Suppose that $|5S-T|=2^A$ and $|S-T|=2^B$. From the condition that $S$ is odd and $S \neq 1$, one can find that either $A=2$ or $B=2$ (and not both). If $A=2$, we have $T=5S\pm 4$.
Substituting this into $(T-3S)(T+3S)$, we can find an odd prime $r \mid (S\pm 2)(2S\pm 1)$ (in fact, $r \mid u$ and $r\neq p$) at which $E$ has bad reduction $y^2+\overline{a_1}xy=x^3+\overline{a_2}x^2$. Note that if $(S\pm 2)(2S\pm 1)$ is a power of $3$, then we must have $S = 1, 2$ or $5$. As the cases $S = 1$ or $2$ are dealt with in the above paragraphs, we can choose $r \neq 3$ if $S\neq 5$. Moreover, if $S=5$ and $T=5S-4$, then $\ord_3 \Delta < 12$, so $\Delta$ is also minimal at $r=3$. By applying Lemma \ref{lem:torsionpoints_generating_component_group} to the point $[3]P=(uv, uv^2)$ of order 2, we have $2 \mid C_r$. So $12 \mid C$. If $B=2$, we have $T=S\pm 4$. There is a prime $r \mid (S \mp 2)(S \pm 1)$ (in fact, $r \mid u$ and $r \nmid 3p$) at which $E$ has bad reduction with reduced equation $y^2+\overline{a_1}xy=x^3+\overline{a_2}x^2$. By applying Lemma \ref{lem:torsionpoints_generating_component_group} to the point $[3]P=(uv, uv^2)$ of order 2, we have $2 \mid C_r$. So $12 \mid C$. \end{proof} \section{$E(\QQ)_\mathrm{tors} \simeq \mathbf{Z}/3\mathbf{Z}$} \label{section:torgp_type_3} In this section, we prove the Main Theorem for the cases when $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/3\mathbf{Z}$. More precisely, we show the following theorem. \begin{theorem}\label{th:torgp_type_3} Suppose that $E(\QQ)_\mathrm{tors}$ is isomorphic to $\mathbf{Z}/3\mathbf{Z}$. Then the order $3 = \# E(\QQ)_\mathrm{tors}$ divides $u_K \cdot C \cdot M \cdot \left( \# \Sh(E/K) \right)^{1/2}$. \end{theorem} \subsection{Tamagawa numbers} Let $E$ be an elliptic curve defined over $\mathbf{Q}$ with a rational torsion point of order 3. We can take a Weierstrass equation for $E$ of the following form:\begin{equation} y^2+axy+by=x^3, \end{equation} with $a,b \in \mathbf{Z}$, $b >0$, and such that there is no prime number $q$ satisfying both $q \mid a$ and $q^3 \mid b$. The discriminant $\Delta$ of $E$ is given by $\Delta={b}^3({a}^3-27b)$, which is the minimal discriminant $\Delta_\mathrm{min}$ for $E$. Let $T=\{P:=(0,0), (0,-b), O \}$ be the rational torsion subgroup of order 3. Suppose first that $b \neq 1$. Then there is a prime $p \mid b$, and Lemma \ref{lem:torsionpoints_generating_component_group} or Tate's algorithm shows $3 \mid C_p$. So we assume $b = 1$ in the sequel. \subsection{Manin constants} We introduce a useful theorem of T. Hadano. \begin{theorem}[\cite{Ha}, Theorem 1.1]\label{th:Hadano} The quotient curve $E':=E/T$ has a rational point of order 3 if and only if $b$ is a cube $t^3$ with $t>0$. Moreover, the curve $E'$ is given by the equation \begin{equation} y^2+(a+6t)xy+(a^2+3at+9t^2)ty=x^3 \end{equation} with discriminant $\Delta' = t^3(a^2+3at+9t^2)^3(a-3t)^3$. \end{theorem} Let $E'$ be the curve $E/T$. Since $b=1=1^3$, Theorem \ref{th:Hadano} says that $E'$ also has a rational point of order 3. Thus we have a `chain' $E \to E' \to E''$ of elliptic curves and isogenies of degree 3. Each isogeny in the above chain is étale because its kernel is isomorphic to $\mathbf{Z}/3\mathbf{Z}$ as a group scheme: each kernel of the isogenies $E \to E'$ and $E' \to E''$ consists of $\mathbf{Q}$-rational points of order $3$. It follows from \cite{Ke}, Proof of Theorem 2, that such a chain in the isogeny class of $E$ must have length at most $4$. However, we can readily check that if there is a chain of length exactly $4$, then it must be identical to the chain $\text{`27a4'} \to \text{`27a3'} \to \text{`27a1'} \to \text{`27a2'}$, and in this case we can check that $3 \mid M$.
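Note that, since $b = 1 = 1^3$, applying Theorem \ref{th:Hadano} with $t = 1$ gives the explicit equation $y^2 + (a+6)xy + (a^2+3a+9)y = x^3$ for $E' = E/T$, with discriminant $\Delta' = (a^2+3a+9)^3(a-3)^3$; the primes dividing $a^2+3a+9$ and $a-3$ which appear in the arguments below come from this factorisation.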
Denote by $\mathcal{C}$ the rational isogeny class of the curve $E$, and let $E_\mathrm{min}$, $E_0$, and $E_1$ be the étale minimal curve, the $X_0(N)$-optimal curve, and the $X_1(N)$-optimal curve in the isogeny class $\mathcal{C}$, respectively (cf. see subsection \ref{subsection:optimalcurves}). Ignoring the above case, we have $E = E_\mathrm{min}$, the étale minimal curve in $\mathcal{C}$, up to isogeny of degree prime to $3$. Thus, by Vatsal's theorem \ref{th:Vatsal}, we have $E = E_\mathrm{min} = E_1$. Since we have a canonical étale isogeny $E_1 \to E_0$ called the Shimura covering (cf. Remark 1.8 in \cite{Va}) having degree divisible by 3, if $E \neq E_0$ then we have $3 \mid M$. So we assume here and thereafter that $E = E_\mathrm{min} = E_1 = E_0$. Thus, by (the contrapositive statement of) Theorem \ref{th:Byeon--Yhee}, either there are at least two distinct primes dividing $a^2+3a+9$ or there is a prime $p$ such that $p \mid (a-3)$ and $p \equiv 1 \pmod{3}$. \subsection{$\left( \# \Sh(E/K) \right)^{1/2}$} Now Theorem \ref{th:torgp_type_3} is reduced to the following proposition. \begin{proposition} Let $E$ be an elliptic curve of conductor $N$ defined by a minimal Weierstrass equation $y^2+ a xy+y=x^3$ with $a \in \mathbf{Z}$. Let $K$ be an imaginary quadratic field satisfying the Heegner hypothesis with $\disc(K) \neq -3$, i.e., $u_K \neq 3$ such that $E(K)$ has rank 1. Suppose that either (i) there are at least two distinct primes dividing $a^2+3a+9$; or (ii) there is a prime $p$ such that $p \mid ( a-3) $ and $p \equiv 1 \pmod{3}$. Then $3$ divides $C \cdot \left( \# \Sh(E/K) \right)^{1/2}$. \end{proposition} \begin{proof} Let $\phi$ be an isogeny defined over $\mathbf{Q}$ of degree 3 from E to the quotient curve $E'$ of E by the torsion subgroup $T=\{P, 2P, O\}$ and $\phi': E' \to E$ be its dual isogeny. Since $\# E(K)[\phi] =3$ and $\# E(K)[\phi'] = 1$, we have $\dfrac{\# E(K)[\phi] }{\# E'(K)[\phi']}=3$. Since $K$ is an imaginary quadratic field, by Theorem 1.2 in \cite{DD}, we have \begin{equation*} \prod_{\nu | \infty}\frac{\int_{E'(K_{\nu})}|\omega'|_{\nu}}{\int_{E(K_{\nu})}|\omega|_{\nu}} =\frac{\int_{E'(\mathbf{C})}|\omega'|}{\int_{E(\mathbf{C})}|\omega|}= 3^{-1}|\phi^*(\omega')/\omega| =1 \text{ or } 3^{-1}. \end{equation*} Assume that $3 \nmid C$, whence $3 \nmid \prod_\mathfrak{q} C_\mathfrak{q}$. By Theorem \ref{th:Cassels}, we have \begin{equation}\label{eq:lowerbound_of_Sel_pi} \dim_{\mathbf{F}_3} \Sel^{\phi}(E/K) \ge \ord_3 \left( \prod_\mathfrak{q} C'_\mathfrak{q} \right). \end{equation} Suppose that there are at least two distinct primes dividing $a^2+3a+9$. Then at least one of them, say $p$, is not $3$. By Hadano's theorem \ref{th:Hadano} and Lemma \ref{lem:torsionpoints_generating_component_group}, we have $3 \mid C'_{p}=C'_{\mathfrak{p}}=C'_{\overline{\mathfrak{p}}}$ and $3 \mid C'_{q}=C'_{\mathfrak{q}}=C'_{\overline{\mathfrak{q}}}$. Thus from \eqref{eq:lowerbound_of_Sel_pi}, we have $\dim_{\mathbf{F}_3} \Sel^{\phi}(E/K) \ge 4$. Suppose that there is a prime $p$ such $p \mid (a-3)$ and $p \equiv 1 \pmod{3}$. Then there is at least one prime $q \neq p$ such that $q \mid (a^2+3a+9)$. By Theorem \ref{th:Hadano} and Lemma \ref{lem:torsionpoints_generating_component_group}, we have $3 \mid C'_{q}=C'_{\mathfrak{q}}=C'_{\bar{\mathfrak{q}}}$. 
Since $E'$ has split multiplicative reduction at $p$ and $3 \mid \ord_p(\Delta')=-\ord_p(j')$, where $\Delta'$ and $j'$ are the discriminant and the $j$-invariant of $E'$ respectively, we have $3 \mid C'_{p}=C'_{\mathfrak{p}}=C'_{\overline{\mathfrak{p}}}$ by \cite{AEC}, Appendix C, Corollary 15.2.1. Thus from \eqref{eq:lowerbound_of_Sel_pi}, we have that $\dim_{\mathbf{F}_3}\Sel^{\phi}(E/K) \geq 4$. From the following short exact sequence of $G_K$-modules $0 \to E[\phi] \to E[3] \xrightarrow{\phi} E'[\phi'] \rightarrow 0$, we have the following long exact sequence: $\cdots \rightarrow H^0(G_K, E'[\phi']) \rightarrow H^1(G_K, E[\phi]) \xrightarrow{\imath} H^1(G_K, E[3]) \rightarrow \cdots$. Since $E'(K)[\phi']=0$, $\imath$ is injective and thus $\dim_{\mathbf{F}_3} \Sel^{3}(E/K) \geq \dim_{\mathbf{F}_3} \Sel^{\phi}(E/K)$. Thus we conclude that for the two cases, \begin{equation}\label{eq:lowerbound_of_Sel_3} \dim_{\mathbf{F}_3} \Sel^{3}(E/K) \geq 4. \end{equation} From the condition that $E(K)$ has rank 1, we have $E(K)/3E(K) \simeq \mathbf{Z}/3\mathbf{Z} \oplus \mathbf{Z}/3\mathbf{Z}$. So the following descent exact sequence $0 \rightarrow E(K)/3E(K) \rightarrow \Sel^{3}(E/K) \rightarrow \Sh(E/K)[3] \rightarrow 0$ and equation \eqref{eq:lowerbound_of_Sel_3} imply that $\dim_{\mathbf{F}_3} \Sh(E/K)[3] \geq 2$, and therefore $3 \mid \left( \# \Sh(E/K)[3] \right)^{1/2}$. \end{proof} \noindent Dongho Byeon\\ Department of Mathematical Sciences, Seoul National University,\\ 1 Gwanak-ro, Gwanak-gu, Seoul, South Korea,\\ E-mail: \url{[email protected]}\\ \noindent Taekyung Kim\\ Department of Mathematical Sciences, Seoul National University,\\ 1 Gwanak-ro, Gwanak-gu, Seoul, South Korea,\\ E-mail: \url{[email protected]}\\ \noindent Donggeon Yhee\\ School of Mathematics and Statistics, University of Sheffield,\\ Room K28, Hicks Building, Hounsfield Road, Sheffield, S3 7RH, United Kingdom,\\ E-mail: \url{[email protected]} \end{document}
Cholesky decomposition In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced /ʃəˈlɛski/ shə-LES-kee) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924.[1] When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.[2] Statement The Cholesky decomposition of a Hermitian positive-definite matrix A, is a decomposition of the form $\mathbf {A} =\mathbf {LL} ^{*},$ where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.[3] The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite. When A is a real matrix (hence symmetric positive-definite), the factorization may be written $\mathbf {A} =\mathbf {LL} ^{\mathsf {T}},$ where L is a real lower triangular matrix with positive diagonal entries.[4][5][6] Positive semidefinite matrices If a Hermitian matrix A is only positive semidefinite, instead of positive definite, then it still has a decomposition of the form A = LL* where the diagonal entries of L are allowed to be zero.[7] The decomposition need not be unique, for example: ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}=\mathbf {L} \mathbf {L} ^{*},\quad \quad \mathbf {L} ={\begin{bmatrix}0&0\\\cos \theta &\sin \theta \end{bmatrix}}.$ However, if the rank of A is r, then there is a unique lower triangular L with exactly r positive diagonal elements and n−r columns containing all zeroes.[8] Alternatively, the decomposition can be made unique when a pivoting choice is fixed. Formally, if A is an n × n positive semidefinite matrix of rank r, then there is at least one permutation matrix P such that P A PT has a unique decomposition of the form P A PT = L L* with $\mathbf {L} ={\begin{bmatrix}\mathbf {L} _{1}&0\\\mathbf {L} _{2}&0\end{bmatrix}}$, where L1 is an r × r lower triangular matrix with positive diagonal.[9] LDL decomposition A closely related variant of the classical Cholesky decomposition is the LDL decomposition, $\mathbf {A} =\mathbf {LDL} ^{*},$ where L is a lower unit triangular (unitriangular) matrix, and D is a diagonal matrix. That is, the diagonal elements of L are required to be 1 at the cost of introducing an additional diagonal matrix D in the decomposition. The main advantage is that the LDL decomposition can be computed and used with essentially the same algorithms, but avoids extracting square roots.[10] For this reason, the LDL decomposition is often called the square-root-free Cholesky decomposition. For real matrices, the factorization has the form A = LDLT and is often referred to as LDLT decomposition (or LDLT decomposition, or LDL′). It is reminiscent of the eigendecomposition of real symmetric matrices, A = QΛQT, but is quite different in practice because Λ and D are not similar matrices. 
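For a concrete illustration of the statement A = LL* above, here is a minimal sketch in NumPy (it reuses the symmetric positive-definite matrix from the Example section below; numpy.linalg.cholesky is the routine mentioned later among the library implementations and returns the lower-triangular factor):

import numpy as np

A = np.array([[  4.,  12., -16.],
              [ 12.,  37., -43.],
              [-16., -43.,  98.]])   # symmetric positive definite
L = np.linalg.cholesky(A)            # lower-triangular Cholesky factor
assert np.allclose(L @ L.T, A)       # A is recovered as L L^T
# L equals [[2, 0, 0], [6, 1, 0], [-8, 5, 3]], matching the Example section.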
The LDL decomposition is related to the classical Cholesky decomposition of the form LL* as follows: $\mathbf {A} =\mathbf {LDL} ^{*}=\mathbf {L} \mathbf {D} ^{1/2}\left(\mathbf {D} ^{1/2}\right)^{*}\mathbf {L} ^{*}=\mathbf {L} \mathbf {D} ^{1/2}\left(\mathbf {L} \mathbf {D} ^{1/2}\right)^{*}.$ Conversely, given the classical Cholesky decomposition $\mathbf {A} =\mathbf {C} \mathbf {C} ^{*}$ of a positive definite matrix, if S is a diagonal matrix that contains the main diagonal of $\mathbf {C} $, then A can be decomposed as $\mathbf {L} \mathbf {D} \mathbf {L} ^{*}$ where $\mathbf {L} =\mathbf {C} \mathbf {S} ^{-1}$ (this rescales each column to make diagonal elements 1), $\mathbf {D} =\mathbf {S} ^{2}.$ If A is positive definite then the diagonal elements of D are all positive. For positive semidefinite A, an $\mathbf {L} \mathbf {D} \mathbf {L} ^{*}$ decomposition exists where the number of non-zero elements on the diagonal D is exactly the rank of A.[11] Some indefinite matrices for which no Cholesky decomposition exists have an LDL decomposition with negative entries in D: it suffices that the first n−1 leading principal minors of A are non-singular.[12] Example Here is the Cholesky decomposition of a symmetric real matrix: ${\begin{aligned}{\begin{pmatrix}4&12&-16\\12&37&-43\\-16&-43&98\\\end{pmatrix}}={\begin{pmatrix}2&0&0\\6&1&0\\-8&5&3\\\end{pmatrix}}{\begin{pmatrix}2&6&-8\\0&1&5\\0&0&3\\\end{pmatrix}}.\end{aligned}}$ And here is its LDLT decomposition: ${\begin{aligned}{\begin{pmatrix}4&12&-16\\12&37&-43\\-16&-43&98\\\end{pmatrix}}&={\begin{pmatrix}1&0&0\\3&1&0\\-4&5&1\\\end{pmatrix}}{\begin{pmatrix}4&0&0\\0&1&0\\0&0&9\\\end{pmatrix}}{\begin{pmatrix}1&3&-4\\0&1&5\\0&0&1\\\end{pmatrix}}.\end{aligned}}$ Applications The Cholesky decomposition is mainly used for the numerical solution of linear equations $\mathbf {Ax} =\mathbf {b} $. If A is symmetric and positive definite, then we can solve $\mathbf {Ax} =\mathbf {b} $ by first computing the Cholesky decomposition $\mathbf {A} =\mathbf {LL} ^{\mathrm {*} }$, then solving $\mathbf {Ly} =\mathbf {b} $ for y by forward substitution, and finally solving $\mathbf {L^{*}x} =\mathbf {y} $ for x by back substitution. An alternative way to eliminate taking square roots in the $\mathbf {LL} ^{\mathrm {*} }$ decomposition is to compute the LDL decomposition $\mathbf {A} =\mathbf {LDL} ^{\mathrm {*} }$, then solving $\mathbf {Ly} =\mathbf {b} $ for y, and finally solving $\mathbf {DL} ^{\mathrm {*} }\mathbf {x} =\mathbf {y} $. For linear systems that can be put into symmetric form, the Cholesky decomposition (or its LDL variant) is the method of choice, for superior efficiency and numerical stability. Compared to the LU decomposition, it is roughly twice as efficient.[2] Linear least squares Systems of the form Ax = b with A symmetric and positive definite arise quite often in applications. For instance, the normal equations in linear least squares problems are of this form. It may also happen that matrix A comes from an energy functional, which must be positive from physical considerations; this happens frequently in the numerical solution of partial differential equations. Non-linear optimization Non-linear multi-variate functions may be minimized over their parameters using variants of Newton's method called quasi-Newton methods. 
At iteration k, the search steps in a direction $p_{k}$ defined by solving $B_{k}p_{k}=-g_{k}$ for $p_{k}$, where $p_{k}$ is the step direction, $g_{k}$ is the gradient, and $B_{k}$ is an approximation to the Hessian matrix formed by repeating rank-1 updates at each iteration. Two well-known update formulas are called Davidon–Fletcher–Powell (DFP) and Broyden–Fletcher–Goldfarb–Shanno (BFGS). Loss of the positive-definite condition through round-off error is avoided if rather than updating an approximation to the inverse of the Hessian, one updates the Cholesky decomposition of an approximation of the Hessian matrix itself .[13] Monte Carlo simulation The Cholesky decomposition is commonly used in the Monte Carlo method for simulating systems with multiple correlated variables. The covariance matrix is decomposed to give the lower-triangular L. Applying this to a vector of uncorrelated samples u produces a sample vector Lu with the covariance properties of the system being modeled.[14] The following simplified example shows the economy one gets from the Cholesky decomposition: suppose the goal is to generate two correlated normal variables $x_{1}$ and $x_{2}$ with given correlation coefficient $\rho $. To accomplish that, it is necessary to first generate two uncorrelated Gaussian random variables $z_{1}$ and $z_{2}$, which can be done using a Box–Muller transform. Given the required correlation coefficient $\rho $, the correlated normal variables can be obtained via the transformations $x_{1}=z_{1}$ and $ x_{2}=\rho z_{1}+{\sqrt {1-\rho ^{2}}}z_{2}$. Kalman filters Unscented Kalman filters commonly use the Cholesky decomposition to choose a set of so-called sigma points. The Kalman filter tracks the average state of a system as a vector x of length N and covariance as an N × N matrix P. The matrix P is always positive semi-definite and can be decomposed into LLT. The columns of L can be added and subtracted from the mean x to form a set of 2N vectors called sigma points. These sigma points completely capture the mean and covariance of the system state. Matrix inversion The explicit inverse of a Hermitian matrix can be computed by Cholesky decomposition, in a manner similar to solving linear systems, using $n^{3}$ operations (${\tfrac {1}{2}}n^{3}$ multiplications).[10] The entire inversion can even be efficiently performed in-place. A non-Hermitian matrix B can also be inverted using the following identity, where BB* will always be Hermitian: $\mathbf {B} ^{-1}=\mathbf {B} ^{*}(\mathbf {BB} ^{*})^{-1}.$ Computation There are various methods for calculating the Cholesky decomposition. The computational complexity of commonly used algorithms is O(n3) in general. The algorithms described below all involve about (1/3)n3 FLOPs (n3/6 multiplications and the same number of additions) for real flavors and (4/3)n3 FLOPs for complex flavors,[15] where n is the size of the matrix A. Hence, they have half the cost of the LU decomposition, which uses 2n3/3 FLOPs (see Trefethen and Bau 1997). Which of the algorithms below is faster depends on the details of the implementation. Generally, the first algorithm will be slightly slower because it accesses the data in a less regular manner. The Cholesky algorithm The Cholesky algorithm, used to calculate the decomposition matrix L, is a modified version of Gaussian elimination. The recursive algorithm starts with i := 1 and A(1) := A. 
At step i, the matrix A(i) has the following form: $\mathbf {A} ^{(i)}={\begin{pmatrix}\mathbf {I} _{i-1}&0&0\\0&a_{i,i}&\mathbf {b} _{i}^{*}\\0&\mathbf {b} _{i}&\mathbf {B} ^{(i)}\end{pmatrix}},$ where Ii−1 denotes the identity matrix of dimension i − 1. If we now define the matrix Li by $\mathbf {L} _{i}:={\begin{pmatrix}\mathbf {I} _{i-1}&0&0\\0&{\sqrt {a_{i,i}}}&0\\0&{\frac {1}{\sqrt {a_{i,i}}}}\mathbf {b} _{i}&\mathbf {I} _{n-i}\end{pmatrix}},$ (note that ai,i > 0 since A(i) is positive definite), then we can write A(i) as $\mathbf {A} ^{(i)}=\mathbf {L} _{i}\mathbf {A} ^{(i+1)}\mathbf {L} _{i}^{*}$ where $\mathbf {A} ^{(i+1)}={\begin{pmatrix}\mathbf {I} _{i-1}&0&0\\0&1&0\\0&0&\mathbf {B} ^{(i)}-{\frac {1}{a_{i,i}}}\mathbf {b} _{i}\mathbf {b} _{i}^{*}\end{pmatrix}}.$ Note that bi b*i is an outer product, therefore this algorithm is called the outer-product version in (Golub & Van Loan). We repeat this for i from 1 to n. After n steps, we get A(n+1) = I. Hence, the lower triangular matrix L we are looking for is calculated as $\mathbf {L} :=\mathbf {L} _{1}\mathbf {L} _{2}\dots \mathbf {L} _{n}.$ The Cholesky–Banachiewicz and Cholesky–Crout algorithms If we write out the equation ${\begin{aligned}\mathbf {A} =\mathbf {LL} ^{T}&={\begin{pmatrix}L_{11}&0&0\\L_{21}&L_{22}&0\\L_{31}&L_{32}&L_{33}\\\end{pmatrix}}{\begin{pmatrix}L_{11}&L_{21}&L_{31}\\0&L_{22}&L_{32}\\0&0&L_{33}\end{pmatrix}}\\[8pt]&={\begin{pmatrix}L_{11}^{2}&&({\text{symmetric}})\\L_{21}L_{11}&L_{21}^{2}+L_{22}^{2}&\\L_{31}L_{11}&L_{31}L_{21}+L_{32}L_{22}&L_{31}^{2}+L_{32}^{2}+L_{33}^{2}\end{pmatrix}},\end{aligned}}$ we obtain the following: ${\begin{aligned}\mathbf {L} ={\begin{pmatrix}{\sqrt {A_{11}}}&0&0\\A_{21}/L_{11}&{\sqrt {A_{22}-L_{21}^{2}}}&0\\A_{31}/L_{11}&\left(A_{32}-L_{31}L_{21}\right)/L_{22}&{\sqrt {A_{33}-L_{31}^{2}-L_{32}^{2}}}\end{pmatrix}}\end{aligned}}$ and therefore the following formulas for the entries of L: $L_{j,j}=(\pm ){\sqrt {A_{j,j}-\sum _{k=1}^{j-1}L_{j,k}^{2}}},$ $L_{i,j}={\frac {1}{L_{j,j}}}\left(A_{i,j}-\sum _{k=1}^{j-1}L_{i,k}L_{j,k}\right)\quad {\text{for }}i>j.$ For complex and real matrices, inconsequential arbitrary sign changes of diagonal and associated off-diagonal elements are allowed. The expression under the square root is always positive if A is real and positive-definite. For complex Hermitian matrix, the following formula applies: $L_{j,j}={\sqrt {A_{j,j}-\sum _{k=1}^{j-1}L_{j,k}^{*}L_{j,k}}},$ $L_{i,j}={\frac {1}{L_{j,j}}}\left(A_{i,j}-\sum _{k=1}^{j-1}L_{j,k}^{*}L_{i,k}\right)\quad {\text{for }}i>j.$ So we can compute the (i, j) entry if we know the entries to the left and above. The computation is usually arranged in either of the following orders: • The Cholesky–Banachiewicz algorithm starts from the upper left corner of the matrix L and proceeds to calculate the matrix row by row. for (i = 0; i < dimensionSize; i++) { for (j = 0; j <= i; j++) { float sum = 0; for (k = 0; k < j; k++) sum += L[i][k] * L[j][k]; if (i == j) L[i][j] = sqrt(A[i][i] - sum); else L[i][j] = (1.0 / L[j][j] * (A[i][j] - sum)); } } The above algorithm can be succinctly expressed as combining a dot product and matrix multiplication in vectorized programming languages such as Fortran as the following, do i = 1, size(A,1) L(i,i) = sqrt(A(i,i) - dot_product(L(i,1:i-1), L(i,1:i-1))) L(i+1:,i) = (A(i+1:,i) - matmul(conjg(L(i,1:i-1)), L(i+1:,1:i-1))) / L(i,i) end do where conjg refers to complex conjugate of the elements.
• The Cholesky–Crout algorithm starts from the upper left corner of the matrix L and proceeds to calculate the matrix column by column. for (j = 0; j < dimensionSize; j++) { float sum = 0; for (k = 0; k < j; k++) { sum += L[j][k] * L[j][k]; } L[j][j] = sqrt(A[j][j] - sum); for (i = j + 1; i < dimensionSize; i++) { sum = 0; for (k = 0; k < j; k++) { sum += L[i][k] * L[j][k]; } L[i][j] = (1.0 / L[j][j] * (A[i][j] - sum)); } } The above algorithm can be succinctly expressed as combining a dot product and matrix multiplication in vectorized programming languages such as Fortran as the following, do i = 1, size(A,1) L(i,i) = sqrt(A(i,i) - dot_product(L(1:i-1,i), L(1:i-1,i))) L(i,i+1:) = (A(i,i+1:) - matmul(conjg(L(1:i-1,i)), L(1:i-1,i+1:))) / L(i,i) end do where conjg refers to complex conjugate of the elements. Either pattern of access allows the entire computation to be performed in-place if desired. Stability of the computation Suppose that we want to solve a well-conditioned system of linear equations. If the LU decomposition is used, then the algorithm is unstable unless we use some sort of pivoting strategy. In the latter case, the error depends on the so-called growth factor of the matrix, which is usually (but not always) small. Now, suppose that the Cholesky decomposition is applicable. As mentioned above, the algorithm will be twice as fast. Furthermore, no pivoting is necessary, and the error will always be small. Specifically, if we want to solve Ax = b, and y denotes the computed solution, then y solves the perturbed system (A + E)y = b, where $\|\mathbf {E} \|_{2}\leq c_{n}\varepsilon \|\mathbf {A} \|_{2}.$ Here ||·||2 is the matrix 2-norm, cn is a small constant depending on n, and ε denotes the unit round-off. One concern with the Cholesky decomposition to be aware of is the use of square roots. If the matrix being factorized is positive definite as required, the numbers under the square roots are always positive in exact arithmetic. Unfortunately, the numbers can become negative because of round-off errors, in which case the algorithm cannot continue. However, this can only happen if the matrix is very ill-conditioned. One way to address this is to add a diagonal correction matrix to the matrix being decomposed in an attempt to promote the positive-definiteness.[16] While this might lessen the accuracy of the decomposition, it can be very favorable for other reasons; for example, when performing Newton's method in optimization, adding a diagonal matrix can improve stability when far from the optimum. LDL decomposition An alternative form, eliminating the need to take square roots when A is symmetric, is the symmetric indefinite factorization[17] ${\begin{aligned}\mathbf {A} =\mathbf {LDL} ^{\mathrm {T} }&={\begin{pmatrix}1&0&0\\L_{21}&1&0\\L_{31}&L_{32}&1\\\end{pmatrix}}{\begin{pmatrix}D_{1}&0&0\\0&D_{2}&0\\0&0&D_{3}\\\end{pmatrix}}{\begin{pmatrix}1&L_{21}&L_{31}\\0&1&L_{32}\\0&0&1\\\end{pmatrix}}\\[8pt]&={\begin{pmatrix}D_{1}&&(\mathrm {symmetric} )\\L_{21}D_{1}&L_{21}^{2}D_{1}+D_{2}&\\L_{31}D_{1}&L_{31}L_{21}D_{1}+L_{32}D_{2}&L_{31}^{2}D_{1}+L_{32}^{2}D_{2}+D_{3}.\end{pmatrix}}.\end{aligned}}$ The following recursive relations apply for the entries of D and L: $D_{j}=A_{jj}-\sum _{k=1}^{j-1}L_{jk}^{2}D_{k},$ $L_{ij}={\frac {1}{D_{j}}}\left(A_{ij}-\sum _{k=1}^{j-1}L_{ik}L_{jk}D_{k}\right)\quad {\text{for }}i>j.$ This works as long as the generated diagonal elements in D stay non-zero. The decomposition is then unique. D and L are real if A is real. 
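A minimal sketch of the real-valued recurrences just given, in Python/NumPy (it assumes a symmetric input whose generated diagonal entries D[j] stay non-zero, exactly as required above, and attempts no pivoting; the function name ldl_decompose is illustrative only):

import numpy as np

def ldl_decompose(A):
    # Square-root-free LDL^T factorization following the recurrences above.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)       # unit lower-triangular factor
    D = np.zeros(n)     # diagonal entries of D
    for j in range(n):
        D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D

A = np.array([[4., 12., -16.], [12., 37., -43.], [-16., -43., 98.]])
L, D = ldl_decompose(A)          # L = [[1,0,0],[3,1,0],[-4,5,1]], D = [4, 1, 9]
assert np.allclose(L @ np.diag(D) @ L.T, A)

The sketch only mirrors the formulas; library routines with pivoting should be preferred in practice.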
For complex Hermitian matrix A, the following formula applies: $D_{j}=A_{jj}-\sum _{k=1}^{j-1}L_{jk}L_{jk}^{*}D_{k},$ $L_{ij}={\frac {1}{D_{j}}}\left(A_{ij}-\sum _{k=1}^{j-1}L_{ik}L_{jk}^{*}D_{k}\right)\quad {\text{for }}i>j.$ Again, the pattern of access allows the entire computation to be performed in-place if desired. Block variant When used on indefinite matrices, the LDL* factorization is known to be unstable without careful pivoting;[18] specifically, the elements of the factorization can grow arbitrarily. A possible improvement is to perform the factorization on block sub-matrices, commonly 2 × 2:[19] ${\begin{aligned}\mathbf {A} =\mathbf {LDL} ^{\mathrm {T} }&={\begin{pmatrix}\mathbf {I} &0&0\\\mathbf {L} _{21}&\mathbf {I} &0\\\mathbf {L} _{31}&\mathbf {L} _{32}&\mathbf {I} \\\end{pmatrix}}{\begin{pmatrix}\mathbf {D} _{1}&0&0\\0&\mathbf {D} _{2}&0\\0&0&\mathbf {D} _{3}\\\end{pmatrix}}{\begin{pmatrix}\mathbf {I} &\mathbf {L} _{21}^{\mathrm {T} }&\mathbf {L} _{31}^{\mathrm {T} }\\0&\mathbf {I} &\mathbf {L} _{32}^{\mathrm {T} }\\0&0&\mathbf {I} \\\end{pmatrix}}\\[8pt]&={\begin{pmatrix}\mathbf {D} _{1}&&(\mathrm {symmetric} )\\\mathbf {L} _{21}\mathbf {D} _{1}&\mathbf {L} _{21}\mathbf {D} _{1}\mathbf {L} _{21}^{\mathrm {T} }+\mathbf {D} _{2}&\\\mathbf {L} _{31}\mathbf {D} _{1}&\mathbf {L} _{31}\mathbf {D} _{1}\mathbf {L} _{21}^{\mathrm {T} }+\mathbf {L} _{32}\mathbf {D} _{2}&\mathbf {L} _{31}\mathbf {D} _{1}\mathbf {L} _{31}^{\mathrm {T} }+\mathbf {L} _{32}\mathbf {D} _{2}\mathbf {L} _{32}^{\mathrm {T} }+\mathbf {D} _{3}\end{pmatrix}},\end{aligned}}$ where every element in the matrices above is a square submatrix. From this, these analogous recursive relations follow: $\mathbf {D} _{j}=\mathbf {A} _{jj}-\sum _{k=1}^{j-1}\mathbf {L} _{jk}\mathbf {D} _{k}\mathbf {L} _{jk}^{\mathrm {T} },$ $\mathbf {L} _{ij}=\left(\mathbf {A} _{ij}-\sum _{k=1}^{j-1}\mathbf {L} _{ik}\mathbf {D} _{k}\mathbf {L} _{jk}^{\mathrm {T} }\right)\mathbf {D} _{j}^{-1}.$ This involves matrix products and explicit inversion, thus limiting the practical block size. Updating the decomposition A task that often arises in practice is that one needs to update a Cholesky decomposition. In more details, one has already computed the Cholesky decomposition $\mathbf {A} =\mathbf {L} \mathbf {L} ^{*}$ of some matrix $\mathbf {A} $, then one changes the matrix $\mathbf {A} $ in some way into another matrix, say ${\tilde {\mathbf {A} }}$, and one wants to compute the Cholesky decomposition of the updated matrix: ${\tilde {\mathbf {A} }}={\tilde {\mathbf {L} }}{\tilde {\mathbf {L} }}^{*}$. The question is now whether one can use the Cholesky decomposition of $\mathbf {A} $ that was computed before to compute the Cholesky decomposition of ${\tilde {\mathbf {A} }}$. Rank-one update The specific case, where the updated matrix ${\tilde {\mathbf {A} }}$ is related to the matrix $\mathbf {A} $ by ${\tilde {\mathbf {A} }}=\mathbf {A} +\mathbf {x} \mathbf {x} ^{*}$, is known as a rank-one update. Here is a function[20] written in Matlab syntax that realizes a rank-one update: function [L] = cholupdate(L, x) n = length(x); for k = 1:n r = sqrt(L(k, k)^2 + x(k)^2); c = r / L(k, k); s = x(k) / L(k, k); L(k, k) = r; if k < n L((k+1):n, k) = (L((k+1):n, k) + s * x((k+1):n)) / c; x((k+1):n) = c * x((k+1):n) - s * L((k+1):n, k); end end end A rank-n update is one where for a matrix $\mathbf {M} $ one updates the decomposition such that ${\tilde {\mathbf {A} }}=\mathbf {A} +\mathbf {M} \mathbf {M} ^{*}$. 
This can be achieved by successively performing rank-one updates for each of the columns of $\mathbf {M} $. Rank-one downdate A rank-one downdate is similar to a rank-one update, except that the addition is replaced by subtraction: ${\tilde {\mathbf {A} }}=\mathbf {A} -\mathbf {x} \mathbf {x} ^{*}$. This only works if the new matrix ${\tilde {\mathbf {A} }}$ is still positive definite. The code for the rank-one update shown above can easily be adapted to do a rank-one downdate: one merely needs to replace the two additions in the assignment to r and L((k+1):n, k) by subtractions. Adding and removing rows and columns If we have a symmetric and positive definite matrix $\mathbf {A} $ represented in block form as $\mathbf {A} ={\begin{pmatrix}\mathbf {A} _{11}&\mathbf {A} _{13}\\\mathbf {A} _{13}^{\mathrm {T} }&\mathbf {A} _{33}\\\end{pmatrix}}$ and its upper Cholesky factor $\mathbf {L} ={\begin{pmatrix}\mathbf {L} _{11}&\mathbf {L} _{13}\\0&\mathbf {L} _{33}\\\end{pmatrix}},$ then for a new matrix ${\tilde {\mathbf {A} }}$, which is the same as $\mathbf {A} $ but with the insertion of new rows and columns, ${\begin{aligned}{\tilde {\mathbf {A} }}&={\begin{pmatrix}\mathbf {A} _{11}&\mathbf {A} _{12}&\mathbf {A} _{13}\\\mathbf {A} _{12}^{\mathrm {T} }&\mathbf {A} _{22}&\mathbf {A} _{23}\\\mathbf {A} _{13}^{\mathrm {T} }&\mathbf {A} _{23}^{\mathrm {T} }&\mathbf {A} _{33}\\\end{pmatrix}}\end{aligned}}$ we are interested in finding the Cholesky factorization of ${\tilde {\mathbf {A} }}$, which we call ${\tilde {\mathbf {S} }}$, without directly computing the entire decomposition. ${\begin{aligned}{\tilde {\mathbf {S} }}&={\begin{pmatrix}\mathbf {S} _{11}&\mathbf {S} _{12}&\mathbf {S} _{13}\\0&\mathbf {S} _{22}&\mathbf {S} _{23}\\0&0&\mathbf {S} _{33}\\\end{pmatrix}}.\end{aligned}}$ Writing $\mathbf {A} \setminus \mathbf {b} $ for the solution of $\mathbf {A} \mathbf {x} =\mathbf {b} $, which can be found easily for triangular matrices, and ${\text{chol}}(\mathbf {M} )$ for the Cholesky decomposition of $\mathbf {M} $, the following relations can be found: ${\begin{aligned}\mathbf {S} _{11}&=\mathbf {L} _{11},\\\mathbf {S} _{12}&=\mathbf {L} _{11}^{\mathrm {T} }\setminus \mathbf {A} _{12},\\\mathbf {S} _{13}&=\mathbf {L} _{13},\\\mathbf {S} _{22}&=\mathrm {chol} \left(\mathbf {A} _{22}-\mathbf {S} _{12}^{\mathrm {T} }\mathbf {S} _{12}\right),\\\mathbf {S} _{23}&=\mathbf {S} _{22}^{\mathrm {T} }\setminus \left(\mathbf {A} _{23}-\mathbf {S} _{12}^{\mathrm {T} }\mathbf {S} _{13}\right),\\\mathbf {S} _{33}&=\mathrm {chol} \left(\mathbf {L} _{33}^{\mathrm {T} }\mathbf {L} _{33}-\mathbf {S} _{23}^{\mathrm {T} }\mathbf {S} _{23}\right).\end{aligned}}$ These formulas may be used to determine the Cholesky factor after the insertion of rows or columns in any position, if we set the row and column dimensions appropriately (including to zero). 
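The Matlab rank-one update routine shown in the Rank-one update section ports directly to Python/NumPy; the following is only a hedged sketch of that port (the name chol_update is illustrative, inputs are copied rather than overwritten in place, and no attention is paid to numerical robustness):

import numpy as np

def chol_update(L, x):
    # Given lower-triangular L with A = L L^T, return the Cholesky factor of
    # A + x x^T, following the Matlab sketch above step by step.
    L = L.copy()
    x = np.array(x, dtype=float)
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])              # sqrt(L[k,k]^2 + x[k]^2)
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

A = np.array([[4., 12., -16.], [12., 37., -43.], [-16., -43., 98.]])
x = np.array([1., 2., 3.])
L1 = chol_update(np.linalg.cholesky(A), x)
assert np.allclose(L1 @ L1.T, A + np.outer(x, x))   # agrees with refactorizing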
The inverse problem, when we have ${\begin{aligned}{\tilde {\mathbf {A} }}&={\begin{pmatrix}\mathbf {A} _{11}&\mathbf {A} _{12}&\mathbf {A} _{13}\\\mathbf {A} _{12}^{\mathrm {T} }&\mathbf {A} _{22}&\mathbf {A} _{23}\\\mathbf {A} _{13}^{\mathrm {T} }&\mathbf {A} _{23}^{\mathrm {T} }&\mathbf {A} _{33}\\\end{pmatrix}}\end{aligned}}$ with known Cholesky decomposition ${\begin{aligned}{\tilde {\mathbf {S} }}&={\begin{pmatrix}\mathbf {S} _{11}&\mathbf {S} _{12}&\mathbf {S} _{13}\\0&\mathbf {S} _{22}&\mathbf {S} _{23}\\0&0&\mathbf {S} _{33}\\\end{pmatrix}}\end{aligned}}$ and wish to determine the Cholesky factor ${\begin{aligned}\mathbf {L} &={\begin{pmatrix}\mathbf {L} _{11}&\mathbf {L} _{13}\\0&\mathbf {L} _{33}\\\end{pmatrix}}\end{aligned}}$ of the matrix $\mathbf {A} $ with rows and columns removed, ${\begin{aligned}\mathbf {A} &={\begin{pmatrix}\mathbf {A} _{11}&\mathbf {A} _{13}\\\mathbf {A} _{13}^{\mathrm {T} }&\mathbf {A} _{33}\\\end{pmatrix}},\end{aligned}}$ yields the following rules: ${\begin{aligned}\mathbf {L} _{11}&=\mathbf {S} _{11},\\\mathbf {L} _{13}&=\mathbf {S} _{13},\\\mathbf {L} _{33}&=\mathrm {chol} \left(\mathbf {S} _{33}^{\mathrm {T} }\mathbf {S} _{33}+\mathbf {S} _{23}^{\mathrm {T} }\mathbf {S} _{23}\right).\end{aligned}}$ Notice that the equations above that involve finding the Cholesky decomposition of a new matrix are all of the form ${\tilde {\mathbf {A} }}=\mathbf {A} \pm \mathbf {x} \mathbf {x} ^{*}$, which allows them to be efficiently calculated using the update and downdate procedures detailed in the previous section.[21] Proof for positive semi-definite matrices Proof by limiting argument The above algorithms show that every positive definite matrix $\mathbf {A} $ has a Cholesky decomposition. This result can be extended to the positive semi-definite case by a limiting argument. The argument is not fully constructive, i.e., it gives no explicit numerical algorithms for computing Cholesky factors. If $\mathbf {A} $ is an $n\times n$ positive semi-definite matrix, then the sequence $ \left(\mathbf {A} _{k}\right)_{k}:=\left(\mathbf {A} +{\frac {1}{k}}\mathbf {I} _{n}\right)_{k}$ consists of positive definite matrices. (This is an immediate consequence of, for example, the spectral mapping theorem for the polynomial functional calculus.) Also, $\mathbf {A} _{k}\rightarrow \mathbf {A} \quad {\text{for}}\quad k\rightarrow \infty $ in operator norm. From the positive definite case, each $\mathbf {A} _{k}$ has Cholesky decomposition $\mathbf {A} _{k}=\mathbf {L} _{k}\mathbf {L} _{k}^{*}$. By property of the operator norm, $\|\mathbf {L} _{k}\|^{2}\leq \|\mathbf {L} _{k}\mathbf {L} _{k}^{*}\|=\|\mathbf {A} _{k}\|\,.$ The $\leq $ holds because $M_{n}(\mathbb {C} )$ equipped with the operator norm is a C* algebra. So $\left(\mathbf {L} _{k}\right)_{k}$ is a bounded set in the Banach space of operators, therefore relatively compact (because the underlying vector space is finite-dimensional). Consequently, it has a convergent subsequence, also denoted by $\left(\mathbf {L} _{k}\right)_{k}$, with limit $\mathbf {L} $. It can be easily checked that this $\mathbf {L} $ has the desired properties, i.e. 
$\mathbf {A} =\mathbf {L} \mathbf {L} ^{*}$, and $\mathbf {L} $ is lower triangular with non-negative diagonal entries: for all $x$ and $y$, $\langle \mathbf {A} x,y\rangle =\left\langle \lim \mathbf {A} _{k}x,y\right\rangle =\langle \lim \mathbf {L} _{k}\mathbf {L} _{k}^{*}x,y\rangle =\langle \mathbf {L} \mathbf {L} ^{*}x,y\rangle \,.$ Therefore, $\mathbf {A} =\mathbf {L} \mathbf {L} ^{*}$. Because the underlying vector space is finite-dimensional, all topologies on the space of operators are equivalent. So $\left(\mathbf {L} _{k}\right)_{k}$ tends to $\mathbf {L} $ in norm means $\left(\mathbf {L} _{k}\right)_{k}$ tends to $\mathbf {L} $ entrywise. This in turn implies that, since each $\mathbf {L} _{k}$ is lower triangular with non-negative diagonal entries, $\mathbf {L} $ is also. Proof by QR decomposition Let $\mathbf {A} $ be a positive semi-definite Hermitian matrix. Then it can be written as a product of its square root matrix, $\mathbf {A} =\mathbf {B} \mathbf {B} ^{*}$. Now QR decomposition can be applied to $\mathbf {B} ^{*}$, resulting in $\mathbf {B} ^{*}=\mathbf {Q} \mathbf {R} $ , where $\mathbf {Q} $ is unitary and $\mathbf {R} $ is upper triangular. Inserting the decomposition into the original equality yields $A=\mathbf {B} \mathbf {B} ^{*}=(\mathbf {QR} )^{*}\mathbf {QR} =\mathbf {R} ^{*}\mathbf {Q} ^{*}\mathbf {QR} =\mathbf {R} ^{*}\mathbf {R} $. Setting $\mathbf {L} =\mathbf {R} ^{*}$ completes the proof. Generalization The Cholesky factorization can be generalized to (not necessarily finite) matrices with operator entries. Let $\{{\mathcal {H}}_{n}\}$ be a sequence of Hilbert spaces. Consider the operator matrix $\mathbf {A} ={\begin{bmatrix}\mathbf {A} _{11}&\mathbf {A} _{12}&\mathbf {A} _{13}&\;\\\mathbf {A} _{12}^{*}&\mathbf {A} _{22}&\mathbf {A} _{23}&\;\\\mathbf {A} _{13}^{*}&\mathbf {A} _{23}^{*}&\mathbf {A} _{33}&\;\\\;&\;&\;&\ddots \end{bmatrix}}$ acting on the direct sum ${\mathcal {H}}=\bigoplus _{n}{\mathcal {H}}_{n},$ where each $\mathbf {A} _{ij}:{\mathcal {H}}_{j}\rightarrow {\mathcal {H}}_{i}$ is a bounded operator. If A is positive (semidefinite) in the sense that for all finite k and for any $h\in \bigoplus _{n=1}^{k}{\mathcal {H}}_{k},$ we have $\langle h,\mathbf {A} h\rangle \geq 0$, then there exists a lower triangular operator matrix L such that A = LL*. One can also take the diagonal entries of L to be positive. Implementations in programming libraries • C programming language: the GNU Scientific Library provides several implementations of Cholesky decomposition. • Maxima computer algebra system: function cholesky computes Cholesky decomposition. • GNU Octave numerical computations system provides several functions to calculate, update, and apply a Cholesky decomposition. • The LAPACK library provides a high performance implementation of the Cholesky decomposition that can be accessed from Fortran, C and most languages. • In Python, the function cholesky from the numpy.linalg module performs Cholesky decomposition. • In Matlab, the chol function gives the Cholesky decomposition. Note that chol uses the upper triangular factor of the input matrix by default, i.e. it computes $A=R^{*}R$ where $R$ is upper triangular. A flag can be passed to use the lower triangular factor instead. • In R, the chol function gives the Cholesky decomposition. • In Julia, the cholesky function from the LinearAlgebra standard library gives the Cholesky decomposition. • In Mathematica, the function "CholeskyDecomposition" can be applied to a matrix. 
• In C++, multiple linear algebra libraries support this decomposition: • The Armadillo (C++ library) supplies the command chol to perform Cholesky decomposition. • The Eigen library supplies Cholesky factorizations for both sparse and dense matrices. • In the ROOT package, the TDecompChol class is available. • In Analytica, the function Decompose gives the Cholesky decomposition. • The Apache Commons Math library has an implementation which can be used in Java, Scala and any other JVM language. See also • Cycle rank • Incomplete Cholesky factorization • Matrix decomposition • Minimum degree algorithm • Square root of a matrix • Sylvester's law of inertia • Symbolic Cholesky decomposition Notes 1. Benoit (1924). "Note sur une méthode de résolution des équations normales provenant de l'application de la méthode des moindres carrés à un système d'équations linéaires en nombre inférieur à celui des inconnues (Procédé du Commandant Cholesky)". Bulletin Géodésique (in French). 2: 66–67. doi:10.1007/BF03031308. 2. Press, William H.; Saul A. Teukolsky; William T. Vetterling; Brian P. Flannery (1992). Numerical Recipes in C: The Art of Scientific Computing (second ed.). Cambridge University Press. p. 994. ISBN 0-521-43108-5. Retrieved 2009-01-28. 3. Golub & Van Loan (1996, p. 143), Horn & Johnson (1985, p. 407), Trefethen & Bau (1997, p. 174). 4. Horn & Johnson (1985, p. 407). 5. "matrices - Diagonalizing a Complex Symmetric Matrix". MathOverflow. Retrieved 2020-01-25. 6. Schabauer, Hannes; Pacher, Christoph; Sunderland, Andrew G.; Gansterer, Wilfried N. (2010-05-01). "Toward a parallel solver for generalized complex symmetric eigenvalue problems". Procedia Computer Science. ICCS 2010. 1 (1): 437–445. doi:10.1016/j.procs.2010.04.047. ISSN 1877-0509. 7. Golub & Van Loan (1996, p. 147). 8. Gentle, James E. (1998). Numerical Linear Algebra for Applications in Statistics. Springer. p. 94. ISBN 978-1-4612-0623-1. 9. Higham, Nicholas J. (1990). "Analysis of the Cholesky Decomposition of a Semi-definite Matrix". In Cox, M. G.; Hammarling, S. J. (eds.). Reliable Numerical Computation. Oxford, UK: Oxford University Press. pp. 161–185. ISBN 978-0-19-853564-5. 10. Krishnamoorthy, Aravindh; Menon, Deepak (2011). "Matrix Inversion Using Cholesky Decomposition". arXiv:1111.4144. Bibcode:2011arXiv1111.4144K. 11. So, Anthony Man-Cho (2007). A Semidefinite Programming Approach to the Graph Realization Problem: Theory, Applications and Extensions (PDF) (PhD). Theorem 2.2.6. 12. Golub & Van Loan (1996, Theorem 4.1.3) 13. Arora, J.S. Introduction to Optimum Design (2004), p. 327. https://books.google.com/books?id=9FbwVe577xwC&pg=PA327 14. Matlab randn documentation. mathworks.com. 15. ?potrf Intel® Math Kernel Library 16. Fang, Haw-ren; O'Leary, Dianne P. (8 August 2006). "Modified Cholesky Algorithms: A Catalog with New Approaches" (PDF). 17. Watkins, D. (1991). Fundamentals of Matrix Computations. New York: Wiley. p. 84. ISBN 0-471-61414-9. 18. Nocedal, Jorge (2000). Numerical Optimization. Springer. 19. Fang, Haw-ren (24 August 2007). "Analysis of Block LDLT Factorizations for Symmetric Indefinite Matrices". 20. Based on: Stewart, G. W. (1998). Basic decompositions. Philadelphia: Soc. for Industrial and Applied Mathematics. ISBN 0-89871-414-1. 21. Osborne, M. (2010), Appendix B.
References • Dereniowski, Dariusz; Kubale, Marek (2004). "Cholesky Factorization of Matrices in Parallel and Ranking of Graphs". 5th International Conference on Parallel Processing and Applied Mathematics (PDF). Lecture Notes in Computer Science. Vol. 3019. Springer-Verlag. pp. 985–992. doi:10.1007/978-3-540-24669-5_127. ISBN 978-3-540-21946-0. Archived from the original (PDF) on 2011-07-16. • Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins. ISBN 978-0-8018-5414-9. • Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. ISBN 0-521-38632-2. • S. J. Julier and J. K. Uhlmann. "A General Method for Approximating Nonlinear Transformations of Probability Distributions". • S. J. Julier and J. K. Uhlmann, "A new extension of the Kalman filter to nonlinear systems", in Proc. AeroSense: 11th Int. Symp. Aerospace/Defence Sensing, Simulation and Controls, 1997, pp. 182–193. • Trefethen, Lloyd N.; Bau, David (1997). Numerical linear algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-361-9. • Osborne, Michael (2010). Bayesian Gaussian Processes for Sequential Prediction, Optimisation and Quadrature (PDF) (thesis). University of Oxford. • Ruschel, João Paulo Tarasconi, Bachelor degree "Parallel Implementations of the Cholesky Decomposition on CPUs and GPUs" Universidade Federal Do Rio Grande Do Sul, Instituto De Informatica, 2016, pp. 29-30. External links History of science • Sur la résolution numérique des systèmes d'équations linéaires, Cholesky's 1910 manuscript, online and analyzed on BibNum (in French and English) [for English, click 'A télécharger'] Information • "Cholesky factorization", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Cholesky Decomposition, The Data Analysis BriefBook • Cholesky Decomposition on www.math-linux.com • Cholesky Decomposition Made Simple on Science Meanderthal Computer code • LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems (DPOTRF, DPOTRF2, details performance) • ALGLIB includes a partial port of the LAPACK to C++, C#, Delphi, Visual Basic, etc. (spdmatrixcholesky, hpdmatrixcholesky) • libflame is a C library with LAPACK functionality. • Notes and video on high-performance implementation of Cholesky factorization at The University of Texas at Austin. • Cholesky : TBB + Threads + SSE is a book explaining the implementation of the CF with TBB, threads and SSE (in Spanish). • library "Ceres Solver" by Google. • LDL decomposition routines in Matlab. • Armadillo is a C++ linear algebra package • Rosetta Code is a programming chrestomathy site. • AlgoWiki is an open encyclopedia of algorithms’ properties and features of their implementations • Intel® oneAPI Math Kernel Library Intel-Optimized Math Library for Numerical Computing ?potrf, ?potrs Use of the matrix in simulation • Generating Correlated Random Variables and Stochastic Processes, Martin Haugh, Columbia University Online calculators • Online Matrix Calculator Performs Cholesky decomposition of matrices online.
\begin{document} \title{Remarks on the non-vanishing conjecture} \begin{abstract}We discuss a difference between the rational and the real non-vanishing conjecture for pseudo-effective log canonical divisors of log canonical pairs. We also show the log non-vanishing theorem for rationally connected varieties assuming Shokurov's ACC conjectures. \end{abstract} \section{Introduction}\label{real-intro} Throughout this article, we work over $\mathbb{C}$, the complex number field. We will freely use the standard notations in \cite{kamama}, \cite{komo}, and \cite{bchm}. In this article we deal with a topic related to the abundance conjecture: \begin{conj}[Abundance conjecture]\label{conj-abun-nef}Let $(X,\Delta)$ be a projective log canonical pair such that $\Delta$ is an effective $\mathbb{Q}$-divisor and $K_X+\Delta$ is nef. Then $K_X+\Delta$ is semi-ample. \end{conj} Let $\mathbb{K}$ be the real number field $\mathbb{R}$ or the rational number field $\mathbb{Q}$. The following conjecture seems to be the most difficult and important conjecture for proving Conjecture \ref{conj-abun-nef}: \begin{conj}[Non-vanishing conjecture]\label{conj-non-vani-k}Let $(X,\Delta)$ be a projective log canonical pair such that $\Delta$ is an effective $\mathbb{K}$-divisor and $K_X+\Delta$ is pseudo-effective. Then there exists an effective $\mathbb{K}$-divisor $D$ such that $D \sim_{\mathbb{K}}K_X+\Delta$. \end{conj} Note that the above conjecture is obviously true for big log canonical divisors. Thus the conjecture is mainly of interest for pseudo-effective log canonical divisors which are not big. In this article, we study a difference between Conjecture \ref{conj-non-vani-k} for $\mathbb{K}=\mathbb{Q}$ and $\mathbb{R}$. One source of the importance of Conjecture \ref{conj-non-vani-k} for $\mathbb{K} =\mathbb{R}$ is Birkar's framework on the existence of minimal models \cite{bir-exiII}. In his construction, Conjecture \ref{conj-non-vani-k} must be formulated for {\em log canonical} pairs with {\em $\mathbb{R}$-boundary} even when we construct minimal models for smooth projective varieties. To reduce Conjecture \ref{conj-non-vani-k} in the case where $\mathbb{K}=\mathbb{R}$ to the case where $(X,\Delta)$ is kawamata log terminal with $\mathbb{Q}$-boundary, we need the following two conjectures (cf.~Lemma \ref{term pe fibration lemma}): \begin{conj}[Global ACC conjecture, cf.{~\cite[Conjecture 2.7]{bSh}, \cite[Conjecture 8.2]{dhp-ext}}]\label{gacc} Let $d\in \mathbb{N}$ and let $I \subset [0,1]$ be a set satisfying the DCC. Then there is a finite subset $I_0 \subset I$ such that if \begin{enumerate} \item $X$ is a projective variety of dimension $d$, \item $(X,\Delta )$ is log canonical, \item $\Delta =\sum \delta _i \Delta _i $ where $\delta _i \in I$, \item $K_X+\Delta \equiv 0$, \end{enumerate} then $\delta _i \in I_0$. \end{conj} \begin{conj}[ACC conjecture for log canonical thresholds, cf. {\cite[Conjecture 1.7]{bSh}, \cite[Conjecture 8.4]{dhp-ext}}]\label{acclct} Let $d \in \mathbb{N}$, let $\Gamma \subset [0,1]$ be a set satisfying the DCC, and let $S \subset \mathbb{R} _{\geq 0}$ be a finite set. Then the set $$\{{\rm lct}(X,\Delta;D )|\ (X,\Delta )\ {\rm is\ lc},\ \dim X=d,\ \Delta \in \Gamma,\ D\in S\}$$ satisfies the ACC. Here $D$ is $\mathbb{R}$-Cartier and $\Delta\in \Gamma$ (resp.~$D\in S$) means $\Delta=\sum \delta _i\Delta _i$ where $\delta _i\in \Gamma$ (resp.~$D=\sum d_iD_i$ where $d_i\in S$) and ${\rm lct}(X,\Delta;D )={\rm sup}\{ t\geq 0|(X,\Delta +tD)\ {\rm is\ lc}\}$.
\end{conj} The proofs of the above two conjectures have been announced by Hacon--$\mathrm{M^{c}}$Kernan--Xu. See \cite[Remark 8.3]{dhp-ext}. The main theorem of this article is the following: \begin{thm}\label{main-real}Assume the global ACC conjecture (\ref{gacc}) in dimension $\leq n$, the ACC conjecture for log canonical thresholds (\ref{acclct}) in dimension $\leq n$, and the abundance conjecture (\ref{conj-abun-nef}) in dimension $\leq n-1$. Then the non-vanishing conjecture (\ref{conj-non-vani-k}) for $n$-dimensional klt pairs in the case where $\mathbb{K}=\mathbb{Q}$ implies that for $n$-dimensional lc pairs in the case where $\mathbb{K}=\mathbb{R}$. \end{thm} Assuming that Conjecture \ref{gacc} and Conjecture \ref{acclct} hold, and combining with \cite[Theorem 8.8]{dhp-ext}, we can reduce Conjecture \ref{conj-non-vani-k} in the case where $\mathbb{K}=\mathbb{R}$ to the case where $X$ is smooth and $\Delta=0$. The proof of Theorem \ref{main-real} was inspired by Section $8$ in \cite{dhp-ext} and discussions the author had with Birkar in Paris. In Section \ref{RC-non}, we also show the log non-vanishing theorem (= Theorem \ref{main-RC-non-van}) for rationally connected varieties by the same argument. \begin{note}\label{notation} A variety $X/Z$ means that a quasi-projective normal variety $X$ is projective over a quasi-projective variety $Z$. A rational map $f: X \dashrightarrow Y/Z $ denotes a rational map $X \dashrightarrow Y$ over $Z$. For a contracting birational map $X \dashrightarrow Y/Z$ and an $\mathbb{R}$-Weil divisor $D$ on $X$, an $\mathbb{R}$-Weil divisor $D^Y$ means the strict transform of $D$ on $Y$. \end{note} \section{On the existence of minimal models after Birkar}\label{real-exi-mm}In this section we introduce the definitions of minimal models in the sense of Birkar--Shokurov and some results on the existence of minimal models after Birkar. \begin{defi}[cf. {\cite[Definition 2.1]{bir-exiII}}]\label{real-def-minimalmodelsenseofBS} A pair $(Y/Z,B_Y)$ is a \emph{log birational model} of $(X/Z,B)$ if we are given a birational map $\phi\colon X\dashrightarrow Y/Z$ and $B_Y=B^\sim+E$ where $B^\sim$ is the birational transform of $B$ and $E$ is the reduced exceptional divisor of $\phi^{-1}$, that is, $E=\sum E_j$ where the $E_j$ are the prime divisors on $Y$ which are exceptional over $X$. A log birational model $(Y/Z,B_Y)$ is a \emph{nef model} of $(X/Z,B)$ if in addition\\\ (1) $(Y/Z,B_Y)$ is $\mathbb{Q}$-factorial dlt, and\\\ (2) $K_Y+B_Y$ is nef over $Z$.\\\ We call a nef model $(Y/Z,B_Y)$ a \emph{log minimal model of $(X/Z,B)$ in the sense of Birkar--Shokurov} if in addition\\\ (3) for any prime divisor $D$ on $X$ which is exceptional over $Y$, we have $$ a(D,X,B)<a(D,Y,B_Y). $$ \end{defi} \begin{rem}\label{rem_real_bir1}We make the following remarks: \begin{itemize} \item[(1)] Conjecture \ref{conj-non-vani-k} in the case where the dimension $\leq n-1$ and $\mathbb{K}=\mathbb{R}$ implies the existence of relative log minimal models in the sense of Birkar--Shokurov over a quasi-projective base $Z$ for effective dlt pairs over $Z$ in dimension $n$. See \cite[Corollary 1.7 and Theorem 1.4]{bir-exiII}. \item[(2)] Conjecture \ref{conj-non-vani-k} in the case where the dimension $\leq n-1$ and $\mathbb{K}=\mathbb{R}$ implies Conjecture \ref{conj-non-vani-k} in the case where the dimension $\leq n$ and $\mathbb{K}=\mathbb{R}$ over a non-point quasi-projective base $Z$. See \cite[Lemma 3.2.1]{bchm}.
\item[(3)] When $(X/Z,B)$ is purely log terminal, a log minimal model of $(X/Z,B)$ in the sense of Birkar--Shokurov is the traditional one as in \cite{komo} and \cite{bchm}. See \cite[Remark 2.6]{bir-exiI}. \end{itemize} \end{rem} \section{Proof of Theorem \ref{main-real}}\label{section-proof of main real} In this section, we give the proof of Theorem \ref{main-real}. The proof of the following lemma is essentially the same as that of \cite[Proposition 8.7]{dhp-ext}. \begin{lem}[cf.{ \cite[Proposition 8.7]{dhp-ext}}]\label{term pe fibration lemma}Assume the global ACC conjecture (\ref{gacc}) in dimension $\leq n$ and the ACC conjecture for log canonical thresholds (\ref{acclct}) in dimension $\leq n$. Let $(X,\Delta)$ be a $\mathbb{Q}$-factorial projective dlt pair such that $\Delta$ is an $\mathbb{R}$-divisor and $K_X+\Delta$ is pseudo-effective. Suppose that there exists a sequence of effective divisors $\{\Delta_i\}$ such that $\Delta_i\leq \Delta_{i+1}$, $K_X+\Delta_i$ is not pseudo-effective for any $i\geq 0$, and $$\lim_{i \to \infty}\Delta_i=\Delta.$$ Then there exists a contracting birational map $\varphi:X \dashrightarrow X'$ such that there exists a projective morphism $f':X' \to Z$ with connected fibers satisfying: \begin{itemize} \item[(1)] $(X',\Delta')$ is $\mathbb{Q}$-factorial log canonical and $\rho(X'/Z)=1$, \item[(2)] $K_{X'}+\Delta' \equiv_{f'} 0$, \item[(3)] $\Delta'-\Delta'_i$ is $f'$-ample for some $i$, and \item[(4)] $\mathrm{dim}\,X > \mathrm{dim}\,Z,$ \end{itemize} where $\Delta'$ and $\Delta'_i$ are the strict transforms of $\Delta$ and $\Delta_i$ on $X'$. \end{lem} \begin{proof}Set $\Gamma_i=\Delta-\Delta_i$. Then $K_X+\Delta_i+x\Gamma_i$ is also not pseudo-effective for every non-negative number $x<1$. For any $i$ and non-negative number $x<1$, we can take a Mori fiber space $f_{x,i}:Y_{x,i} \to Z_{x,i}$ of $(X,\Delta_i+x\Gamma_i)$ by \cite{bchm}. Then there exists a positive number $\eta_{x,i}$ such that $$K_{Y_{x,i}}+ \Delta^{Y_{x,i}}_{i}+ \eta_{x,i} \Gamma^{Y_{x,i}}_{i} \equiv_{f_{x,i}} 0.$$ Note that $x <\eta_{x,i} \leq 1$ and $x\leq \mathrm{lct}(Y_{x,i}, \Delta^{Y_{x,i}}_{i};\Gamma^{Y_{x,i}}_{i})$ since $K_{Y_{x,i}}+ \Delta^{Y_{x,i}}_{i}+ \Gamma^{Y_{x,i}}_{i}$ is pseudo-effective. \begin{cl}\label{rem-real-lct-term} When we consider an increasing sequence $\{x_j\}$ such that $$\lim_{j \to \infty}x_j=1,$$ it holds that $\mathrm{lct}(Y_{x_j,i}, \Delta^{Y_{x_j,i}}_{i};\Gamma^{Y_{x_j,i}}_{i})\geq 1$ for $j\gg 0$. \end{cl} \begin{proof}[Proof of Claim \ref{rem-real-lct-term}] Put $$l_{j,i}=\mathrm{lct}(Y_{x_j,i}, \Delta^{Y_{x_j,i}}_{i};\Gamma^{Y_{x_j,i}}_{i}).$$ Assume by contradiction that $l_{j,i}< 1$ for infinitely many $j$. Fix such an index $j_0$. Then we take $j_1$ such that $l_{j_1,i}< 1$ and $l_{j_0,i}< x_{j_1} < 1$. Since $l_{j_1,i}< 1$, we take $x_{j_2}$ with $l_{j_1,i}< x_{j_2} < 1$. By repeating this procedure, we construct increasing sequences $\{x_{j_k}\}_{k}$ and $\{l_{j_k,i}\}_{k}$, which contradicts Conjecture \ref{acclct}. \end{proof} Thus, for any $i$, there exists a non-negative number $y_i<1$ such that $$\lim_{i \to \infty}y_i=1,$$ $$K_{Y_{y_i,i}}+ \Delta^{Y_{y_i,i}}_{i}+ \eta_{y_i,i} \Gamma^{Y_{y_i,i}}_{i} \equiv_{f_{y_i,i}} 0,$$ and $(Y_{y_i,i}, \Delta^{Y_{y_i,i}}_{i}+ \eta_{y_i,i} \Gamma^{Y_{y_i,i}}_{i})$ is log canonical from Claim \ref{rem-real-lct-term}. Set $\Omega_{i}=\Delta_{i}+ \eta_{y_i,i} \Gamma_{i}$ and $Y_i=Y_{y_i,i}$.
Then we see the following: \begin{cl}\label{rem-real-gacc-term} It holds that $K_{Y_i}+\Delta^{Y_i}$ is $f_i:=f_{y_{i}, i}$-numerically trivial for some $i$. \end{cl} \begin{proof}[Proof of Claim \ref{rem-real-gacc-term}] We can take a subsequence $\{y_{k_j}\}$ of $\{y_{i}\}$ such that $$ \Omega_{k_{j}} \leq \Omega_{k_{j+1}} $$ since $\Omega_{i} \to \Delta$ when $i \to \infty$. From Conjecture \ref{gacc}, by taking a subsequence again, we may assume that $$\Omega_{k_{0}}^{Y_{k_0}}|_{F_{k_0}} = \Omega_{k_{l}}^{Y_{k_0}}|_{F_{k_0}}$$ holds for a general fiber $F_{k_0}$ of $f_{k_0}$ and $l>0$, since all coefficients of $\Omega_{k_{j}}^{Y_{k_j}}|_{F_{k_j}}$ have only finitely many possibilities. Letting $i=k_0$, we see that $$K_{Y_i}+\Delta^{Y_i} \equiv_{f_i} 0. $$ \end{proof} Thus we have constructed a model as in Lemma \ref{term pe fibration lemma}. \end{proof} \begin{rem}\label{rem-real-non-non-posi} We do not know whether the above birational map $\varphi$ is $(K_X+\Delta)$-non-positive or not. \end{rem} \begin{proof}[Proof of Theorem \ref{main-real}] We prove the theorem by induction on the dimension. In particular we may assume that Conjecture \ref{conj-non-vani-k} in the case where the dimension $\leq n-1$ and $\mathbb{K}=\mathbb{R}$ holds. Now we may assume that $(X,\Delta)$ is a $\mathbb{Q}$-factorial divisorial log terminal pair due to a {\em dlt blow-up} (cf. \cite[Theorem 3.1]{kokov-lc-dubois}, \cite[Theorem 10.4]{fujino-fund} and \cite[Section 4]{fujino-ss}). First we show Theorem \ref{main-real} in the following case. \begin{case}\label{QtoRinKLT} $(X,\Delta)$ is kawamata log terminal and $\Delta$ is an $\mathbb{R}$-divisor. \end{case} \begin{proof}[Proof of Case \ref{QtoRinKLT}]\label{proofofQtoRinKLT}We may assume that we can take a sequence of effective $\mathbb{Q}$-divisors $\{\Delta_i\}$ such that $\Delta_i\leq \Delta_{i+1}$, $K_X+\Delta_i$ is not pseudo-effective for any $i\geq 0$, and $$\lim_{i \to \infty}\Delta_i=\Delta.$$ By Lemma \ref{term pe fibration lemma}, we can take a contracting birational map $\varphi:X \dashrightarrow X'$ such that there exists a projective morphism $f':X' \to Z$ with connected fibers satisfying: \begin{itemize} \item[(1)] $(X',\Delta')$ is $\mathbb{Q}$-factorial log canonical and $\rho(X'/Z)=1$, \item[(2)] $K_{X'}+\Delta' \equiv_{f'} 0$, \item[(3)] $\Delta'-\Delta'_i$ is $f'$-ample for some $i$, and \item[(4)] $\mathrm{dim}\,X > \mathrm{dim}\,Z,$ \end{itemize} where $\Delta'$ and $\Delta'_i$ are the strict transforms of $\Delta$ and $\Delta_i$ on $X'$. By taking a resolution of $\varphi$, we may assume that $\varphi$ is a morphism. Thus we see that $\kappa_{\sigma}((K_X+\Delta)|_{F})=0$ for a general fiber $F$ of $f' \circ \varphi$, where $\kappa_{\sigma}(\cdot)$ is the numerical dimension (cf. \cite{nakayama-zariski-abun} and \cite{lehmann-comparing}). When $\dim\,Z=0$, we see that $\kappa_{\sigma}(K_X+\Delta)=0$, and the non-vanishing follows from the abundance theorem of numerical Kodaira dimension zero for $\mathbb{R}$-divisors (cf. \cite[Theorem 4.2]{ambro-canonical}, \cite[V, 4.9. Corollary]{nakayama-zariski-abun}, \cite[Corollaire 3.4]{duruel-nu0}, \cite{kawamata-abunnuzero}, \cite{ckp-num}, and \cite[Theorem 1.3]{g3}). Thus we may assume that $\dim\,Z \geq1$. Then, by Remark \ref{rem_real_bir1} and Kawamata's theorem (cf. \cite[Theorem 1.1]{fujino-kawamata}, \cite[Theorem 6-1-11]{kamama}), there exists a good minimal model $f'':(X'',\Delta'')\to Z$ of $(X,\Delta)$ over $Z$. Let $g:X'' \to Z'$ be the morphism to the canonical model $Z'$ of $(X,\Delta)$.
Then $Z' \to Z$ is a birational morphism. From Ambro's canonical bundle formula for $\mathbb{R}$-divisors (cf.~\cite[Theorem 4.1]{ambro-canonical} and \cite[Theorem 3.1]{fg2}) there exists an effective divisor $\Gamma_{Z'}$ on $Z'$ such that $K_{X''}+\Delta'' \sim_{\mathbb{R}}g^*(K_{Z'}+\Gamma_{Z'})$. By the induction hypothesis, we can take an effective divisor $D'$ on $Z'$ such that $K_{Z'}+\Gamma_{Z'}\sim_{\mathbb{R}}D'$. \end{proof} Next we show Theorem \ref{main-real} in the case where $(X,\Delta)$ is divisorial log terminal and $\Delta$ is an $\mathbb{R}$-divisor. \begin{case}\label{QtoRinDLT} $(X,\Delta)$ is divisorial log terminal and $\Delta$ is an $\mathbb{R}$-divisor. \end{case} \begin{proof}[Proof of Case \ref{QtoRinDLT}]\label{proofofQtoRinDLT} We take a decreasing sequence $\{\epsilon_i\}$ of positive numbers such that $\lim_{i \to \infty}\epsilon_i=0$. Let $S=\sum S_k$ or $0$ be the reduced part of $\Delta$, $S_k$ its components, and $\Delta_i=\Delta-\epsilon_iS$. We show Theorem \ref{main-real} by induction on the number $r$ of the components of $S$. If $r=0$, Case \ref{QtoRinKLT} implies Conjecture \ref{conj-non-vani-k} for $K_X+\Delta$. When $r>0$, we may assume that $K_{X}+\Delta_i$ is not pseudo-effective by Case \ref{QtoRinKLT}, and that $K_X+\Delta-\delta S_k$ is not pseudo-effective for any $k$ and $\delta >0$. Then by Lemma \ref{term pe fibration lemma} we can take a contracting birational map $\varphi:X \dashrightarrow X'$ such that there exists a projective morphism $f':X' \to Z$ with connected fibers satisfying: \begin{itemize} \item[(1)] $(X',\Delta')$ is $\mathbb{Q}$-factorial log canonical and $\rho(X'/Z)=1$, \item[(2)] $K_{X'}+\Delta' \equiv_{f'} 0$, \item[(3)] $\Delta'-\Delta'_i$ is $f'$-ample for some $i$, and \item[(4)] $\mathrm{dim}\,X > \mathrm{dim}\,Z,$ \end{itemize} where $\Delta'$ and $\Delta'_i$ are the strict transforms of $\Delta$ and $\Delta_i$ on $X'$. Take log resolutions $p: W \to X$ of $(X,\Delta)$ and $q:W \to X'$ of $(X',\Delta')$ such that $\varphi \circ p=q$. Take the effective divisor $\Gamma$ satisfying $$K_W+\Gamma =p^*(K_X+\Delta)+E, $$ where $E$ is an effective divisor such that $E$ has no common components with $\Gamma$. Let $\widetilde{S_k}$ and $\widetilde{S}$ be the strict transforms of $S_k$ and $S$ on $W$, respectively. By Lemma \ref{term pe fibration lemma} (3), $\Supp\, \widetilde{S}$ dominates $Z$. By the same arguments as in the proof of Case \ref{QtoRinKLT}, we may assume that $\dim\,Z\geq 1.$ Then, by Remark \ref{rem_real_bir1}, the abundance conjecture (\ref{conj-abun-nef}) in dimension $\leq n-1$, and \cite[Theorem 4.12]{fg3} (cf. \cite[Corollary 6.7]{fujino-bpf}), there exists a good minimal model $f':(W',\Gamma')\to Z$ of $(W,\Gamma)$ in the sense of Birkar--Shokurov over $Z$. If some $\widetilde{S_k}$ is contracted by the birational map $W \dashrightarrow W'$ (which may not be a contracting map), then $K_W+\Gamma-\delta \widetilde{S_k}$ is pseudo-effective for some $\delta >0$ by the positivity property in the definition of minimal models (cf. Definition\,\ref{real-def-minimalmodelsenseofBS}). Thus $K_X+\Delta-\delta S_k\ (= p_*(K_W+\Gamma-\delta \widetilde{S_k}))$ is also pseudo-effective. But this contradicts the assumption on $(X,\Delta)$. Thus we see that no $\widetilde{S_k}$ is contracted by the birational map $W \dashrightarrow W'$. Let $g:W' \to Z'$ be the morphism to the canonical model $Z'$ of $(W,\Gamma)$.
Then $Z' \to Z$ is a birational morphism since $\kappa_{\sigma}((K_W+\Gamma)|_{F})=0$ for a general fiber $F$ of $f' \circ q$. Thus some strict transform $T_k$ of $\widetilde{S_k}$ on $W'$ dominates $Z'$. Now $K_{W'}+\Gamma' \sim_{\mathbb{R}} g^*C$ for some $\mathbb{R}$-Cartier divisor $C$ on $Z'$. By the induction hypothesis on the dimension, there exists an effective $\mathbb{R}$-divisor $D_{T_k}$ on $T_k$ such that $$(K_{W'}+\Gamma')|_{T_k}=K_{T_k}+\Gamma_{T_k} \sim_{\mathbb{R}}D_{T_k}.$$ Since $T_k$ dominates $Z'$, there also exists some effective $\mathbb{R}$-divisor $G$ such that $G \sim_{\mathbb{R}}C$. Thus $$K_{W'}+\Gamma' \sim_{\mathbb{R}} g^*G\geq0.$$ This implies the non-vanishing of $K_X+\Delta$. \end{proof} This finishes the proof of Theorem \ref{main-real}. \end{proof} \section{Log non-vanishing theorem for rationally connected varieties}\label{RC-non} By the same argument as in the proof of Case \ref{QtoRinKLT}, we obtain the following theorem: \begin{thm}\label{main-RC-non-van}Assume the global ACC conjecture (\ref{gacc}) and the ACC conjecture for log canonical thresholds (\ref{acclct}) in dimension $\leq n$. Let $X$ be a rationally connected variety of dimension $n$ and $\Delta$ an effective $\mathbb{Q}$-Weil divisor such that $K_X+\Delta$ is $\mathbb{Q}$-Cartier and $(X,\Delta)$ is kawamata log terminal. If $K_X+\Delta$ is pseudo-effective, then there exists an effective $\mathbb{Q}$-Cartier divisor $D$ such that $D \sim_{\mathbb{Q}} K_X+\Delta$. \end{thm} \begin{proof}We argue by induction on the dimension. First, we may assume that $X$ is smooth by taking a log resolution of $(X,\Delta)$. From \cite[Proposition 8.7]{dhp-ext}, the pseudo-effective threshold of $\Delta$ for $K_X$ is also a rational number. Thus we may assume that $K_X+\Delta-\epsilon \Delta$ is not pseudo-effective for any positive number $\epsilon$. We take a decreasing sequence $\{\epsilon_i\}$ of positive numbers such that $\lim_{i \to \infty}\epsilon_i=0$. Let $\Delta_i=\Delta-\epsilon_i \Delta$. Then, by the same argument as in the proof of Case \ref{QtoRinKLT}, we may assume that there exists a projective morphism $f:X \to Z$ with connected fibers to a normal variety $Z$ such that $\kappa_{\sigma}((K_X+\Delta)|_{F})=0$ for a general fiber $F$ of $f$ and $\mathrm{dim}\,X > \mathrm{dim}\,Z$. Moreover we see that $Z$ is also a rationally connected variety. Then Theorem \ref{main-RC-non-van} follows from \cite[Lemma\,4.4]{gl1} (cf. \cite{laigood}) and the induction hypothesis. \end{proof} \end{document}
Intergenerational mobility in Korea Soobin Kim ORCID: orcid.org/0000-0002-7909-16081 This study investigates intergenerational earnings mobility in Korea for sons born between 1958 and 1973 and compares Korea's mobility to that of other nations. It uses data from the Korea Labor and Income Panel Study and the Household Income and Expenditure Survey conducted by the Korean National Statistics Bureau. Since no single Korean dataset includes information on both sons' and their fathers' adult earnings, this study follows the two-sample approach previously applied in Korea by Ueda (J Asian Econ 1–22, 2013), whose estimated intergenerational earnings elasticity is 0.22, and extends the analysis by using fathers' earnings from a more approximal cohort. The estimate of around 0.4 is similar to estimates for some already developed countries and smaller than typical estimates for recently developing countries. Intergenerational mobility refers to the degree of persistence between parents' and children's outcomes. If parents' earnings have little impact on their offspring's earnings, the degree of intergenerational earnings mobility is high, and relative economic disadvantages in the early years are less likely to persist into adulthood. That is, intergenerational earnings mobility also speaks to inequality of economic opportunity. For a survey of relevant literature, see Solon (1999) and Black and Devereux (2010). Some features of Korea make it an interesting case for the study of intergenerational mobility. First, Korea experienced rapid and extensive economic growth in the past half century, during which real GDP per capita increased 15-fold. At the same time, inequality in labor earnings steadily decreased from the 1970s to the 1990s. The extent to which these changes in labor market conditions are related to the high degree of intergenerational mobility is an interesting question. Second, the Korean education system is very competitive due to the strong desire of Koreans for education, and Korea went through a great expansion in education in the last few decades. At the same time, education has been viewed as a vehicle to the next highest level of schooling and a means of obtaining higher socio-economic status (Korea 1991). Thus, whether intergenerational mobility varies with parental education is another relevant question. Because of a lack of longitudinal data spanning two generations, only a limited number of studies on intergenerational earnings mobility in Korea have been done. Recent studies in Korea by Kim (2009) and Choi and Hong (2011) employed co-residing father-son pairs in the initial round of panel data. However, as noted by Solon (2002), this sample may display a different intergenerational association than would a more representative sample.Footnote 1 Moreover, as in most other empirical studies, they estimated intergenerational earnings elasticities using short-run proxies for permanent earnings, which may generate downward biases in estimates.Footnote 2 An important exception that avoids this difficulty is Ueda (2013), who utilized a two-sample method to impute fathers' permanent earnings and showed relatively higher estimated intergenerational earnings mobility in Korea. This study estimates intergenerational earnings mobility in Korea following the method presented in Ueda (2013) and extends the empirical analysis in two dimensions.
First, I use an additional national representative sample to better approximate the actual fathers' birth cohorts so that fathers' missing permanent earnings are more accurately imputed. I also carefully choose age ranges for each generation to minimize life-cycle bias that stems from using current earnings for lifetime earnings.Footnote 3 Second, I compare the intergenerational mobility of Korea with that of 13 other countries that come from the two-sample method. The intergenerational elasticity estimate of around 0.4 in Korea is similar to that in already developed countries and relatively smaller than recently developed or developing countries. The remainder of this study is organized as follows: Section 2 describes the methodology employed in early literature. Section 3 discusses the data. Section 4 presents the empirical results. Section 5 concludes with remarks. Literature review and method In this section, I provide a skeletal derivation of the intergenerational mobility developed in Solon (1992) and Björklund and Jäntti (1997). The basic empirical approach in intergenerational mobility literature is to estimate earnings elasticity, which is to estimate ρ 1 in the following equation. $$ {\kern125pt}y_{i}=\rho_{0}+\rho_{1}x_{i}+\epsilon_{i} $$ where y i is the log of the permanent component of the son's earnings in family i, x i is the log of the permanent component of the father's earnings in family i, and ε i is a random disturbance uncorrelated with x i . If y i and x i are observed directly from a random sample, one can estimate ρ 1 in Eq. (1) by applying least squares regression. Here the parameter ρ 1 is the intergenerational earnings elasticity and (1−ρ 1) can be interpreted as a measure of intergenerational mobility. Therefore, by comparing \(\hat {\rho }_{1}\) of each country, comparisons of intergenerational mobility across countries are possible; the higher \(\hat {\rho }_{1}\) is, the less mobile the society is.Footnote 4 However, in most studies, available measures of the earnings variable are current earnings in repeated cross-section samples or in longitudinal samples, and in practice, researchers have used short-run proxies of y it for long-run economic status variables of y i in time t, $$ {\kern125pt}y_{it}=\lambda_{t}y_{i}+h(\text{Age}_{it})+\nu_{it} $$ where λ t is the association between current and lifetime earnings at time t, which is allowed to vary over the life cycle, and ν it , the measurement error in y it as a proxy for y i , is assumed to be uncorrelated with y i and ε i . h(Age it ) is an arbitrary function of a son's age at time t such as a polynomial in age. If one has an appropriate measure of a father's long-run earnings but is forced to use current earnings as a proxy for the son's long-run earnings, plugging Eq. (1) into Eq. (2) yields $$ {\kern105pt}y_{it}=\lambda_{t}\rho_{0}+\lambda_{t}\rho_{1}x_{i}+h\left(\text{Age}_{it}\right)+\eta_{it} $$ where η it is equal to λ t ε i +ν it . In addition to the measurement error in lifetime earnings, Haider and Solon (2006) and Grawe (2006) presented empirical evidence of another source of inconsistency that short-run earnings deviate from long-run earnings over the life cycle: The probability limit of the least squares estimator of the coefficient of x i is equal to λ t ρ 1. 
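To make the role of \(\lambda_t\) concrete, the following minimal simulation (not part of the study; all parameter values and variable names are arbitrary) generates permanent earnings as in Eq. (1), observes a noisy current-earnings proxy as in Eq. (2), and verifies that OLS on Eq. (3) recovers approximately \(\lambda_t\rho_1\) rather than \(\rho_1\).

```python
import numpy as np

# Minimal simulation of Eqs. (1)-(3); parameter values are arbitrary illustrations.
rng = np.random.default_rng(0)
n = 200_000
rho0, rho1 = 1.0, 0.4   # intergenerational elasticity used for illustration
lam = 0.7               # lambda_t: association between current and lifetime earnings

x = rng.normal(0.0, 0.5, n)                        # fathers' permanent log earnings
y_perm = rho0 + rho1 * x + rng.normal(0, 0.4, n)   # sons' permanent log earnings, Eq. (1)
y_cur = lam * y_perm + rng.normal(0, 0.3, n)       # sons' current log earnings, Eq. (2) (age term omitted)

# OLS of Eq. (3): regress the current-earnings proxy on fathers' permanent earnings.
X = np.column_stack([np.ones(n), x])
slope = np.linalg.lstsq(X, y_cur, rcond=None)[0][1]

print(f"true rho_1         : {rho1:.3f}")
print(f"lambda_t * rho_1   : {lam * rho1:.3f}")
print(f"OLS slope, Eq. (3) : {slope:.3f}")   # approximately lambda_t * rho_1
```

With \(\lambda_t\) below one, the estimated elasticity is pulled toward zero even though the measurement error sits in the dependent variable.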
Haider and Solon (2006) suggested the age ranges be used for both father and son around their mid-careers, which would more accurately represent lifetime earnings.Footnote 5 Another estimation problem exists when a single dataset containing earnings data for pairs of fathers and sons in a long-time series is unavailable. Björklund and Jäntti (1997) proposed a two-sample method to impute fathers' missing earnings from an auxiliary sample of a father's generation on the basis of a son's report on a father, such as education, industry, and occupation.Footnote 6 Let z i denote a set of fathers' socio-demographic variables such as education and occupation and assume that the permanent component of fathers' earnings is generated by the following relationship: $$ {\kern125pt}x_{i}=z_{i}\phi+\xi_{i} $$ where z i is orthogonal to ξ i by linear projection. From Eq. (4), fathers' long-run economic status variables are generated, \(\hat {x}_{i}=z_{i}\hat {\phi }\), with age controls in the potential fathers' sample.Footnote 7 Rewrite Eq. (1) as \(y_{i}=\rho _{0}+\rho _{1}\hat {x}_{i}+\epsilon _{i}+\rho _{1}(x_{i}-\hat {x}_{i})\) and plug into Eq. (2) gives $$ {\kern95pt}y_{it}=\lambda_{t}\rho_{0}+\lambda_{t}\rho_{1}\hat{x}_{i}+h(\text{Age}_{it})+\omega_{it} $$ where ω it is equal to \(\lambda _{t}\epsilon _{i}+\nu _{it}+\lambda _{t}\rho _{1}(x_{i}-\hat {x}_{i})\). Under regularity conditions described in the Appendix, the probability limit of the least squares estimator of the coefficient of x i is equal to $$ {\kern85pt}\text{plim}_{n\to\infty}\hat{\rho}_{1}=\frac{\lambda_{t}\rho_{1}\text{Var}(x_{i})+\text{Cov}\left(x_{i},\nu_{it}\right)}{\text{Var}(x_{i})} $$ which reduces to λ t ρ 1 if Cov(x i ,ν it )=0. (The proof can be reviewed in the Appendix). However, the consistency still depends on λ t even with the generated regressor, and it calls for researcher caution in choosing the appropriate age range as Haider and Solon (2006) proposed. Nybom and Stuhler (2016) used long series of Swedish income data that contain nearly complete income histories of both fathers and sons and verified Haider and Solon's implications that the life-cycle bias is smallest when incomes are observed around midlife and that the life-cycle bias cannot be eliminated at other ages.Footnote 8 Finally, ordinary least squares regression is applied to Eq. (5) to estimate ρ 1.Footnote 9 Generally, most studies with this methodology have two datasets: The first provides sons' economic status variables with sons' recollected information of fathers' education, industry, and occupational characteristics at the son's particular age during childhood. Those variables are used to generate fathers' missing economic status variables. The second dataset contains potential fathers' economic status variables with socio-demographic characteristics. This supplementary sample is used to predict fathers' economic status variables like earnings, based on fathers' socio-demographic characteristics when sons were at a specific age as reported in the first dataset. Then ρ 1 can be estimated from Eq. (5) with predicted fathers' earnings, \(\hat {x}_{i}\), in lieu of fathers' permanent earnings, x i . Similar to many other countries, Korea does not have a sufficiently long intergenerational panel dataset where explicit information of father-son pairs' economic status variables are observed. Several studies in Korea were done by employing the Korean Labor and Income Panel Study (KLIPS). 
Using KLIPS, Kim (2009) and Choi and Hong (2011) focused on father-son pairs who co-resided in 1998 and restricted sons who in subsequent years moved into a non-member household (for instance, through marrying). This homogeneous sample of co-resident father-son pairs is an endogenously selected sample and would demonstrate an intergenerational transmission of earnings different from the population. They averaged available earnings to overcome attenuation bias because current earnings are proxied for permanent earnings. However, including younger sons—around 30—and older fathers—in the late 50s—tends to lower estimates due to life-cycle bias. For monthly earnings, coefficients are 0.141 (0.042) and 0.349 (0.096) when the father's education is instrumented for the father's earnings. Ueda (2013) also used KLIPS to estimate intergenerational mobility in Korea and employed a two-sample method to impute actual fathers' permanent earnings using sons' recollections of their fathers' educational levels and occupations when they were 14. Among working men with positive wages aged 25–54 for fathers and 30–39 for sons, Ueda restricted the sons' sample to 2006 and pooled annual earnings for the potential fathers' sample observed over the period 2003–2006. The coefficient is 0.223 (0.072), but Ueda imputed a too-recent earnings function instead of choosing the fathers' sample in actual calendar time. KLIPS contains sons' earnings and their recollections of fathers when they were 14 and is the first Korean longitudinal survey on the labor market and income activities of households and individuals, collected from 1998 to 2008. During the first wave in 1998, a representative sample of 5000 households and their members (15 and over), covering more than 13,000 individuals, was interviewed using the sampling frame from the census, and they became the original panel of households and household members. In addition, Household Income and Expenditure Survey (HIES) is repeated cross-section survey data that are the only publicly available data at an individual level with economic status variables such as labor earnings, family income information of each household, and socio-demographic characteristics. Survey data are available since 1982; however, education information was added to the survey since 1985. HIES, as in KLIPS, used the sampling frame of the census, which supports the argument that both datasets are representative samples of the Korean labor market. Monthly labor earnings are recorded pre-tax in HIES and net of taxes in KLIPS. The pre-tax labor earnings in KLIPS can be calculated because tax on labor earnings is also available in KLIPS from 2004. One data limitation of KLIPS is that the income of self-employed workers is recorded by after-tax value whereas HIES does not provide income information for self-employed workers. This renders it harder to estimate accurate mobility when self-employed fathers are included.Footnote 10 In this study, labor earnings are the main focus, because most previous studies used earnings and it enables international comparison of intergenerational mobility. In addition, earnings mobility is better suited to measure mobility based on an individual's merit than do other economic status variables.Footnote 11 KLIPS and HIES have recorded education, occupation, and industry in different categories. Especially occupation and industry variables are recorded with three digits in KLIPS, but in one digit and two digits in HIES, respectively. 
Since the categories used for industry and occupation in KLIPS are finer than those used in HIES, those variables are matched according to the HIES category. After recoding categories to have a homogeneous classification across samples, seven different levels of education, nine industry groups, and seven occupational groups are available to predict fathers' missing earnings. The number of predictors for fathers' missing earnings as well as the number of groups of each variable are relatively richer than in previous studies in other countries.Footnote 12 In the analysis, I use two waves of KLIPS for the sons' sample and both KLIPS and HIES for the potential fathers' sample. When replicating Ueda's empirical results, I use KLIPS in 2006 for the sons' sample and KLIPS in 2003 for the potential fathers' sample. Since the age gap between sons in KLIPS in 2006 and potential fathers in KLIPS in 2003 is three, to use more approximal cohorts of actual fathers, I retrieve the sons' sample from KLIPS in 2008 and the potential fathers' sample from HIES in 1985. These two samples are 23 years apart which thus enables matching of the father's generation more closely to actual fathers than does using 2003 for the potential fathers' sample.Footnote 13 Preferred age range for both generations is between 35 and 50 as the errors-in-variables bias in sons' earnings stays small, modifying the results from Haider and Solon (2006) given that Korean male workers generally enter the labor market about 3–5 years later than in the USA due to mandatory military service obligations.Footnote 14 Both KLIPS in 2008 and HIES in 1985 are restricted to working men aged between 35 and 50 with positive wages, which leaves 1700 observations in KLIPS and 1780 in HIES.Footnote 15 Especially in HIES, the fathers' sample was further restricted to those with a positive number of children aged 6–19 in 1985. Fathers or sons who lived in foreign countries when their sons were 14 are excluded. Narrowing the sample to those with all education, industry, and occupation variables recorded, the number of observation drops to 1666 in KLIPS and 1577 in HIES. Descriptive statistics of variables used for the main sample and the supplemental sample are summarized in Table 1. Table 1 Descriptive statistics Empirical results To extend the empirical results from Ueda (2013), the analysis starts by following his identification strategy of applying the two-step method to a single dataset, KLIPS, and introduces HIES for the potential fathers' sample. Ueda (2013) averaged annual earnings between 2003 and 2006 for potential fathers and retrieved sons' earnings from 2006 and restricted ages for sons to 30–39 and for fathers to 25–54. To provide results similar to Ueda, I retrieve sons' earnings from KLIPS in 2006 and potential fathers' annual earnings from 2003 and restrict the same age ranges for sons and fathers. To implement the two-sample method, in the first step in Eq. (4), fathers' log earnings in 2003 are regressed on age, age-squared, industry, occupation, and education variables followed by sample selection rules described in the previous section. Then, as in Eq. 
(5), sons' log earnings in 2006 from KLIPS are regressed on generated fathers' permanent earnings, age, and age squared of sons.Footnote 16 Standard errors are estimated by the bootstrap method following Björklund and Jäntti (1997).Footnote 17 Table 2 summarizes the results; the estimate replicating Ueda's approach is 0.205 with a bootstrapped standard error of 0.057, which is similar to Ueda's baseline estimate of 0.223. Ueda used education and occupation to predict fathers' missing earnings, and when I use those two variables as predictors, the estimate is 0.244 (0.094). When the later round in 2008 is used for the sons' sample, the estimate is 0.310 (0.060), which suggests that detailed matching of potential fathers with actual fathers could be important. Table 2 Intergenerational earnings elasticity Restricting to the preferred age range of 35–50 for both generations, the estimate in panel D increases to 0.334 (0.057), partly due to excluding young fathers. Results are consistent with previous studies on life-cycle bias; inclusion of younger sons or older fathers lowers estimates. That is, the correlation between a father's age (son's age) at measurement and the size of \(\hat {\rho }_{1}\) is negative (positive). The next two panels examine whether the elasticity differs with respect to the father's self-employment status. Nine hundred and ninety-one out of 1666 sons have self-employed fathers when they were 14, and the estimates are 0.144 (0.083) for sons with self-employed fathers and 0.218 (0.061) for sons with employed fathers, which alleviates the concern that the self-employment status of fathers might significantly affect the estimates. Approximating pseudo-fathers' earnings with recent cohorts, however, implicitly assumes that potential fathers' characteristics in 2003 are close to those of actual fathers, and uses information from the younger-father generation. In other words, if the average age gap between fathers and sons is 30, then the actual ages in 2003 of fathers whose sons are aged 30–39 in 2008 are 55–64 instead of 25–54. Moreover, the occupation, industry, and education distributions in 2003, used for potential fathers' characteristics, are more similar to those of sons in 2008 than to those of actual fathers. Thus, the results of this approach are vulnerable if significant changes occurred in the wage structure in recent decades. To retrieve potential fathers' information from a more approximal cohort of actual fathers, I use HIES and generate pseudo-fathers' earnings based on sons' recollections of fathers' characteristics. The role of HIES By retrieving potential fathers' information from HIES in 1985, the father-son age gap becomes more realistic and the distribution of earnings predictors, including education, occupation, and industry, becomes closer to that of actual fathers remembered by sons than to that of potential fathers in KLIPS 2003. Age ranges for both generations are restricted to 35–50, as this best reflects the feature of the Korean labor market that mandatory military service generally delays men's entry into it. Moreover, the preferred age range better represents mid-career earnings, and this specification with three earnings predictors for fathers serves as the baseline model.Footnote 18 By excluding younger sons in their late 20s and early 30s and older fathers above 50, the estimate increases to 0.386 (0.064) in panel G in Table 2.Footnote 19 Table 3 further reports regression results with several different sample specifications.
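For concreteness, the two-step estimation of Eqs. (4)–(5) and the two-step bootstrap used above can be sketched as follows. This is a minimal sketch rather than the study's code: the data frames `fathers` (potential fathers, e.g. HIES 1985) and `sons` (KLIPS), and the column names `earnings`, `age`, `educ`, `occ`, `ind` and `f_educ`, `f_occ`, `f_ind` (sons' recollections of their fathers' characteristics), are hypothetical placeholders for the harmonized variables.

```python
import numpy as np
import pandas as pd

def ols(y, X):
    """Least-squares coefficients of y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def two_sample_elasticity(fathers, sons, cats=("educ", "occ", "ind")):
    """Two-step estimate of rho_1 (Eqs. (4)-(5)) with hypothetical column names."""
    recalled = sons[[f"f_{c}" for c in cats]].copy()
    recalled.columns = list(cats)
    # One dummy coding over both samples so the predictor columns line up.
    dummies = pd.get_dummies(
        pd.concat([fathers[list(cats)], recalled], ignore_index=True).astype(str),
        drop_first=True,
    ).to_numpy(float)
    Df, Ds = dummies[: len(fathers)], dummies[len(fathers):]

    # First step, Eq. (4): fathers' log earnings on age, age^2, education,
    # occupation, and industry in the potential-fathers sample.
    Xf = np.column_stack([np.ones(len(fathers)), fathers["age"], fathers["age"] ** 2, Df])
    phi = ols(np.log(fathers["earnings"].to_numpy(float)), Xf)

    # Impute fathers' long-run earnings from the recollected characteristics.
    # The estimated age coefficients are not used in the imputation (earnings are
    # standardized over ages), so only the intercept and the dummies enter.
    xhat = phi[0] + Ds @ phi[3:]

    # Second step, Eq. (5): sons' log earnings on imputed earnings plus own age controls.
    Z = np.column_stack([np.ones(len(sons)), xhat, sons["age"], sons["age"] ** 2])
    return ols(np.log(sons["earnings"].to_numpy(float)), Z)[1]

def bootstrap_se(fathers, sons, reps=1000, seed=0):
    """Bootstrap both steps, as in Björklund and Jäntti (1997): resample fathers,
    refit the first step, resample sons, re-estimate rho_1, and repeat."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(reps):
        f = fathers.sample(len(fathers), replace=True, random_state=int(rng.integers(2**31 - 1)))
        s = sons.sample(len(sons), replace=True, random_state=int(rng.integers(2**31 - 1)))
        draws.append(two_sample_elasticity(f, s))
    return float(np.std(draws, ddof=1))
```

Bootstrapping both steps, rather than only the second, reflects the fact that the imputed fathers' earnings are themselves estimated; standard errors that ignore the generated regressor would be too small.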
Some concern might arise that the occupation distribution of potential fathers and real fathers are imperfectly matched. Although required information from the first step is the sample average of earnings in each predictor category, in panel A, the occupation categories are merged and reorganized to generate similar distributions. However, the number of categories does not change estimates significantly. In fact, estimates lie in the range 0.401 to 0.407 when the number of occupation categories is changed from 6 to 4, which indicates that the estimates are robust to occupation specifications. Thus, a different occupation category distribution has negligible impact on estimates. Table 3 Sensitivity of intergenerational earnings elasticity The age range of 35–50 is chosen to have λ t close to 1 so that the measurement error is close to the classical errors-in-variables. Many studies using current earnings to proxy for permanent earnings averaged earnings over years to deal with the measurement error following Solon (1992). Estimates of intergenerational earnings elasticity become larger as fathers' earnings are averaged over more years. As potential fathers are taken from HIES in 1985 and HIES is repeated cross-section data, calculating missing father's average earnings is challenging. In addition, Nybom and Stuhler (2016) provided evidence that changing the age span for sons has more impact on life-cycle bias than changing that of fathers. Thus, sons' earnings are averaged over years, and the results in panel B show that the estimates increase as earnings are averaged over more years. In the base model, all three earnings predictors are used. If one changes the combination of earnings predictors and uses a subset of predictors, sample size increases by only nine, which frees the concern of having a smaller sample size in exchange for having more predictors. Results in panel C indicate that the estimates change from 0.35 to 0.59, suggesting that researchers should pay attention when they choose appropriate predictors. Equation (6) implies that the estimator with a generated regressor is inconsistent if father's earnings predictors are correlated with son's earnings (Cov(x i ,ν it )=0). For example, if the father's education has a positive effect on son's earnings, then the estimator may be upward biased. However, the extent to which other predictors such as father's industry or occupation are correlated with son's earnings is less clear and so is the direction of bias. In addition, first-stage results from Table 4 show that the industry variable explains relatively less variations in earnings than occupation or education does, which could result in a higher \(\hat {\rho }_{1}\) of 0.59. On the other hand, all other estimates that used father's education as a predictor are close to 0.39. For comparison, majority of other countries' studies on intergenerational elasticity with two-sample estimation, documented in Table 5, did not use an industry variable to predict fathers' earnings. However, it is not clear in which direction the estimate would move if an industry variable is included.Footnote 20 Table 4 Choice of father's earnings predictors Table 5 Comparable intergenerational earnings elasticity with two-sample estimation Table 5 summarizes the evidence of intergenerational mobility from 13 other countries that come from two-sample estimation. 
For comparability with the Korea results, the table focuses on the earnings elasticity estimates of father-son pairs and lists the age ranges and sets of predictors used to generate fathers' earnings. While Nybom and Stuhler (2016) pointed out that the bias in elasticity estimates can differ across countries and cohort even if earnings are measured at the same age, we might expect similarities in its broad patterns. The intergenerational elasticity estimate around 0.4 in Korea is similar to that of already developed countries and relatively smaller than recently developed or developing countries. That is, the mobility in Korea is relatively higher than other developing countries (e.g., 0.69 in Brazil and 0.52 in Chile).Footnote 21 Some studies, for instance Piraino (2007) in Italy, investigated the channels in the transmission of economic status and found parental education's contribution to the intergenerational mobility. Korea went through a great expansion in education in the last few decades, and the parent-child schooling correlation among 20–69 sons in 2008 is only 0.333, one of the lowest values according to Hertz et al. (2008).Footnote 22 In particular, approximately 60% of sons in 2008 are educated beyond high schools, whereas about 50% of their fathers have education equal or less than middle school. At the same time, there is a differential probability of attaining post-secondary education degree by father's status. For example, the probability of attaining college or advanced degree is 32 percentage points higher for sons whose fathers are educated more than middle school. As the wage gap between sons with a college or advanced degree and those with no education beyond high school is 100% in 2008, I estimate the role of education as a channel of intergenerational transmission by adding the son's education dummy variables to Eq. (5). The resulting \(\hat {\rho }_{1}=0.196\) suggests that education explains 49% of the observed persistence, which is similar to the previous findings in the USA (Bowles and Gintis 2002; Blanden et al. 2014). Additional analysis shows that intergenerational mobility differs with respect to father's education. In particular, sons whose fathers have an education equal or less than middle school have the highest intergenerational elasticity estimate of 0.415. On the contrary, the elasticity estimate for sons whose fathers have a high school degree is 0.252. Finally, the estimate for sons whose fathers have a college or more advanced degree is 0.193, which indicates the highest intergenerational earnings mobility. The extent to which the differential intergenerational mobility by the father's education translates into the earnings inequality is important for future research. This study examines intergenerational earnings mobility in Korea with the two-sample estimation method to generate the father's missing permanent earnings by combining a panel dataset, which includes the son's earnings and recollection information on the father's socio-demographic characteristics, and a cross-section dataset, which contains earnings and socio-demographic information of potential fathers. Results indicate that the measurement error in sons' current earnings as a proxy for permanent earnings is a source of inconsistency even when fathers' earnings are generated. Thus, the working father-son sample is restricted to age 35–50 to be least affected by the life-cycle bias, and the elasticity estimate is around 0.4. 
Estimated intergenerational earnings elasticity is similar to estimates for some already developed countries and smaller than typical estimates for recently developing countries. Previous studies on Korean intergenerational earnings elasticity tend to have lower estimates than 0.4. Some included younger sons and older fathers in the sample, and those factors contributed to lower estimates. Moreover, focusing on a homogeneous sample of co-residing father-son pairs may result in lower estimates. Ueda (2013) also employed two-sample estimation; however, less attention was paid to detailed matching, as an inaccurate period of observation for the potential fathers' sample was used for imputation.Footnote 23 Thus, this study contributes to more acute estimation of mobility, with two representative samples aiming to match pairs correctly by choosing the right age range for both generations, which better represents permanent earnings. Perhaps one of the most important remaining issues to deal with is the life-cycle bias in Korea. As Nybom and Stuhler (2016) suggested, life-cycle bias will differ quantitatively across countries and cohorts, and small age deviations can cause notable changes in elasticity estimates, which appears to be relevant in the Korean context. For example, male workers in Korea generally have to serve in the army from their late teens, which on average delays labor market participation timing by 3 to 5 years compared to the USA. Since data access to individual earnings histories for multiple generations is limited in Korea, instead of analyzing the framework as in Haider and Solon (2006) or in Nybom and Stuhler (2016), alternative approaches to studying the life-cycle bias in Korea are required in the future. I derive the consistency of OLS estimator \(\widehat {\rho _{1}}\) in Eq. (7), where the dependent variable has a measurement error due to using the proxy and the independent variable is generated from an auxiliary regression. $$ {\kern125pt}y_{it}=\rho_{1}\hat{x}_{i}+\omega_{it} $$ where ω it is equal to \(\lambda _{t}\epsilon _{i}+\nu _{it}+\lambda _{t}\rho _{0}+h(\text {Age}_{it})+(\lambda _{t}-1)\rho _{1}\hat {x}_{i}+\lambda _{t}\rho _{1}(x_{i}-\hat {x}_{i})\). Write Eq. (1) as $$ {\kern125pt}y=x\rho+u $$ where x=f(x 1,θ), x 1 is a vector of variables from the first step that determines the unobservables, f(·), which is a 1×K vector of functions determined by the unknown vector θ, which is Q×1. Assume that \(\mathbb {E}(u|x_{1})=0\) and errors are independent across observations. Further assume that \(\hat {\theta }\) is a \(\sqrt {N}\)-consistent estimator of θ. Now let \(\hat {\rho }\) be the OLS estimator from the equation $$ {\kern125pt}y_{i}=\hat{x}_{i}\rho+\text{error}_{i} $$ where \(\hat {x}_{i}=f\left (x_{1i},\hat {\theta }\right)\) and \(\text {error}_{i}=u_{i}+\left (x_{i}-\hat {x}_{i}\right)\rho \), the ordinary least squares estimator is $$ \hat{\rho}=\left(\sum_{i=1}^{N}\hat{x}_{i}^{'}\hat{x}_{i}\right)^{-1}\left(\sum_{i=1}^{N}\hat{x}_{i}^{'}y_{i}\right) $$ Write \(y_{i}=\hat {x}_{i}\rho +\left (x_{i}-\hat {x}_{i}\right)\rho +u_{i}\), where x i =f(x 1i ,θ), then plugging this in and multiplying through by \(\sqrt {N}\) gives $$ \sqrt{N}\left(\hat{\rho}-\lambda_{t}\rho\right)=\left(N^{-1}\sum_{i=1}^{N}\hat{x}_{i}^{'}\hat{x}_{i}\right)^{-1}\left\{ N^{-1/2}\sum_{i=1}^{N}\hat{x}_{i}^{'}\left[\left(x_{i}-\hat{x}_{i}\right)\lambda_{t}\rho+\xi_{i}\right]\right\} $$ where ξ i =λ t ε i +ν it +λ t ρ 0+h(Age it ). 
Under the regularity condition stated in theorem 1 in Murphy and Topel (1985) or theorem 12.3 in Wooldridge (2010),Footnote 24 a mean value expansion of \(\hat {\theta }\) gives $$ N^{-1/2}\sum_{i=1}^{N}\hat{x}_{i}^{'}\xi_{i}=N^{-1/2}\sum_{i=1}^{N}x_{i}^{'}\xi_{i}+\left[N^{-1}\sum_{i=1}^{N}\nabla_{\theta} f\left(x_{1},\theta\right)^{\prime}\xi_{i}\right]\sqrt{N}\left(\hat{\theta}-\theta\right)+o_{p}(1) $$ Because \(\mathbb {E}\left (\nabla _{\theta }f(x_{1},\theta)^{'}\xi _{i}\right)=0\), it follows that \(N^{-1}\sum _{i=1}^{N}\nabla _{\theta }f(x_{1},\theta)^{'}\xi _{i}=o_{p}(1)\), and since \(\sqrt {N}(\hat {\theta }-\theta)=O_{p}(1)\), $$ {\kern85pt}N^{-1/2}\sum_{i=1}^{N}\hat{x}_{i}^{'}\xi_{i}=N^{-1/2}\sum_{i=1}^{N}x_{i}^{'}\xi_{i}+o_{p}(1) $$ Using similar reasoning, by mean value expansion $$ N^{-1/2}\sum_{i=1}^{N}\hat{x}_{i}^{'}\left(x_{i}-\hat{x}_{i}\right)\lambda_{t}\rho=-\left[N^{-1}\sum_{i=1}^{N}(\rho\otimes x_{i})^{'}\nabla_{\theta}f(x_{1},\theta)\right]\sqrt{N}(\hat{\theta}-\theta)+o_{p}(1) $$ Now assume that $$ {\kern85pt}\sqrt{N}(\hat{\theta}-\theta)=N^{-1/2}\sum_{i=1}^{N}r_{i}(\theta)+o_{p}(1) $$ where I assume \(\mathbb {E}[r_{i}(\theta)]=0\), which even holds for most estimators in nonlinear models.Footnote 25 If I assume that Cov(x i ,h(Age it ))=0, then $$ {\kern85pt}\text{plim}_{n\to\infty}\hat{\rho}=\frac{\lambda_{t}\rho \text{Var}(x_{i})+\text{Cov}(x_{i},\nu_{it})}{\text{Var}(x_{i})} $$ which reduces to λ t ρ if Cov(x i ,ν it )=0. For consistency, replacing x i with \(\hat {x}_{i}\) in an OLS estimation causes no problem as in Wooldridge (2010). Table 6 Father-son age difference Average age difference between fathers and sons in KLIPS 2005 and Census 2005. a Average age difference in the original samples. b Average age difference when the difference between KLIPS 2005 and Census 2005 is corrected Table 7 First-step regression In fact, they further restricted the sample to those sons who moved out to form a new household. This sample selection approach has a potential risk of endogenous sample selection; non-co-residence sons during certain birth years are out of the sample and the way they moved out is endogenous. Moreover, if the average son's age in the sample is older than the average or median home-living son's age, then the sample over-represents sons who left home at late ages. Francesconi and Nicoletti (2006) in the UK found a downward bias of up to 25% in intergenerational elasticity when the sample is restricted to co-residence father-son pairs. See Solon (1992) for details. Earnings vary with observed age, and a life-cycle pattern exists in the correlation between current observed and lifetime earnings, known as life-cycle bias. Studies showed estimates to be sensitive to not only the father's observed age but also the son's age. If, for instance, the son's earnings are observed in the early stage of his career, it causes a downward effect on the estimate. Theoretical and empirical analyses of life-cycle bias are well documented in the USA by Haider and Solon (2006), in Sweden by Böhlmark and Lindquist (2006) and by Nybom and Stuhler (2016), and in Germany by Brenner (2010). The evidence from these studies shows that income measures in the age range between the early-30s and the mid-40s should be least affected by life-cycle bias when dependent variables are proxied. There is no study of life-cycle bias for any Asian countries nor for generated regressors, yet I adopted their results and modified them based on Korean labor market features. 
An alternative way to measure the extent of intergenerational earnings mobility is to estimate intergenerational correlation, κ. $$\kappa=\left(\sigma_{0}/\sigma_{1}\right)\rho_{1} $$ where σ 1 is the standard deviation of a son's log earnings and σ 0 is the same variable for his father. By construction, κ is equal to ρ 1 only if the standard deviations of log earnings are the same for both generations. In a classical errors-in-variables model when λ t =1, the OLS estimate of λ t ρ 1 is unbiased even in the presence of the measurement error in the dependent variable. However, Haider and Solon (2006) showed that λ t varies over a life cycle, which needs not equal to one, and the estimator is biased by a factor of λ t . Also, see Solon (1992) for the attenuation bias when there is a classical measurement error in both the son's and the father's earnings. I impute fathers' missing earnings due to data availability, but the issue of measurement error by using current earnings for long-run earnings is incidental. This two-sample approach is sometimes incorrectly labeled as TS2SLS. However, it is not because not all exogenous second-stage regressors including the son's age variables are included in the first stage in the Eq. (4). In addition, Nybom and Stuhler (2016) provided examples when the unobserved idiosyncratic deviations from average income profiles might correlate within families or with family incomes, i.e., Cov(x it ,ν it )≠0. For example, sons with high-income fathers might acquire more education and have lower initial earnings and steeper slopes of earnings profiles. Thus, the income trajectories of sons from rich and poor families could be different even if individual characteristics are controlled for. Note that ρ 1 in Eq. (3) will not be equal to ρ 1 in Eq. (5) as composite errors differ except for \(x_{i}=\hat {x}_{i}\). One feasible expectation of the magnitude of ρ 1 is that ρ 1 in Eq. (5) would be larger than that in Eq. (3) if there is a positive correlation between fathers' socio-demographic variables and sons' economic status variable; Björklund and Jäntti (1997) and Ueda (2013) used it as an upper bound on the true estimates. Except for fathers' education, however, it is not clear how other fathers' industry or occupation variables can affect sons' earnings. Moreover, the direction of bias is even more questionable when life-cycle bias comes into consideration. Thus, in this study, I do not interpret \(\hat {\rho }_{1}\) in Eq. (5) as an upper bound of \(\hat {\rho }_{1}\) in Eq. (3). Hereafter, the value of ρ 1 is denoted as ρ 1 in Eq. (5). Results indicate that the elasticity estimate is robust to the treatment on the self-employed workers. See Björklund and Jäntti (2009) for more discussion on different income measures and their features. For instance, Björklund and Jäntti (1997) used fathers' education and occupation, Nicoletti and Ermisch (2008) used occupational prestige and education, and Lefranc (2011) used education. Using the average age difference between fathers and sons from the national census, the potential fathers' age range in 1985 is set to 35–50 when the sons were 14, which covers around 95% of the father-son pairs. Appendix: Table 6 demonstrates age differences between fathers and sons, and it is clear that statistics for KLIPS 2005 and National Census 2005 are closely similar; this can be verified easily in Appendix: Figure 1. 
This evidence justifies the use of KLIPS 2008 as a representative sample and restriction of samples based on the age information from KLIPS 2008. In fact, for sons 35–50 in 2008, their possible fathers were 34–68 in 1985; this covers 95% of fathers based on age difference information from census data in 2005. If I match the age range of 35–50 for fathers in 1985, I lose 20% of the sample; however, the estimates are similar. More information is provided in the next section. Between household head and non-head sons, differences exist in earnings and educational attainment. But excluding non-heads and restricting only to heads could be an endogenous selection. Moreover, there is no formal requirement to answer as a head but it is who represents the household. Thus, I included all male workers and presented the results for both samples. In addition, the national unemployment rate in Korea is around 5% in late 1980s and around 3.5% in 2000s, indicating that the excluded unemployed population is not troublesome. Note that estimates of age controls such as age and age squared of fathers are not used to generate fathers' missing earnings. This is because I am not predicting earnings at a particular age but am trying to predict fathers' long-run earnings, which requires the standardization on ages. First, I draw a bootstrap sample of fathers from KLIPS 2003 and run equation (4) to estimate parameters. Then I draw another bootstrap sample of sons from KLIPS 2006, from whose recollections data is used to generate fathers' earnings. I estimate ρ 1 in Eq. (5) and save estimates for 1000 replications. Murphy and Topel (1985) and Pagan (1984) showed that standard two-step procedures not accounting for generated regressor problems underestimate standard errors of the consistent second-step estimators and that corrected standard errors are larger than their uncorrected counterparts. If a researcher ignores the fact that fathers' earnings are generated and uses a bootstrap only in the second step, then standard errors are smaller than our approach, bootstrapping both steps, but still larger than those without bootstrapping in OLS. Key father's earnings predictors are chosen to maximize R 2 of the first-stage regression, and the results are summarized in Table 4. The adjusted R 2 in the first stage, 0.393, is relatively larger than the other studies in Table 5: Piraino (2007) with 0.322, Mocetti (2007) with 0.301, Nicoletti and Ermisch (2008) with 0.289, and Ueda (2013) with 0.23. Preferred first-step regression results are summarized in Appendix: Table 7 with an age range of 35–50 for both generations using all three earnings predictors. If I match the age range of 34–68 for potential fathers in 1985 covering 95% of the father-son pairs, the estimate is 0.397, very similar to the estimate in the baseline model. Thus, hereafter, the age range of fathers in 1985 is fixed at 35–50 instead of 34–68. When self-employed sons are excluded, the sample size decreases to 502, and the estimate is 0.409 (0.064). Further analysis shows that the estimate is robust to the treatment on the self-employed workers. Results are available upon request. In addition, for household heads, the sample size is 572 and \(\hat {\rho }_{1}\) is 0.351 (0.062). Heads earn approximately 15 to 30% more than non-head members, and this might result in a relatively lower estimate. 
If I exclude the agriculture sector in industry and in occupation categories, which mostly considers the sample residing in urban areas, the estimate is 0.337, the lowest among all models. It is reasonable to conjecture that the intergenerational mobility is higher in urban areas than in rural areas, accounting for job opportunities in those areas. Key comparable countries in Table 5 have different age ranges for fathers and sons and different sets of fathers' earnings predictors. Since each country has a different education-, industry-, and occupation structure and history and different worker quality, precise international comparison is more challenged, and no formal statistical test exists for comparison. For simplicity, when I match age ranges and sets of predictors with corresponding countries in Table 5, except for Chile where fathers' age-range information is unavailable, the relative mobility in Korea stay stable. Hertz et al.(2008) documented the international comparison of educational inheritance for sons 20–69. Some noticeable countries in Table 5 are Brazil (0.59), Chile (0.6), China (rural, 0.2), Italy (0.54), Sweden (0.4), UK (0.31), and USA (0.46). Real GDP per capita in Korea increased more than three times between 1985 and 2003, implying that the potential fathers' cohort in 1985, which is more proximal to actual fathers, is different from the cohorts in 2003. (a) \(D_{0}\equiv {\text {plim}}_{n\to \infty }N^{-1}\sum _{i=1}^{N}\hat {x}_{i}^{'}\hat {x}_{i}=\mathbb {E}(x'x)\), (b) f(·) is twice continuously differentiable in θ for each x 1 with the sample second moments of ∂ f/∂ θ uniformly bounded in the sense of \(\text {plim}_{n\to \infty }\left (N^{-1}\sum _{i=1}^{N}\hat {x}_{i}^{'}\hat {x}_{i}\right)\left [N^{-1}\sum _{i=1}^{N}\nabla _{\theta }f(x_{1},\theta)\xi _{i}\right ]=D_{1}\), where ∇ θ f(x 1,θ) is the K×Q Jacobian of \(\phantom {\dot {i}\!}f(x_{1},\theta)^{'}\), and (c) \(\hat {\theta }\) is a consistent estimator of θ. See chapters 6 and 12 in Wooldridge (2010) for details. Björklund, A, Jäntti M. Intergenerational income mobility and the role of family background. In: Oxford Handbook of Economic Inequality. Oxford: Oxford University Press: 2009. p. 491–521. Björklund A, Jäntti M. Intergenerational income mobility in Sweden compared to the United States. Am Econ Rev. 1997; 87(5):1009–18. Black, SE, Devereux P. Recent developments in intergenerational mobility. Handb Labor Econ. 2010. Blanden, J, Haveman R, Smeeding T, Wilson K. Intergenerational mobility in the United States and Great Britain: A comparative study of parent-child pathways. Rev Income Wealth. 2014; 60(3):425–49. Böhlmark, A, Lindquist MJ. Life-cycle variations in the association between current and lifetime income: replication and extension for Sweden. J Labor Econ. 2006; 24(4):879–96. Bowles, S, Gintis H. The inheritance of inequality. J Econ Perspect. 2002; 16(3):3–30. Brenner, J. Life-cycle variations in the association between current and lifetime earnings: evidence for German natives and guest workers. Labour Econ. 2010; 17(2):392–406. Cervini-Plá M. Exploring the sources of earnings transmission in Spain. Hacienda pública española:45–66. 2013. Choi, J, Hong GS. An analysis of intergenerational earnings mobility in Korea: father-son correlation in labor earnings. Korean Soc Secur Stud. 2011; 27(3):143–63. Dunn, CE. The intergenerational transmission of lifetime earnings: evidence from Brazil. BE J Econ Anal Policy. 2007; 7(2). Fortin, NM, Lefebvre S. 
Fortin NM, Lefebvre S. Intergenerational income mobility in Canada. In: Labour Markets, Social Institutions, and the Future of Canada's Children; 1998. p. 89–553.
Francesconi M, Nicoletti C. Intergenerational mobility and sample selection in short panels. J Appl Econ. 2006; 21(8):1265–93.
Gong H, Leigh A, Meng X. Intergenerational income mobility in urban China. Rev Income Wealth. 2012; 58(3):481–503.
Grawe ND. Lifecycle bias in estimates of intergenerational earnings persistence. Labour Econ. 2006; 13(5):551–70.
Haider S, Solon G. Life-cycle variation in the association between current and lifetime earnings. Am Econ Rev. 2006; 96:1308–20.
Hertz T, Jayasundera T, Piraino P, Selcuk S, Smith N, Verashchagina A. The inheritance of educational inequality: international comparisons and fifty-year trends. BE J Econ Anal Policy. 2008; 7(2).
Kim H. An analysis of intergenerational economic mobility in Korea. Korea Development Institute; 2009.
Republic of Korea, Ministry of Education. Educational development in Korea 1988–1990; 1991. Report for the International Bureau of Education, Geneva. Seoul: Korean National Commission for UNESCO.
Lefranc A. Educational expansion, earnings compression and changes in intergenerational economic mobility: evidence from French cohorts, 1931–1976. 2011. Unpublished manuscript, University of Cergy.
Lefranc A, Ojima F, Yoshida T. Intergenerational earnings mobility in Japan among sons and daughters: levels and trends. J Popul Econ. 2014; 27(1):91–134.
Leigh A. Intergenerational mobility in Australia. BE J Econ Anal Policy. 2007; 7(2).
Mocetti S. Intergenerational earnings mobility in Italy. BE J Econ Anal Policy. 2007; 7(2).
Murphy KM, Topel RH. Estimation and inference in two-step econometric models. J Bus Econ Stat. 1985; 3:370–9.
Nicoletti C, Ermisch J. Intergenerational earnings mobility: changes across cohorts in Britain. BE J Econ Anal Policy. 2008; 7(2).
Núñez J, Miranda L. Intergenerational income and educational mobility in urban Chile. Estud Econ. 2011; 38(1):196–221.
Nybom M, Stuhler J. Heterogeneous income profiles and lifecycle bias in intergenerational mobility estimation. J Hum Resour. 2016; 51(1):239.
Pagan A. Econometric issues in the analysis of regressions with generated regressors. Int Econ Rev. 1984; 25:221–47.
Piraino P. Comparable estimates of intergenerational income mobility in Italy. BE J Econ Anal Policy. 2007; 7(2).
Solon G. Cross-country differences in intergenerational earnings mobility. J Econ Perspect. 2002; 16:59–66.
Solon G. Intergenerational mobility in the labor market. Handb Labor Econ. 1999.
Solon G. Intergenerational income mobility in the United States. Am Econ Rev. 1992; 82:393–408.
Ueda A. Intergenerational mobility of earnings in South Korea. J Asian Econ. 2013:1–22.
Ueda A, Sun F. Intergenerational economic mobility in Taiwan. Waseda University Working Paper; 2012.
Wooldridge J. Econometric analysis of cross section and panel data. MIT Press; 2010.
I would like to thank Gary Solon, Steven Haider, and Chris Ahlin for helpful comments and sharing their insights. I am also grateful for comments and suggestions from seminar participants at the Canadian Economics Association, Midwest Economics Association, and Michigan State University. I would also like to thank the anonymous referee and the editor for the useful remarks.
Responsible editor: David Lam
Soobin Kim, College of Education, Michigan State University, 620 Farm Lane, Room 516, Erickson Hall, East Lansing, MI 48824-1038, USA
Correspondence to Soobin Kim.
Kim, S. Intergenerational mobility in Korea. IZA J Develop Migration 7, 21 (2017). doi:10.1186/s40176-017-0104-4
Keywords: Intergenerational earnings mobility; Generated regressor; Two-sample estimation
CommonCrawl
# Vectors and matrices in 3D space To begin with 3D graphics, it's essential to understand vectors and matrices in 3D space. Vectors are mathematical objects that have both magnitude and direction, while matrices are arrays of numbers used to represent linear transformations. In 3D space, a vector can be represented as a column matrix with three rows and one column. For example, a vector $\vec{v}$ in 3D space can be represented as: $$ \vec{v} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} $$ Matrices in 3D space are used to represent linear transformations, such as scaling, rotation, and translation. These transformations can be applied to vectors to create new vectors with modified properties. Consider a scaling transformation matrix $S$ that scales a vector by a factor of 2: $$ S = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} $$ Applying this matrix to the vector $\vec{v}$ results in a new vector $\vec{v}'$ that is twice as long: $$ \vec{v}' = S \vec{v} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2x \\ 2y \\ 2z \end{bmatrix} $$ ## Exercise Apply the scaling transformation matrix $S$ to the vector $\vec{v} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. Calculate the new vector $\vec{v}'$. # Chain rule and its applications The chain rule is a fundamental concept in calculus that allows us to compute the derivative of a composite function. In 3D graphics, the chain rule is used to compute the gradient and Jacobian of functions that map from 3D space to a scalar value. For example, consider a function $f(x, y, z)$ that maps from 3D space to a scalar value. The gradient of $f$ is a vector that represents the direction of the steepest ascent of the function at a given point. The gradient can be computed using the chain rule as follows: $$ \nabla f = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \\ \frac{\partial f}{\partial z} \end{bmatrix} $$ The Jacobian of $f$ is a matrix that represents the partial derivatives of $f$ with respect to its arguments. It is used in 3D graphics to compute the transformation of a vector field under a linear transformation. Consider the function $f(x, y, z) = x^2 + y^2 + z^2$. The gradient of $f$ at the point $\vec{p} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ is: $$ \nabla f(\vec{p}) = \begin{bmatrix} \frac{\partial f}{\partial x}(\vec{p}) \\ \frac{\partial f}{\partial y}(\vec{p}) \\ \frac{\partial f}{\partial z}(\vec{p}) \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} $$ ## Exercise Compute the gradient of the function $f(x, y, z) = x^2 + y^2 + z^2$ at the point $\vec{p} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. # Linear transformations and their properties Linear transformations are functions that map vectors to vectors while preserving the operations of addition and scalar multiplication. In 3D graphics, linear transformations are used to manipulate objects in space, such as scaling, rotation, and translation. For example, consider a scaling transformation that scales a vector by a factor of 2. This transformation can be represented as a matrix: $$ S = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} $$ Properties of linear transformations include: - Additivity: $T(u + v) = T(u) + T(v)$ - Homogeneity: $T(\alpha u) = \alpha T(u)$ - Preservation of the zero vector: $T(0) = 0$ ## Exercise Show that the scaling transformation matrix $S$ satisfies the properties of linear transformations. 
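As an aside not in the original text, the scaling example and the three properties of a linear transformation can be checked numerically. The short NumPy sketch below does exactly that; the specific vectors `u`, `v` and the scalar `alpha` are arbitrary choices.

```python
import numpy as np

# Scaling transformation from the example above: every component is doubled.
S = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

v = np.array([1.0, 2.0, 3.0])
u = np.array([-1.0, 0.5, 4.0])
alpha = 3.0

# Applying the transformation: v' = S v
print(S @ v)                                          # [2. 4. 6.]

# Additivity: T(u + v) == T(u) + T(v)
print(np.allclose(S @ (u + v), S @ u + S @ v))        # True

# Homogeneity: T(alpha * u) == alpha * T(u)
print(np.allclose(S @ (alpha * u), alpha * (S @ u)))  # True

# Preservation of the zero vector: T(0) == 0
print(np.allclose(S @ np.zeros(3), np.zeros(3)))      # True

# Gradient of f(x, y, z) = x^2 + y^2 + z^2 at p = (1, 2, 3),
# matching the chain-rule example: grad f = (2x, 2y, 2z).
p = np.array([1.0, 2.0, 3.0])
print(2.0 * p)                                        # [2. 4. 6.]
```

Checking the properties on a couple of vectors is a sanity check, not a proof; the exercise above still asks for the general argument.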
# Matrix operations and their applications in 3D graphics Matrix operations are essential in 3D graphics for manipulating vectors and performing linear transformations. Some common matrix operations include matrix addition, matrix multiplication, and matrix inversion. For example, consider two matrices $A$ and $B$ with the same dimensions. Matrix addition is performed element-wise, while matrix multiplication is performed by multiplying the corresponding rows of $A$ with the corresponding columns of $B$. Let $A$ be the matrix: $$ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} $$ Let $B$ be the matrix: $$ B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} $$ The matrix addition $A + B$ is: $$ A + B = \begin{bmatrix} 1 + 5 & 2 + 6 \\ 3 + 7 & 4 + 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix} $$ The matrix multiplication $AB$ is: $$ AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix} $$ ## Exercise Perform matrix addition and matrix multiplication on the matrices $A$ and $B$ as shown in the example. # Partial derivatives and their applications in 3D space Partial derivatives are used in 3D graphics to compute the gradient and Jacobian of functions that map from 3D space to a scalar value. The gradient is a vector that represents the direction of the steepest ascent of the function at a given point, while the Jacobian is a matrix that represents the partial derivatives of the function with respect to its arguments. For example, consider the function $f(x, y, z) = x^2 + y^2 + z^2$. The gradient of $f$ at the point $\vec{p} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ is: $$ \nabla f(\vec{p}) = \begin{bmatrix} \frac{\partial f}{\partial x}(\vec{p}) \\ \frac{\partial f}{\partial y}(\vec{p}) \\ \frac{\partial f}{\partial z}(\vec{p}) \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} $$ The Jacobian of $f$ is a matrix that represents the partial derivatives of $f$ with respect to its arguments: $$ J_f(\vec{p}) = \begin{bmatrix} \frac{\partial f}{\partial x}(\vec{p}) & \frac{\partial f}{\partial y}(\vec{p}) & \frac{\partial f}{\partial z}(\vec{p}) \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 \end{bmatrix} $$ ## Exercise Compute the gradient and the Jacobian of the function $f(x, y, z) = x^2 + y^2 + z^2$ at the point $\vec{p} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. # Vector calculus and its applications in 3D graphics Vector calculus is a branch of calculus that deals with vector-valued functions and their derivatives. In 3D graphics, vector calculus is used to compute the gradient and curl of vector fields, which are essential for simulating physical phenomena such as fluid flow and electromagnetic fields. For example, consider a vector field $\vec{F}(x, y, z) = \begin{bmatrix} F_x(x, y, z) \\ F_y(x, y, z) \\ F_z(x, y, z) \end{bmatrix}$. 
The Jacobian of $\vec{F}$ (often written $\nabla \vec{F}$) is the matrix of partial derivatives of the components of $\vec{F}$ with respect to their arguments: $$ \nabla \vec{F} = \begin{bmatrix} \frac{\partial F_x}{\partial x} & \frac{\partial F_x}{\partial y} & \frac{\partial F_x}{\partial z} \\ \frac{\partial F_y}{\partial x} & \frac{\partial F_y}{\partial y} & \frac{\partial F_y}{\partial z} \\ \frac{\partial F_z}{\partial x} & \frac{\partial F_z}{\partial y} & \frac{\partial F_z}{\partial z} \end{bmatrix} $$ ## Exercise Compute the Jacobian of the vector field $\vec{F}(x, y, z) = \begin{bmatrix} x^2 + y^2 \\ y^2 + z^2 \\ z^2 + x^2 \end{bmatrix}$ at the point $\vec{p} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. # 3D transformations and their applications in computer graphics 3D transformations are used in computer graphics to manipulate objects in space, such as scaling, rotation, and translation. These transformations can be represented as matrices and applied to vectors to create new vectors with modified properties. For example, consider a scaling transformation that scales a vector by a factor of 2. This transformation can be represented as a matrix: $$ S = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} $$ Applying this matrix to the vector $\vec{v} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ results in a new vector $\vec{v}'$ that is twice as long: $$ \vec{v}' = S \vec{v} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} $$ ## Exercise Apply the scaling transformation matrix $S$ to the vector $\vec{v} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. Calculate the new vector $\vec{v}'$. # Applications of multivariable calculus in 3D graphics pipelines Multivariable calculus is essential in 3D graphics pipelines, where it is used to compute the gradients and Jacobians of functions defined over 3D space. These concepts are used to simulate physical phenomena such as fluid flow and electromagnetic fields, and to optimize rendering algorithms. For example, consider the rendering of a 3D object using the Phong reflection model. The Phong model computes the intensity of light reflected from the object's surface, which depends on the angle between the light direction, the view direction, and the surface normal. The surface normal itself comes from calculus: for a surface defined implicitly by $f(x, y, z) = 0$, the normal at a point is parallel to the gradient $\nabla f$ evaluated there. ## Exercise Explain the role of the surface normal in the Phong reflection model, and show how it can be obtained as the normalized gradient of an implicit surface function $f(x, y, z) = 0$. # Optimization and approximation techniques in 3D graphics Optimization and approximation techniques are essential in 3D graphics for simulating physical phenomena and rendering objects efficiently. They rely on the same derivative information introduced above: gradients and Jacobians of functions defined over 3D space. For example, when an analytic gradient is unavailable or expensive to evaluate, a surface normal can be approximated numerically with central finite differences, trading a small truncation error for simplicity; similarly, iterative optimization methods use gradient information to adjust scene or material parameters so that a rendered image matches a target as closely as possible. ## Exercise Explain how a central finite-difference approximation can be used to estimate $\nabla f$ at a point when $f$ can only be evaluated, not differentiated analytically.
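To complement the Jacobian and surface-normal discussion above, the following NumPy sketch (an illustrative addition; the step size `h` and the sphere example are arbitrary choices) approximates the Jacobian of the example vector field by central finite differences, compares it with the analytic answer, and computes a surface normal as the normalized gradient of an implicit sphere function.

```python
import numpy as np

def F(p):
    """Vector field from the exercise: F(x, y, z) = (x^2+y^2, y^2+z^2, z^2+x^2)."""
    x, y, z = p
    return np.array([x**2 + y**2, y**2 + z**2, z**2 + x**2])

def numerical_jacobian(f, p, h=1e-5):
    """Central finite-difference approximation of the Jacobian of f at p."""
    p = np.asarray(p, dtype=float)
    m, n = f(p).size, p.size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2.0 * h)
    return J

p = np.array([1.0, 2.0, 3.0])
print(numerical_jacobian(F, p))
# Analytic Jacobian for comparison:
# [[2x, 2y,  0],       [[2, 4, 0],
#  [ 0, 2y, 2z],   =    [0, 4, 6],
#  [2x,  0, 2z]]        [2, 0, 6]]   at p = (1, 2, 3)

# Surface normal of the implicit sphere f(x, y, z) = x^2 + y^2 + z^2 - r^2 = 0:
# the gradient is (2x, 2y, 2z), so the unit normal is the normalized position.
def unit_normal(q):
    g = 2.0 * np.asarray(q, dtype=float)
    return g / np.linalg.norm(g)

print(unit_normal([0.0, 0.0, 2.0]))   # [0. 0. 1.]
```

The finite-difference version is exactly the kind of approximation mentioned in the optimization section: it needs only function evaluations, at the cost of extra evaluations and a small truncation error.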
# Real-world examples of multivariable calculus in 3D graphics Multivariable calculus is used in real-world applications of 3D graphics, such as computer-aided design (CAD), computer-aided manufacturing (CAM), and virtual reality (VR). These applications require the computation of the gradient and Jacobian of functions that map from 3D space to a scalar value, and the manipulation of objects in space using linear transformations. For example, in CAD, the gradient of a function that maps from 3D space to a scalar value can be used to compute the surface normal at each point in space. This information is essential for rendering objects with realistic lighting and shading. ## Exercise Describe how the gradient of a function that maps from 3D space to a scalar value is used in CAD to compute the surface normal at each point in space. # Course Table Of Contents 1. Vectors and matrices in 3D space 2. Chain rule and its applications 3. Linear transformations and their properties 4. Matrix operations and their applications in 3D graphics 5. Partial derivatives and their applications in 3D space 6. Vector calculus and its applications in 3D graphics 7. 3D transformations and their applications in computer graphics 8. Applications of multivariable calculus in 3D graphics pipelines 9. Optimization and approximation techniques in 3D graphics 10. Real-world examples of multivariable calculus in 3D graphics
Textbooks
Zoll surface In mathematics, particularly in differential geometry, a Zoll surface, named after Otto Zoll, is a surface homeomorphic to the 2-sphere, equipped with a Riemannian metric all of whose geodesics are closed and of equal length. While the usual unit-sphere metric on S2 obviously has this property, it also has an infinite-dimensional family of geometrically distinct deformations that are still Zoll surfaces. In particular, most Zoll surfaces do not have constant curvature. Zoll, a student of David Hilbert, discovered the first non-trivial examples. See also • Funk transform: The original motivation for studying the Funk transform was to describe Zoll metrics on the sphere. References • Besse, Arthur L. (1978), Manifolds all of whose geodesics are closed, Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 93, Springer, Berlin, doi:10.1007/978-3-642-61876-5 • Funk, Paul (1913), "Über Flächen mit lauter geschlossenen geodätischen Linien", Mathematische Annalen, 74: 278–300, doi:10.1007/BF01456044 • Guillemin, Victor (1976), "The Radon transform on Zoll surfaces", Advances in Mathematics, 22 (1): 85–119, doi:10.1016/0001-8708(76)90139-0 • LeBrun, Claude; Mason, L.J. (July 2002), "Zoll manifolds and complex surfaces", Journal of Differential Geometry, 61 (3): 453–535, doi:10.4310/jdg/1090351530 • Zoll, Otto (March 1903). "Über Flächen mit Scharen geschlossener geodätischer Linien". Mathematische Annalen (in German). 57 (1): 108–133. doi:10.1007/bf01449019. External links • Tannery's pear, an example of Zoll surface where all closed geodesics (up to the meridians) are shaped like a curved-figure eight.
Wikipedia
A symbolic model checking approach in formal verification of distributed systems
Alireza Souri ORCID: orcid.org/0000-0001-8314-90511 na1, Amir Masoud Rahmani1 na1, Nima Jafari Navimipour2 na1 & Reza Rezaei3 na1
Model checking is an influential method for verifying complex interactions in concurrent and distributed systems. It constructs a behavioral model of the system using formal concepts such as operations, states, events and actions. Model checkers suffer from weaknesses such as the state space explosion problem, which entails high memory consumption and time complexity. Automating the temporal logic needed to define critical specification rules is another central challenge in model checking. To address these weaknesses, this paper presents the Graphical Symbolic Modeling Toolkit (GSMT) for designing and verifying the behavioral models of distributed systems. A behavioral modeling framework is presented for designing system behavior in the form of a Kripke structure (KS) or a Labeled Transition System (LTS). The behavioral models are created and edited through a graphical user interface organized in four layers: a design layer, a modeling layer, a logic layer and a symbolic code layer. The GSMT generates a graphical modeling diagram for the behavioral models of the system, and temporal logic formulas are constructed automatically from a set of functional properties. Executable code is generated for the symbolic model verifier, and the user can choose either the original model or a reduced model produced by a recursive reduction approach. Finally, the generated code is executed in the NuSMV model checker to evaluate the constructed temporal logic formulas. The code generation time for transforming the behavioral model is compared with that of other model checking platforms, and the proposed GSMT platform outperforms them in this evaluation.
Today, distributed systems comprise increasingly complex components [1, 2]. As the scale of complex systems such as service composition [3], task scheduling [4] and fault tolerance [5] grows, simulation analysis cannot evaluate all levels of the system [6, 7]. Moreover, simulation results are tied to particular design-under-test platforms [8] and therefore omit part of the system's state space [9]. Formal verification is a mathematically provable correctness approach for complex distributed systems and is well suited to NP-hard problems [10, 11]. Recent studies have analyzed their case studies using mathematical verification approaches such as model checking [4, 12,13,14,15,16,17,18], process algebra [19,20,21,22,23,24], formal concept analysis [25] and theorem proving [26,27,28,29]. Among these approaches, model checking [30] is a well-known verification technique for evaluating the functional properties of a distributed system automatically [31, 32]. The main goal of model checking is to find property violations and limitations in the system behavior and to report them through counterexamples [33]. However, model checking has limitations of its own, such as the state space explosion and the difficulty of designing temporal logic specifications [34]. To mitigate these limitations, symbolic model checking [35, 36] based on Binary Decision Diagrams (BDDs) was introduced by McMillan [34]. Industrial tools such as NuSMV [37], PAT [38], Spin [39], and UPPAAL [40] are well known for analyzing the correctness of system behavior [41,42,43].
But, these tools have some limitations such as weak graphical user interface, the complexity of programming language and generating the automated temporal specification rules for verifying the system behavior [44,45,46,47]. To illustrate the temporal logic formulas, some model checkers such as NuSMV have supported the generated specification rules in forms of Linear Temporal Logic (LTL) and Computation Tree Logic (CTL) [17, 37, 48,49,50]. Also, creating a critical specification rule for checking in the generated state space of the system behavior is an important challenge for model checkers. When the state space increases exponentially, checking and discovering the critical specification rules to measure the correctness of the system is confused [51, 52]. As yet, model checkers do not guarantee automated specification rules generation [53, 54]. In addition, a model checker needs to automated formal design that supports the Kripke structure (KS) and Labeled Transition System (LTS) modeling methods. In model checking, some characteristic points consolidate an irrefrangible relationship between integrated abstract model and the concrete system behavior. The characteristic points include specifying descriptive features, designing precise model, configuring desired feature selection, and generating comprehensive specification rules. This relationship is confident that the correctness of the integrated abstract model using model checking is very reliable to evaluate concrete system behavior. If we emphasize some characteristic points for designing and modeling system behavior [55], then the accurate verification results are obtained from model checking. This paper presents an easy to use and user-friendly Graphical Symbolic Modeling Toolkit (GSMT) to simplify model checking the system behavior. We advocate the use of fully automated designing methods to check the correctness of the system behavior. The refinement of design, modeling and verification levels lead the behavior correctness procedure to increase the accuracy. An integrated architecture is also designed for each level according to the simple relationship among the existing objects of the proposed framework. This framework not only follows the contributions of the existing model checkers but also adds some important points to verify the system behavior using model checking. The contributions of this research are as follows: Presenting a graphical model checking framework to facilitate the system behavior design. Providing a modeling platform to support the KS and LTS models. Generating the LTL and CTL specification rules of the system model according to the functional properties such as deadlock, reachability, and safety conditions. Presenting a high-level order of recursive reduced Kripke and labeled models to ameliorate the state space explosion problem. Facilitating the verification procedure using NuSMV. The paper structure is organized as follows, "Related work" section illustrates a brief review of the presented related frameworks and toolsets. In "GSMT framework" section, we address a conceptual explanation of the GSMT framework. Also, this section introduces the current four layers in the automated verification approach. Moreover, the formal descriptions of the system behavior are illustrated to handle the model checking the specification rules. 
"Experimental analysis" depicts a descriptive case study to evaluate the verification procedure for the proposed framework with the other approaches according to some experimental results. Finally, "Conclusion and future work" provides the conclusion and some open subjects on this topic as the future works. In this section, some related studies are discussed briefly which contain modeling and descriptive translators and automated verification frameworks according to some important features and challenges. Castelluccia et al. [56] presented a formal framework to design web applications according to the UML method. The key feature of this framework is based on LTS model checking and CTL formulas. First, a design of the model is generated in forms of the UML-based platform with the XMI format. Then, the framework translated the proposed UML-based platform to the extensible SMV codes. Li et al. [54] proposed a translator framework to exchange Programmed Logic Controllers (PLC) for executable verification codes using utility block chart language. The framework presented a formal modeling approach to specifying the model structure using a Boolean explanation method. The model is translated to some modules of SMV codes. This translator supports just CTL formulas to embed in code generation. Designing the model structure is not automatic because the extensibility of the model checking approach is covered. Also, this framework supports a command-line authentication to avoid invalid inputs according to its powerful editor environment. The main disadvantages of this framework are as follows: the requirement patterns as the specification rules are input manually; the LTL formulas are not supported; the framework has not illustrated the correctness of the functional properties such as reachability and deadlock. Abdelsadiq [57] presented a high-level modeling framework for Contractual Business-to-Business relations (CB2B) to apply e-contract models in the e-business management system. The CB2B models support a set of the conceptual model that includes truths, actions, responsibilities and exclusions for checking contract agreement. First, the designed model translated to Event–Condition–Action (ECA) structure according to Process Metalanguage (Promela) language. Then, a set of simple LTL formulas is generated manually. Both temporal specifications and ECA model are translated to executable codes for the Spin model checker. The main limitations of this framework are as follows: (1) the design level of the formal modeling is omitted; (2) specification rules are very simple; (3) an editable platform for user interface has not been indicated. Caltais et al. [58] proposed a framework conversion to interact between the System Modelling Language (SysML)-based models and NuSMV symbolic model checker. The SysML-Ja is a toolset that translates the structural SysML-based models in forms of block diagrams and state diagrams to symbolic modules of SMV codes. This translation is retrieved from the LTS model by some events and actions. The relationships between each block/state diagram are converted to a transition command in SMV code. Some specification rules are input at the end of the SMV codes manually. There are some limitations in this framework as follows: (1) the generation of specification rules has not been considered in the structure of the framework; (2) the graphical modeling stage is omitted in this framework. Furthermore, Deb et al. 
[59] have presented an inherent sequence state transition modeling transformation framework for concurrent systems. They used the Naive algorithm to handle the rise of the state space. First, requirements are translated to the LTS model with respect to a set of sequences states. In the editor environment, the LTS model is converted according to the Multi-dimensional Lattice Paths (MLP) to the SMV codes. The framework can add a simple CTL formula to the generated SMV code to verify it. However, when a large model is loaded in this framework, the state space has been increased highly. When the system behavior has a multi-tenant structure, the translated modules cannot interact with them by transition methods. In addition, the functional properties have not been verified in this framework using NuSMV. Meenakshi et al. [60] have presented a converter environment between Simulink models and input language of model checkers automatically. The system engineers can develop the structural models in Simulink environments such as MATLAB informally. Hence, this converter tool can be useful to transform the Simulink model as the input to a formal description approach in forms of NuSMV model checker codes. The proposed tool covers all of the block diagrams that organize the structural model of the Simulink. There are some limitations in this tool compared with the other instruments: the LTL specifications are not considered in this tool to translate into SMV codes; a graphical modeling diagram is not illustrated to avoid the state space examination. In addition, the practical feature of this model does not support a complex industrial model for translating to the SMV codes. Vinárek et al. [61] proposed a translator framework between use case models and NuSMV model checker. The authors described a formal explanation of the Formal verification of Annotated Models (FOAM) framework using a user/actor model. The use case model is converted to a textual behavior automaton based on a priority connection. The textual behavior automaton is translated to a configurable LTS model [62]. The main disadvantages of this tool are as follows: first, this translator has not the editor environment to illustrate code generation; second, this tool has not covered the LTL specifications for checking the correctness of the use case models. There is just a demo environment for this tool rather than a practical translator environment. Szwed [63] presented a translator plugin to convert a business model to executable model checking code. This plugin specifies all of the direct elements of the business model that connect with each symbolic state in the business layer. In translation procedure, a set of the business processes are specified as the atomic states and the business tasks are specified as the events. The CTL formulas are added by the user manually. A graphical model is presented after translating SMV codes. Some limitations of this plugin are as follow: The verification method is executed without any correctness procedure; also, the LTL formulas are not supported. However, the execution time and reachability states are not compared with the other frameworks. Jiang and Qiu [64] have proposed an Spin2NuSMV (S2N) converter framework between Spin models and NuSMV codes. This framework presents a conversion procedure for transforming a high-level model in forms of Promela language into a low-level model as a state transition system in SMV code. 
Each process in the Spin model has been translated to a state with events coverage asynchronously. However, this framework cannot support the temporal logic transformation since NuSMV covers both LTL and CTL logics and Spin just generates LTL logic in the opposite. In addition, when a complex model is transformed into the SMV codes, some channels connection between processes are omitted. Szpyrka et al. [65] have presented a translator framework to convert state graph of a colored Petri-net model to an executable SMV code. Each net is converted to a state and each guard is transformed into an atomic proposition. The translated model is shown in forms of a Kripke model in NuSMV. A graphical reachability graph is generated after the translation procedure that is very confused and irregular. Also, the translated model is not displayed as a graphical model. This tool has a simple environment that imports a Petri-net model and translates to the SMV code in editor environment. The temporal logic formulas are added to end of the code manually. Also, the timed-Petri-net models cannot translate to SMV codes. According to the discussed and reviewed translator frameworks in model checking approach, the comparison of the related frameworks has illustrated in Table 1. The main factors of this view include existed case study, the modeling method, design method, temporal logic provision, and model checker interaction. All of the translator frameworks added the temporal specifications to the SMV code manually. Our presented framework generates all of the temporal logics in forms of the embedded specification rules in SMV code. In addition, NuSMV supports two temporal logics to design the specification rules of the system. Table 1 Comparison of the related frameworks according to the verification structure To the best of our knowledge, all related frameworks proposed a translator to provide both code generation/execution. Also, editor platforms support just one modeling template such as LTS or KS and one temporal logic formula for the system behavioral model. At complementing with them, our GSMT framework presents (1) automated design approach for formal descriptions of the system, (2) a compositional behavioral modeling for system behavior in forms of LTS and KS models, (3) generating the visual model diagram of the designed behavior, (4) constructing detailed temporal logic formulas in terms of CTL and LTL, and (5) symbolic automated verification approach using NuSMV. GSMT framework This section provides a conceptual description of the proposed framework with some key explanations. The important feature of the GSMT is its flexible modeling and checking capability that represents the common collaboration between two main steps of the formal verification approach. This flexibility is the prominent point of a translator framework that supports all technical features of the behavioral correctness of a complex system. In this section, the framework architecture is explained comprehensively. Also, the presented recursive reduction approach is illustrated in this section. GSMT behavioral models The GSMT navigates the behavioral model to a complete design, actual modeling, and automated translation approach. Figure 1 displays a conceptual architecture of GSMT. The GSMT architecture includes four dependent layers as follow: design, modeling, logic and symbolic code. After designing the proposed model, a behavioral model is constructed by the framework. The behavioral model is translated to an LTS or KS model. 
The translated model can get two results for converting to the final SMV code that includes the original model and reduced model. Concurrently in the logic layer, the specification rules are generated automatically. Then, the final generated code is executed in NuSMV to check the generated specification rules automatically. The GSMT architecture Design layer is an interactive level to navigate the fundamental of behavioral model features. This layer has performed following three obligations: Specifying design type of the behavioral model in forms of KS or LTS. Creating the structural features of the behavioral model such as states, actions, and atomic propositions. Creating the system exploration according to the relationship between the features. Figure 2 illustrates a flowchart diagram that describes the design layer in the GSMT framework. First, the design method is specified for constructing a behavioral model. Depending on the state-based or action-based model checking approaches, two methods can be chosen for this procedure in terms of KS and LTS. When the design method is specified, the basic features of the behavioral model such as states, transitions and actions should be initialized. We address a formal description of the existing methods briefly. The flowchart diagram of the design layer For the KS model, there are some features according to Kripke structure definition [66, 67]. The method is a state-based framework and the states are labeled with a name. The user can input a set of states and atomic propositions for the initialization section. A Kripke structure is a five-tuple KS = (Q, I, P, R, L) where [68]: Q is a set of states. I is the set of initial states: \(I \in Q\). P is a set of atomic propositions. R is a set of transition relations \(R \subseteq Q \times Q\). L is a state labeling function \({\text{L}}:Q \to 2^{\text{p}}\). In the above definition, a path can be defined on the behavioral model as follow: A Kripke path KP is a finite sequence of the states and transitions starting from the state q1 and finishing at the state qn that \(\left( {q_{1} \;\;{\text{and}}\;\;q_{n} \in Q,\;\;\; {\text{p}} \in P} \right)\) denoted as [69]: $$\varvec{KP} = q_{1} \left( {p_{1} } \right) \to q_{2} \left( {p_{2} } \right) \to q_{3} \left( {p_{3} } \right) \ldots q_{n - 1 } \left( {p_{n - 1} } \right) \to q_{n} \left( {p_{n} } \right) \;\;{\text{such}}\;\;{\text{that}}\;\;\forall \left( {{\text{i}}, {\text{j}}} \right){:} \ \left( {q_{i} , p_{i} } \right) \in L \;\;{\text{and}}\;\;\; \left( {q_{\text{i}} , q_{\text{j}} } \right) \in R.$$ In the next method, the model is constructed as an LTS model that is the event-based framework and the transitions are labeled with a name [70, 71]. The user can initialize a set of states and actions to design the behavioral model. A Labeled Transition System LT is a 4-tuple LT = (S, M, A, T) where: S is a set of states. M is the set of initial state: \(M \in S\). A is a set of actions. T is a total transition relation: \(T \subseteq S \times A \times S\). This means, the relation \(s_{1} \mathop \to \limits^{a} s_{2} \left( {s_{ 1} , \;s _{2} \in S\;\;{\text{and}}\;\; {\text{a}} \in A} \right)\) is used for stating that \(\left( {s_{1} ,\;{\text{a}},\; s_{2} } \right) \in T\). 
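The definitions above map directly onto simple data structures. The following Python sketch is an illustrative aside and is not part of the GSMT implementation (whose input language is SMV); it encodes a Kripke structure and a labeled transition system as in Definitions 1 and 3, using a toy example, and checks whether a sequence of states forms a Kripke path in the sense of Definition 2.

```python
from dataclasses import dataclass

@dataclass
class KripkeStructure:
    """KS = (Q, I, P, R, L) as in Definition 1."""
    states: set      # Q
    initial: set     # I, a subset of Q
    props: set       # P, the atomic propositions
    trans: set       # R, pairs (q, q') drawn from Q x Q
    labels: dict     # L : Q -> subset of P

    def is_path(self, seq):
        """Check that seq = [q1, q2, ...] is a Kripke path (Definition 2)."""
        return all(q in self.states for q in seq) and \
               all((a, b) in self.trans for a, b in zip(seq, seq[1:]))

@dataclass
class LTS:
    """LT = (S, M, A, T) as in Definition 3; T holds triples (s, a, s')."""
    states: set
    initial: set
    actions: set
    trans: set

    def successors(self, s):
        """All (action, target) pairs reachable from state s in one step."""
        return {(a, t) for (src, a, t) in self.trans if src == s}

# A toy Kripke structure with three states and two atomic propositions.
ks = KripkeStructure(
    states={"q1", "q2", "q3"},
    initial={"q1"},
    props={"p", "r"},
    trans={("q1", "q2"), ("q2", "q3"), ("q3", "q1")},
    labels={"q1": {"p"}, "q2": {"p", "r"}, "q3": {"r"}},
)
print(ks.is_path(["q1", "q2", "q3"]))   # True
print(ks.is_path(["q1", "q3"]))         # False
```

In GSMT itself, such structures are entered through the graphical design layer and ultimately serialized to SMV modules rather than to in-memory objects like these.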
Also, in the second method, a path on the behavioral model is described as follow: A Labeled path LP in the second method is a finite sequence of the events and actions starting from the state s1 and finishing at the state \(s_{n} \left( {s_{1} \;{\text{and}}\;\;s_{n} \in S} \right)\) denoted as [72]: $$\varvec{LP } = s_{1} \mathop \to \limits^{a1} s_{2} \mathop \to \limits^{a2} s_{3} \ldots s_{n - 1} \mathop \to \limits^{an - 1} s_{n} \;\;\;{\text{such}}\;\;{\text{that}}\;\;\forall \left( {{\text{k}}, {\text{v}}} \right): \left( {s_{k} , \;a_{v} ,\; s_{k + 1} } \right) \in T.$$ By using the Kripke Path KP and the Labeled Path LP, we create the state space exploration for the proposed behavioral models in model checking. Modeling layer is a visual interaction level illustrating the graphical models of the behavioral model. This layer is classified to the following three steps: Configuring the transition relations between the expected attributes in the behavioral model. Translating the configurable relations to the graph-based relation machine. Generating a graphical state exploration diagram according to the graph-based relation machine. Figure 3 shows the modeling layer architecture for generating a visual state exploration diagram in the GSMT framework. First, each transition relation in the behavioral model is constructed according to the formulated paths in the above definitions. Due to the importance of the relation handling between expected states, transmitting the states by each event or an atomic proposition is done automatically. In this situation, any transition relation is not omitted in a complex behavioral model. After configuring the formal transition relations, a graph-based relation machine is translated for mapping on the state space exploration. This translation is based on the GraphVizFootnote 1 tool as a visual modeling software. Finally, a graphical state exploration diagram for the designed behavioral model has generated automatically. The generated output model is produced in form of dot format that has a hierarchical drawing architecture for modeling the system behavior. We prepare the editable version of the modeling format for the user that can save it to the other viewable formats like an image. Due to having the simple language structure in GraphViz, this platform is chosen for increasing the flexibility. The modeling layer architecture of the GSMT framework The logic layer is a formal descriptive level to demonstrate the temporal logic formulas in verification of the behavioral model. This layer has the following features: Extracting the transition relations as a set of specification rules. Converting the specification rules to a formula-based platform in forms of the LTL and CTL. Generating the existent permutation temporal formulas for all of the specification rules. Figure 4 shows the logic layer architecture to the automated construction of the temporal logic specifications in terms of reachability, deadlock, liveness, and safety conditions. Initially, the set of states, atomic propositions and actions are extracted to the permutation of the transition relations in a Finite State Machine (FSM). According to the following descriptions of the temporal logics, the conversion procedure is done for each property checking which includes deadlock condition, reachability asset, safeness property and liveness condition. For showing the specification properties, we explain CTL and LTL briefly. 
The logic layer architecture of the GSMT framework The CTL syntax is described as follows [16]: $$\alpha :: = {\text{True}}\left| p \right|\neg \alpha |\alpha \wedge \alpha^{\prime}|\alpha \vee \alpha^{\prime}|AX \alpha \left( p \right) |AG \alpha \left( p \right) |AF \alpha \left( p \right) |EG \alpha \left( p \right) |EX \alpha \left( p \right) |EF \alpha \left( p \right)$$ True is a true proposition. The p is an atomic proposition where the \(\alpha\) formula can hold atomic proposition p with a sentence or statement according to following syntax \(\alpha\) (p) which is both true or false value. The \(\alpha\) is ranged over CTL formulas. The \(\neg \alpha\) (not), \(\alpha \wedge \alpha^{{\prime }}\) (and) and \(\alpha \vee \alpha^{{\prime }}\) (or) are logical syntaxes on the formulas. A (always) and E (eventually) are the general quantifiers on all of the paths. G (globally), X (next state) and F (in the future) are contracted in the entire of each path. Also, LTL syntax is explained as follow [73, 74]: $$\beta :: = {\text{True}}\left| q \right|\neg \beta |\beta \wedge \beta^\prime |\beta \vee \beta^\prime |G \beta \left( q \right) |F \beta \left( q \right) |X \beta \left( q \right) |\beta U\beta^\prime$$ The q is an atomic proposition where a \(\beta\) formula gets atomic proposition q with a declarative statement according to following syntax \(\beta\) (q) which is both true or false value. The \(\beta\) is a range over LTL formulas. The \(\neg \beta\) (not), \(\beta \wedge \beta^{\prime }\) (and) and \(\beta \vee \beta^{{\prime }}\) (or) are logical syntaxes on the formulas. The \(\beta U\beta^{\prime }\) means that \(\beta\) is true and enabled until \(\beta^{{\prime }}\) is activated. According to the specified temporal syntaxes, three categorizations are performed to generate all of the expected specification rules in the system behavior automatically. The user can select each property according to the model analysis. The generated temporal properties are added to the end of the code. For example, we have the simple template of some specification properties for the LTS model as follows: (Deadlock freedom) AG !(state & action); (Liveness) AG (state & action) → AF (state & action); (Reachability) AG (EF(state & action → state & action)); Symbolic code layer is a fully automated verification approach for executing the generated symbolic codes in the NuSMV interactive model checker. This layer navigates the following tasks: Translating the expected attributes to the hierarchically structured programming platform. Transforming the hierarchically structured platform to the SMV codes. Adding the generated specification formulas to the end of the code. Reducing the expected attributes to ameliorate the state space explosion. Confirming the reduced behavioral model as the optimally generated SMV code. Generating the executable SMV code for automated verification in NuSMV. Figure 5 displays the symbolic code layer architecture to automated verification of the behavioral model. First, the modeled structure is translated to a hierarchical-based platform to preserve the expected transition relations. Then, the hierarchical-based platform is transformed into the SMV code configuration. In this position, the user has two methods for producing final code. The original SMV code of the behavioral model via the expected specification rules are generated automatically. Also, the user can request the reduced behavioral model to ameliorate the state space explosion in a complex system. 
The GSMT generates a reduced SMV code for executing in the NuSMV. In the verification phase, the NuSMV reads the generated code and transforms it into a flat hierarchical model. Then, the existing variables are encoded for constructing Ordered Binary Decision Diagram (OBDD) [75] platform. Finally, the constructed model is built for checking the behavioral correctness of the system. The symbolic code layer architecture of the GSMT framework After describing the GSMT architecture, we present the recursive reduction approach for the GSMT. Recursive reduced model The reduced model generally is based on a linear reduction in some related approaches [70, 76,77,78]. The complex systems have a set of impermissible states that are composed of the parallel relational processes. The similarity of the attributes and transition relations increase the number of state space size. Whatever the number of states and transitions are decreased, the state space is compacted because the size of the state space has been increased exponentially. We use a vicinity matrix for recursive reduced model. To describe the state space reduction, the first step is ordering the vicinity matrix of the state space according to the transition relations. After generating the vicinity matrix, a recursive reduced algorithm is executed for refining the state space. According to the reduction algorithms [76, 77], we have a minimization equivalence method that the model size is defined for comparing the minimality and reduced model [76]. Model size is shown by |Sm| with the number of states and transitions. In other words, we conclude \(\left| {{\text{S}}_{\text{m}} } \right| \le \left| {{\text{S}}_{\text{m}}^{{\prime }} } \right|\) if and only if the number of all attributes (states and transitions) of the Sm is smaller than \({\text{S}}_{\text{m}}^{{\prime }}\) [79]. The Sm is an original model and SR is its reduced model. A minimal equivalence Me is an equal relation, when \(\left| {{\text{S}}_{\text{R}} } \right| \le \left| {{\text{ S}}_{\text{m}} } \right|\) (the size of the reduced model is smaller than the original model), then \({\text{S}}_{\text{m}} \equiv {\text{S}}_{\text{R}}\) (the original model is equivalence with reduced model) if and only if the minimal equivalence \({\text{S}}_{\text{R}} \approx M_{e} \approx {\text{S}}_{\text{m}}\) is established. Consequently, the reduced model SR is replaced on the original model Sm for ameliorating the state space explosion [67]. Figure 6a is the original KS model by a set of states (S0, S1, S2, S3, S4, S5, S6) and Fig. 6b is a reduced KS model. In the original Kripke model, there are three states S3, S4 and S5 by same atomic proposition {x} in the KS model that are merged together in set of labeling functions ((S0, {x}), (S1, {z}), (S2, {y}), (S3, S4, S5, {x}), (S6, {z})). First, a vicinity matrix is created for the original Kripke model. a The original model. b The reduced model of the GSMT reduction algorithm Figure 7 depicts the design of the vicinity matrix for the original and reduced models. For a sample, in the original matrix (Fig. 7a), there are two neighborhood values according to the transition relation method. When the value of S1S3 is equal to the value of S1S4 (PSi, j = PSi, j+1) that means there is a same proposition for the proposed states, then the reduced approach is applied. Initially, the S4 and S5 are transmitted to the \(S_{3}^{\prime }\) and the proposition {x} is omitted for them. 
Second, each inputted edge to the S4 and S5 is inputted to the \(S_{3}^{\prime }\) and each outputted edge from the S4 and S5 is outputted from the \(S_{3}^{\prime }\). Then, the remaining Kripke model is mapped to the new Kripke model as a reduced model (Fig. 7b) by set of states (\(S_{0}^{\prime }\), \(S_{1}^{\prime }\), \(S_{2}^{\prime }\), \(S_{3}^{\prime }\), \(S_{4}^{\prime }\)) and set of labeling functions ((\(S_{0}^{\prime }\), {x}), (\(S_{1}^{\prime }\), {z}), (\(S_{2}^{\prime }\), {y}), (\(S_{3}^{\prime }\), {x}), (\(S_{4}^{\prime }\), {z})). The number of two states and three edges are deleted from the original Kripke model. Finally, the relation of minimal equivalence between KOriginal and \({\text{K}}_{\text{Reduced}}^{{\prime }}\) is established as follows: The vicinity matrix of a original Kripke model, b reduced Kripke model The size of the reduced Kripke model is lower than original Kripke model | \({\text{K}}_{\text{Reduced}}^{{\prime }}\) | \(\le\) | KOriginal | and the original Kripke model is equivalence with reduced Kripke model \({\text{K}}_{\text{Original}} \equiv {\text{K}}_{\text{Reduced}}^{{\prime }}\). Figure 8 depicts the recursively reduced algorithm based on the vicinity matrix of labeling functions. This algorithm provides two conditions for the reduced model for searching each matrix array. First, both neighbor values by a vicinity condition are specified if S(i, j) = S(i, j + 1), then the reduction procedure is applied. Second, each loop condition occurs for two states if S(i, j) = S(j, i), then the reduction procedure is applied. Searching matrix arrays are done until there is no array that applies in two conditions. The recursive reduction algorithm of the GSMT This section illustrates some experimental case studies to evaluate the GSMT framework. First, a brief exploration of the GSMT environment is presented. Then, some case studies are illustrated to demonstrate the performance evaluation of the framework. Finally, the verification results are shown in this section. User interface of GSMT The framework consists of three main windows that include modeling method selection, KS model window, and LTS model. In Fig. 9, a Kripke model platform is shown for creating Example 1 as a case study. Following sections illustrate the important regions in KS platform. At the first stage, the designer can input initial information for the behavioral model. The reduction method, generating temporal logics and generating SMV codes are done automatically. In the main text, all the existing layers have been illustrated with manual or automatic conditions. The KS window in the GSMT framework Add propose state: the user inputs a set of existing states on Add propose state button. The defined states are listed in the state list manually. Add initial state: the user should select an initial state from the state list which the initial state is displayed in the First state box manually. Add transition relation: it shows the transition relations that are constructed in forms of From/To structure. All of the transition relations are listed in transition relation list box. The interaction simplicity is a key point for users and engineers to design and model a complex system using GSMT manually. Add AP: it specifies the atomic propositions of each state using Add AP button manually. Generate behavioral model: it consists of a button which generates a graphical state transition diagram in form of the GraphViz output based on own structure codes automatically. 
Generate symbolic code: it is a symbolic code generation for constructing the final SMV code in the following textbox. This textbox is an editable platform for copying and modifying the SMV code. When the checkbox reduce is checked, then the model is reduced according to the reduction approach and the reduced final SMV code is generated. Also, the framework generates the new graphical diagram for reduced model automatically. Specification generators: by selecting the specification rules, the GSMT produces the temporal formulas automatically. In the column of CTL specification generator, there are 4 specification rules for adding to the end of the SMV code automatically. For example, the deadlock and reachability properties are selected to generate and add in the code. In addition, the LTL specification generator column has three specification rules. In Fig. 9, all of the properties are selected to add the end of the code. This example illustrates a translation procedure for a Kripke model to the SMV code. A verification approach is done based on the NuSMV model checker automatically. According to Definition 1, the formal description of the Kripke structure of Example 1 is as follow: Set of the states Q = (S1, S2, S3, S4, S5, S6). The initial state I = S1. The set of atomic propositions P = (p, q, r). The set of transition relations R = {(S1, S2), (S2, S3), (S2, S4), (S3, S5), (S4, S6)}. The state labelling functions L = ((S1, {p}), (S2, {q}), (S3, {q}), (S4, {p, q}), (S5, {p, r}), (S6, {p, r}). Figure 10 shows the graphical state transition diagram of Example 1 that is generated automatically using GSMT. After modeling the proposed behavioral model of the Example 1, the final SMV code is generated according to the symbolic code platform. The verification results of the Example 1 are as follows: The generated graphical transition diagram of Example 1 in the GSMT framework The execution time of this model is 158.5 ms, Generating 18 deadlock-free properties, Generating 180 reachability properties, Generating 180 liveness properties, Generating 720 safety properties. Figure 11 illustrates the executed SMV code in NuSMV for Example 1 automatically. In this figure, there is no deadlock problem in the example. The existing reachable states of the proposed model is 64 with system diameter 5. The numbers of allocated OBDD states are 297. After checking the CTL specifications, the 50% of the generated deadlock properties is true, the 77% of the reachability properties is true, the 100% of the liveness properties is true, and the 92% of the safety properties is true. The total number of the generated CTL properties of the Example 1 is 1098. The automated model checking environment of the Example 1 Figure 12 shows the LTS window for constructing the Example 2 as the suggested case study. According to the specified numbers of Fig. 12, the translation procedure of the LTS to the final SMV code is shown. The LTS window in the GSMT framework Add action: the user inputs a set of existing actions on Add propose action button manually. Add initial action: the user selects an initial action from the action list which the initial action is shown in the First action box manually. Add transition relation: it shows the transition relations that are constructed in forms of From/To/By structure. All of the transition relations are listed in transition relation list box manually. 
Generate a behavioral model: it consists of a button which generates a graphical state transition diagram in form of the GraphViz output automatically. This example illustrates a translation procedure for a labeled model to the SMV code. A modeling and verification approach is done based on the NuSMV model checker automatically. According to Definition 3, the formal description of the LTS of Example 2 is as follow: Set of the state S = (S1, S2, S3, S4, S5, S6). The initial state M = S1. The set of atomic propositoins A = (a1, a2, a3, a4, a5, a6). The set of transition relations T = {(S1, a1, S2), (S2, a2, S3), (S2, a2, S4), (S3, a3, S5), (S4, a4, S6), (S5, a5, S6), (S5, a3, S3), (S6, a6, S1)}. Figure 13 shows the graphical state transition diagrams for Example 2 that is generated automatically using GSMT. Figure 13a shows the original LTS model and Fig. 13b depicts the reduced LTS model after applying the reduce approach on the original model. After modeling the proposed behavioral model of the Example 2, the final SMV code is generated according to the symbolic code platform. The verification results of the Example 2 are as follows: The generated graphical transition diagram of Example 2 in a original and b reduced model The execution time of this model is 328.9 ms; Generating 42 deadlock-free properties; Generating 1260 reachability properties; Generating 1260 liveness properties; Generating 3450 safety properties. Figure 14 illustrates the executed SMV code of Example 2 in the NuSMV automatically. In this figure, there is no deadlock problem. The existing reachable states of the proposed model is 2680 with system diameter 5. The numbers of allocated OBDD states are 296. After checking the CTL specifications, the 55% of the generated deadlock properties is true, the 75% of the reachability properties is true, the 100% of the liveness properties is true, and the 94% of the safety properties is true. The total number of the generated CTL properties of the Example 2 is 6012. For comparing the performance of GSMT and the other translator frameworks, some test case examples are analyzed. In this experiment, an Intel® Core™ i5-6200U @ 2.30 GHz CPU, and 8 GB memory in Windows 10 have been used. The first level of the performance evaluation is analyzing the verification time of the original and reduced models that we perform some test cases to analyze the GSMT framework. The details of these test cases are illustrated in Table 2. These case studies are generated randomly. Table 2 Test cases of the GSMT analysis Figure 15 demonstrates the verification time for ten test cases (10 to 100,000 state explorations) in forms of original and reduced models. This result specifies that the reduced model of the GSMT provides a substantial performance in the verification time. When the number of the state space attributes are increased, the verification time of the state exploration is grown. In this situation, the reduced model can significantly decrease the verification time of system verification. Verification time of test cases verification in original and reduced models Also, Table 3 shows the number of states and transitions of 10 test cases of Table 2 in order to the percentage of state space reduction using the proposed recursive reduced model of the GSMT framework. The reduction average of the state space using GSMT is 18.54%. The reduced models have minimal equivalency relations with the original models. 
To compare the performance of GSMT with other translator frameworks, a set of test cases is analyzed. In these experiments, an Intel® Core™ i5-6200U @ 2.30 GHz CPU with 8 GB of memory running Windows 10 was used. The first level of the performance evaluation analyzes the verification time of the original and reduced models over these test cases. The details of the test cases, which were generated randomly, are given in Table 2. Table 2. Test cases of the GSMT analysis. Figure 15 shows the verification time for ten test cases (10 to 100,000 state explorations) for the original and reduced models. The results indicate that the reduced model of GSMT yields a substantial improvement in verification time: as the number of state-space attributes increases, the verification time of the state exploration grows, and in this situation the reduced model can significantly decrease the time needed for system verification. Fig. 15. Verification time of the test cases in the original and reduced models. Table 3 also reports the number of states and transitions of the 10 test cases of Table 2, together with the percentage of state-space reduction obtained by the proposed recursive reduction of the GSMT framework. The average state-space reduction achieved by GSMT is 18.54%, and the reduced models preserve a minimal equivalence relation with the original models. Table 3. Comparison of the state-space reduction for the test cases in GSMT.

The second level of the performance analysis compares the code generation time for the ten test cases (10 to 100,000 state explorations). We implemented the existing case studies in three well-known translator frameworks, SysML-ja [58], IStar [59], and FOAM [61], to compare their performance with that of the GSMT framework. Since the selected frameworks support only the LTS model, the existing examples were structured as LTSs for a fair measurement. Figure 16 depicts the code generation time for specifying the case studies. As the number of states and transitions in each example increases, the generation time grows exponentially; overall, the GSMT framework generates the final code in the least time. Fig. 16. Generation time of the final code in the translator frameworks.

Some model checking converters follow a standard translation architecture, comprising a design code structure, the definition of specification properties, and an executable verifiable code. The proposed framework not only supports these standard platforms but also provides a specification rule generator, behavioral model generation, and a state-space reduction approach. However, interconnecting other verification approaches, such as process algebraic methods and theorem proving tools, remains a key challenge in complex software and hardware development. The experimental results obtained on the individual test cases clearly demonstrate that the recursive reduction method significantly improves the execution and verification time. Nevertheless, increasing the number of generated specification rules can negatively affect the code generation complexity and the time needed to check the properties, while positively enhancing the assessment of system correctness. The limitations of the GSMT framework could be addressed by applying evolutionary algorithms within the model checking approach; for example, the reduction time of the reduced model could be decreased significantly using greedy algorithms. For analyzing the completeness and soundness of a complex system, model checking alone is time-consuming, and theorem proving frameworks such as Isabelle (footnote 2) and SPASS (footnote 3) can help with proving such problems. Middleware converters between the concrete model and the verifiable model are also very useful for the correctness evaluation of complex systems; the important challenge for these converters is how closely the verifiable model approximates the implementation model. Table 4 compares the related frameworks and GSMT according to verification environment factors, namely code generation mode, editor layer, graphical modeling creation, property generation section, and reduction approach. In this evaluation, GSMT supports all of the verification environment factors automatically. Table 4. Assessment principles for related frameworks in the verification environment.

In this research, GSMT is presented with the aim of simplifying the behavioral modeling of software systems. It comprises behavioral modeling in the form of the LTS and the KS, generation of a graphical state exploration diagram of the behavioral model, automatic generation of the expected specification rules, translation of the behavioral model to SMV code, and reduction of the state space. An important functionality of GSMT is the implementation of the syntactic reduction approach, which ameliorates the state-space explosion.
Also, the framework automatically generates the specification rules for proving the correctness of the model. In order to use NuSMV, GSMT supports both LTL and CTL formulas, which are added to the final code for execution in the interactive environment. The experimental results show that the framework offers usability and simplicity for the behavioral modeling of software and hardware systems. In the comparative analysis, the reduction approach can significantly decrease the execution time of model verification. In addition, the framework generates the final executable SMV code in less time than the other translation-based model checking frameworks. When checking the generated specification properties for each model, on average 55% of the generated deadlock properties are true, 73.5% of the reachability properties are true, 100% of the liveness properties are true, and 93% of the safety properties are true. In future work, we will add some key features, such as combining formal specification using pi-calculus with model checking in an integrated framework, improving the generation of specification rules according to how well the behavioral model is satisfied, refining the state-space reduction percentage for complex systems, and applying multi-action transition associations to decrease the state-space complexity.
Footnotes:
1. http://www.graphviz.org/
2. http://isabelle.in.tum.de/
3. http://www.spass-prover.org/
References
Mitsch S, Passmore GO, Platzer A (2014) Collaborative verification-driven engineering of hybrid systems. Math Comput Sci 8:71–97
Li Y, Tao F, Cheng Y, Zhang X, Nee AYC (2017) Complex networks in advanced manufacturing systems. J Manuf Syst 43:409–421
Vakili A, Navimipour NJ (2017) Comprehensive and systematic review of the service composition mechanisms in the cloud environments. J Netw Comput Appl 81:24–36
Keshanchi B, Souri A, Navimipour NJ (2017) An improved genetic algorithm for task scheduling in the cloud environments using the priority queues: formal verification, simulation, and statistical testing. J Syst Softw 124:1–21
Glaßer C, Pavan A, Travers S (2011) The fault tolerance of NP-hard problems. Inf Comput 209:443–455
Higashino WA, Capretz MAM, Bittencourt LF (2016) CEPSim: modelling and simulation of complex event processing systems in cloud environments. Future Gener Comput Syst 65:122–139
Suh Y-K, Lee KY (2018) A survey of simulation provenance systems: modeling, capturing, querying, visualization, and advanced utilization. Hum Centric Comput Inf Sci 8:27
Dill DL (1998) What's between simulation and formal verification? (extended abstract). In: Proceedings of the 35th annual design automation conference, San Francisco, California, USA
Li K, Liu L, Zhai J, Kosgoftaar TM, Shao M, Liu W (2017) Reliability evaluation model of component-based software based on complex network theory. Qual Reliab Eng Int 33(3):543–550
Khan W, Ullah H, Ahmad A, Sultan K, Alzahrani AJ, Khan SD et al (2018) CrashSafe: a formal model for proving crash-safety of Android applications. Hum Centric Comput Inf Sci 8:21
Kim J, Won Y (2017) Patch integrity verification method using dual electronic signatures. J Inf Process Syst 13
Hu K, Lei L, Tsai W-T (2016) Multi-tenant verification-as-a-service (VaaS) in a cloud. Simul Model Pract Theory 60:122–143
Jafari Navimipour N (2015) A formal approach for the specification and verification of a Trustworthy Human Resource Discovery mechanism in the Expert Cloud. Expert Syst Appl 42:6112–6131
Jafari Navimipour N, Habibizad Navin A, Rahmani AM, Hosseinzadeh M (2015) Behavioral modeling and automated verification of a Cloud-based framework to share the knowledge and skills of human resources. Comput Ind 68:65–77
Souri A (2016) Formal specification and verification of a data replication approach in distributed systems. Int J Next Gener Comput 7(1):18–37
Souri A, Jafari Navimipour N (2014) Behavioral modeling and formal verification of a resource discovery approach in Grid computing. Expert Syst Appl 41:3831–3849
Souri A, Norouzi M, Safarkhanlou A, Sardroud SHEH (2016) A dynamic data replication with consistency approach in data grids: modeling and verification. Balt J Mod Comput 4:546
Shen VRL, Wang Y-Y, Yu L-Y (2016) A novel blood pressure verification system for home care. Comput Stand Interfaces 44:42–53
Rezaee A, Rahmani AM, Movaghar A, Teshnehlab M (2014) Formal process algebraic modeling, verification, and analysis of an abstract Fuzzy Inference Cloud Service. J Supercomput 67:345–383
Ruiz MC, Cazorla D, Pérez D, Conejero J (2016) Formal performance evaluation of the Map/Reduce framework within cloud computing. J Supercomput 72:3136–3155
Hermanns H, Herzog U, Katoen J-P (2002) Process algebra for performance evaluation. Theoret Comput Sci 274:43–87
Tini S, Larsen KG, Gebler D (2017) Compositional bisimulation metric reasoning with probabilistic process calculi. Log Methods Comput Sci 12(4):2627
Chen X, Wang L (2017) Exploring fog computing based adaptive vehicular data scheduling policies through a compositional formal method-PEPA. IEEE Commun Lett
Challenger M, Mernik M, Kardas G, Kosar T (2016) Declarative specifications for the development of multi-agent systems. Comput Stand Interfaces 43:91–115
Hao F, Sim D-S, Park D-S, Seo H-S (2017) Similarity evaluation between graphs: a formal concept analysis approach. JIPS 13:1158–1167
Sardar MU, Hasan O, Shafique M, Henkel J (2017) Theorem proving based formal verification of distributed dynamic thermal management schemes. J Parallel Distrib Comput 100:157–171
Srikanth A, Sahin B, Harris WR (2017) Complexity verification using guided theorem enumeration. In: Proceedings of the 44th ACM SIGPLAN symposium on principles of programming languages, pp 639–652
Xue T, Ying S, Wu Q, Jia X, Hu X, Zhai X et al (2017) Verifying integrity of exception handling in service-oriented software. Int J Grid Util Comput 8:7–21
Copet PB, Marchetto G, Sisto R, Costa L (2017) Formal verification of LTE-UMTS and LTE–LTE handover procedures. Comput Stand Interfaces 50:92–106
Clarke EM, Grumberg O, Peled DA (1999) Model checking. MIT Press, Cambridge
Leitner-Fischer F, Leue S (2013) Causality checking for complex system models. In: Giacobazzi R, Berdine J, Mastroeni I (eds) Proceedings of verification, model checking, and abstract interpretation: 14th international conference, VMCAI 2013, Rome, Italy, January 20–22, 2013. Springer Berlin Heidelberg, Berlin, pp 248–267
Merelli E, Paoletti N, Tesei L (2017) Adaptability checking in complex systems. Sci Comput Program 115–116:23–46
Baier C, Katoen J-P (2008) Principles of model checking (representation and mind series). The MIT Press, Cambridge
McMillan KL (1993) Symbolic model checking. Kluwer Academic Publishers, Norwell
Burch JR, Clarke EM, McMillan KL, Dill DL, Hwang LJ (1992) Symbolic model checking: 10^20 states and beyond. Inf Comput 98:142–170
Souri A, Norouzi M (2015) A new probable decision making approach for verification of probabilistic real-time systems. In: 2015 6th IEEE international conference on software engineering and service science (ICSESS), pp 44–47
Cimatti A, Clarke E, Giunchiglia F, Roveri M (2000) NuSMV: a new symbolic model checker. Int J Softw Tools Technol Transfer 2:410–425
Sun J, Liu Y, Dong JS (2008) Model checking CSP revisited: introducing a process analysis toolkit. In: International symposium on leveraging applications of formal methods, verification and validation, pp 307–322
Holzmann GJ (1997) The model checker SPIN. IEEE Trans Softw Eng 23:279–295
Bengtsson J, Larsen K, Larsson F, Pettersson P, Yi W (1995) UPPAAL—a tool suite for automatic verification of real-time systems. In: International hybrid systems workshop, pp 232–243
Podivinsky J, Cekan O, Lojda J, Zachariasova M, Krcma M, Kotasek Z (2017) Functional verification based platform for evaluating fault tolerance properties. Microprocess Microsyst 52:145–159
Wang S, Huang K (2016) Improving the efficiency of functional verification based on test prioritization. Microprocess Microsyst 41:1–11
Balasubramaniyan S, Srinivasan S, Buonopane F, Subathra B, Vain J, Ramaswamy S (2016) Design and verification of Cyber-Physical Systems using TrueTime, evolutionary optimization and UPPAAL. Microprocess Microsyst 42:37–48
Kaufmann P, Kronegger M, Pfandler A, Seidl M, Widl M (2015) Intra- and interdiagram consistency checking of behavioral multiview models. Comput Lang Syst Struct 44(Part A):72–88
López-Fernández JJ, Guerra E, de Lara J (2016) Combining unit and specification-based testing for meta-model validation and verification. Inf Syst 62:104–135
Amálio N, Glodt C (2015) A tool for visual and formal modelling of software designs. Sci Comput Program 98(Part 1):52–79
Holzmann GJ, Joshi R, Groce A (2008) New challenges in model checking. In: Grumberg O, Veith H (eds) 25 years of model checking: history, achievements, perspectives. Springer Berlin Heidelberg, Berlin, pp 65–76
Bozzano M, Villafiorita A (2006) The FSAP/NuSMV-SA safety analysis platform. Int J Softw Tools Technol Transfer 9:5
Głuchowski P (2016) NuSMV model verification of an airport traffic control system with deontic rules. In: Zamojski W, Mazurkiewicz J, Sugier J, Walkowiak T, Kacprzyk J (eds) Dependability engineering and complex systems: proceedings of the eleventh international conference on dependability and complex systems DepCoS-RELCOMEX, June 27–July 1, 2016, Brunów, Poland. Springer International Publishing, Cham, pp 195–206
Safarkhanlou A, Souri A, Norouzi M, Sardroud SEH (2015) Formalizing and verification of an antivirus protection service using model checking. Procedia Comput Sci 57:1324–1331
Ngo VC, Legay A (2018) Formal verification of probabilistic SystemC models with statistical model checking. J Softw Evol Process 30:e1890
Li W, Hayes JH, Antoniol G, Guéhéneuc Y-G, Adams B (2016) Error leakage and wasted time: sensitivity and effort analysis of a requirements consistency checking process. J Softw Evol Process 28:1061–1080
Mercorio F (2013) Model checking for universal planning in deterministic and non-deterministic domains. AI Commun 26:257–259
Li J, Qeriqi A, Steffen M, Yu IC (2016) Automatic translation from FBD-PLC-programs to NuSMV for model checking safety-critical control systems
Sharma PK, Ryu JH, Park KY, Park JH, Park JH (2018) Li-Fi based on security cloud framework for future IT environment. Hum Centric Comput Inf Sci 8:23
Castelluccia D, Mongiello M, Ruta M, Totaro R (2006) WAVer: a model checking-based tool to verify web application design. Electron Notes Theor Comput Sci 157:61–76
Abdelsadiq A (2013) A toolkit for model checking of electronic contracts
Caltais G, Leitner-Fischer F, Leue S, Weiser J (2016) SysML to NuSMV model transformation via object-orientation
Deb N, Chaki N, Ghose A (2016) Extracting finite state models from i* models. J Syst Softw 121:265–280
Meenakshi B, Bhatnagar A, Roy S (2006) Tool for translating Simulink models into input language of a model checker
Vinárek J, Ŝimko V, Hnĕtynka P (2015) Verification of use-cases with FOAM tool in context of cloud providers. In: 2015 41st euromicro conference on software engineering and advanced applications, pp 151–158
Simko V, Hauzar D, Hnetynka P, Bures T, Plasil F (2015) Formal verification of annotated textual use-cases. Comput J 58:1495–1529
Szwed P (2015) Verification of ArchiMate behavioral elements by model checking. In: Saeed K, Homenda W (eds) Computer information systems and industrial management: 14th IFIP TC 8 international conference, CISIM 2015, Warsaw, Poland, September 24–26, 2015, proceedings. Springer International Publishing, Cham, pp 132–144
Jiang Y, Qiu Z (2012) S2N: model transformation from SPIN to NuSMV. In: Proceedings of the 19th international conference on Model Checking Software, Oxford, UK
Szpyrka M, Biernacka A, Biernacki J (2014) Methods of translation of Petri nets to NuSMV language. In: CS&P, pp 245–256
Browne MC, Clarke EM, Grümberg O (1987) Characterizing Kripke structures in temporal logic. In: The International Joint Conference on theory and practice of software development, TAPSOFT '87, Pisa, Italy
Reniers MA, Willemse TAC (2011) Folk theorems on the correspondence between state-based and event-based systems. In: Černá I, Gyimóthy T, Hromkovič J, Jefferey K, Králović R, Vukolić M, et al. (eds) SOFSEM 2011: theory and practice of computer science: 37th conference on current trends in theory and practice of computer science, Nový Smokovec, Slovakia, January 22–28, 2011, proceedings. Springer Berlin Heidelberg, Berlin, pp 494–505
Ghobaei-Arani M, Rahmanian AA, Souri A, Rahmani AM (2018) A moth-flame optimization algorithm for web service composition in cloud computing: simulation and verification. Softw Pract Exp 48:1865–1892
Souri A, Nourozi M, Rahmani AM, Navimipour NJ (2018) A model checking approach for user relationship management in the social network. Kybernetes. https://doi.org/10.1108/K-02-2018-0092
Bouneb M, Saidouni DE, Ilie JM (2015) A reduced maximality labeled transition system generation for recursive Petri nets. Formal Aspects Comput 27:951–973
Sibay GE, Braberman V, Uchitel S, Kramer J (2013) Synthesizing modal transition systems from triggered scenarios. IEEE Trans Softw Eng 39:975–1001
Souri A, Rahmani AM, Jafari Navimipour N (2018) Formal verification approaches in the web service composition: a comprehensive analysis of the current challenges for future research. Int J Commun Syst 31:1–27
Rozier KY (2011) Linear temporal logic symbolic model checking. Comput Sci Rev 5:163–203
Zhao Y, Rozier KY (2014) Formal specification and verification of a coordination protocol for an automated air traffic control system. Sci Comput Program 96(Part 3):337–353
Bollig B (2016) On the minimization of (complete) ordered binary decision diagrams. Theory Comput Syst 59:532–559
Sharma A (2012) A two step perspective for Kripke structure reduction. arXiv preprint arXiv:1210.0408
Gradara S, Santone A, Villani ML, Vaglini G (2004) Model checking multithreaded programs by means of reduced models. Electron Notes Theor Comput Sci 110:55–74
Flanagan C, Godefroid P (2005) Dynamic partial-order reduction for model checking software. In: Proceedings of the 32nd ACM SIGPLAN-SIGACT symposium on principles of programming languages, Long Beach, California, USA
Reniers MA, Schoren R, Willemse TAC (2014) Results on embeddings between state-based and event-based systems. Comput J 57:73–92
Author contributions: All authors read and approved the final manuscript. Alireza Souri, Amir Masoud Rahmani, Nima Jafari Navimipour and Reza Rezaei contributed equally to this manuscript.
Author information: Alireza Souri & Amir Masoud Rahmani. Department of Computer Engineering, Tabriz Branch, Islamic Azad University, Tabriz, Iran: Nima Jafari Navimipour. Department of Computer Engineering, Saveh Branch, Islamic Azad University, Saveh, Iran: Reza Rezaei.
Corresponding author: Alireza Souri.
Cite this article: Souri, A., Rahmani, A.M., Navimipour, N.J. et al. A symbolic model checking approach in formal verification of distributed systems. Hum. Cent. Comput. Inf. Sci. 9, 4 (2019). https://doi.org/10.1186/s13673-019-0165-x
Keywords: Temporal logic; Reduced model; Kripke structure; Labeled Transition System
CommonCrawl
\begin{document} \title{Uniform Lech's inequality} \author{Linquan Ma} \address{Department of Mathematics, Purdue University, West Lafayette, IN 47907 USA} \email{[email protected]} \author{Ilya Smirnov} \address{BCAM -- Basque Center for Applied Mathematics, Bilbao, Spain \quad and \quad IKERBASQUE, Basque Foundation for Science, Bilbao, Spain} \email{[email protected]} \maketitle \begin{abstract} Let $(R,\m)$ be a Noetherian local ring of dimension $d\geq 2$. We prove that if $\operatorname{e}(\widehat{R}_{\red})>1$, then the classical Lech's inequality can be improved uniformly for all $\m$-primary ideals, that is, there exists $\varepsilon>0$ such that $\operatorname{e}(I)\leq d!(\operatorname{e}(R)-\varepsilon)\ell(R/I)$ for all $\m$-primary ideals $I\subseteq R$. This answers a question raised in \cite{HMQS}. We also obtain partial results towards improvements of Lech's inequality when we fix the number of generators of $I$. \end{abstract} \section{Introduction} The origin of this paper is a simple inequality of Lech, proved in \cite{LechMultiplicity}, that connects the colength and the multiplicity of an $\m$-primary ideal in a Noetherian local ring $(R,\m)$. \begin{theorem}[Lech's inequality] \label{theorem: Lech} Let $(R,\m)$ be a Noetherian local ring of dimension $d$ and let $I$ be an $\m$-primary ideal of $R$. Then we have $$\operatorname{e}(I)\leq d!\operatorname{e}(R)\ell(R/I),$$ where $\operatorname{e}(I)$ denotes the Hilbert--Samuel multiplicity of $I$ and $\operatorname{e}(R):=\operatorname{e}(\m)$. \end{theorem} Lech observed that his inequality is never sharp if $d \geq 2$ (see \cite[Page 74, after (4.1)]{LechMultiplicity}): that is, when $d\geq 2$ we always have a strict inequality in Theorem~\ref{theorem: Lech}. The problem of improving Lech's inequality by replacing $\operatorname{e}(R)$ with a smaller constant was raised in \cite{HMQS}. This problem is partially motivated by \cite{Mumford}, where Mumford considered the quantity $$\sup_{\sqrt{I} = \m} \left \{\frac{\operatorname{e}(I)}{d!\ell(R/I)} \right \}$$ and showed that this has close connections with singularities on the compactification of the moduli spaces of smooth varieties constructed via Geometric Invariant Theory. The following conjecture is the proposed refinement of Lech's inequality (see \cite[Conjecture 1.2]{HMQS}): \begin{conjecture} \label{conj: asymptoticLech} Let $(R,\m)$ be a Noetherian local ring of dimension $d\geq 1$. \begin{enumerate} \item[(a)] If $\widehat{R}$ has an isolated singularity, i.e., $\widehat{R}_P$ is regular for all $P\in\operatorname{Spec}\widehat{R}-\{\m\}$, then \[ \lim_{N\to\infty} \sup_{\substack{\sqrt{I}=\m \\ \ell(R/I)> N}} \left\{\frac{\operatorname{e}(I)}{d!\ell(R/I)} \right\}=1. \] \item[(b)] We have $\operatorname{e}(\widehat{R}_{\red}) > 1$ if and only if \[ \lim_{N\to\infty} \sup_{\substack{\sqrt{I}=\m \\ \ell(R/I)> N}} \left\{\frac{\operatorname{e}(I)}{d!\ell(R/I)} \right\}<\operatorname{e}(R). \] \end{enumerate} \end{conjecture} Roughly speaking, we expect that the constant $\operatorname{e}(R)$ on the right hand side of Lech's inequality can usually be replaced by a smaller number as long as the colength of the ideal is large. The first part of Conjecture~\ref{conj: asymptoticLech} was established in \cite{HMQS} when $R$ has positive characteristic with perfect residue field and the second part of Conjecture~\ref{conj: asymptoticLech} when $R$ has equal characteristic. 
Our main goal in this article is to settle the second part of Conjecture~\ref{conj: asymptoticLech} in full generality by proving the following, which can be viewed as a uniform version of Lech's inequality: \begin{theorem}[=Theorem~\ref{theorem: uniform Lech}] \label{theorem: uniform Lech introduction} Let $(R, \mathfrak m)$ be a Noetherian local ring of dimension $d \geq 2$. Suppose $\operatorname{e}({\widehat{R}}_{\red})>1$. Then there exists $\varepsilon > 0$ such that for any $\m$-primary ideal $I$, we have $$ \operatorname{e}(I) \leq d!(\operatorname{e}(R) - \varepsilon) \ell (R/I). $$ \end{theorem} The main case of Conjecture~\ref{conj: asymptoticLech} (b) follows immediately from Theorem~\ref{theorem: uniform Lech introduction}, see Corollary~\ref{cor: asymptotic Lech}. Our approach to Theorem~\ref{theorem: uniform Lech introduction} is similar to the strategy in the equal characteristic case proved in \cite[Theorem 5.8]{HMQS}. However, the main reason that the argument in \cite{HMQS} does not carry to mixed characteristic is that it crucially relies on a refined version of Lech's inequality for ideals with a fixed number of generators in equal characteristic (see \cite[Proposition 5.7]{HMQS}, recalled in Theorem~\ref{theorem: Hanes}) which essentially follows from work of Hanes \cite[Theorem 2.4]{Hanes} on Hilbert--Kunz multiplicity. We do not know whether such a version of Lech's inequality holds in mixed characteristic (though we expect it to hold, see Conjecture~\ref{conj: Hanes}). Due to the absence of this ingredient in mixed characteristic, we prove Theorem~\ref{theorem: uniform Lech introduction} by carefully passing to certain associated graded rings to reduce to an equal characteristic setting so that \cite[Proposition 5.7]{HMQS} can be applied. On the other hand, our strategy in the proof of Theorem~\ref{theorem: uniform Lech introduction} does allow us to obtain a weaker version of \cite[Proposition 5.7]{HMQS} valid in all characteristics for integrally closed ideals. The value of this result is not only in mixed characteristic: it also removes the need for a reduction modulo $p$ argument used in characteristic $0$ to deduce \cite[Theorem 5.8]{HMQS} from the result of Hanes. \begin{theorem}[=Corollary~\ref{cor: weak Hanes}] Let $d \geq 2$ and $N \geq d$ be two positive integers. Then there exists a constant $c = c(N, d) \in (0, 1)$ such that for any Noetherian local ring $(R, \m)$ of dimension $d$ and any $\m$-primary integrally closed ideal $I$ which can be generated by $N$ elements we have \[ \operatorname{e}(I) \leq d! c \operatorname{e}(R) \ell (R/I). \] \end{theorem} \section{Preliminaries} Throughout this article, all rings are commutative, Noetherian, with multiplicative identity $1$. We use $\ell (M)$ to denote the length of a finite $R$-module $M$ and $\mu(M)$ to denote the minimal number of generators of $M$. \begin{definition}\label{def HS} Let $(R, \mathfrak m)$ be a Noetherian local ring of dimension $d$ and $I$ be an $\mathfrak m$-primary ideal. The {\it Hilbert--Samuel multiplicity} of $I$ is defined as \[ \operatorname{e}(I) = \lim_{n\to\infty}\frac{d!\ell(R/I^n)}{n^d}. \] \end{definition} It is well-known that $\operatorname{e}(I)$ is always a positive integer. The Hilbert--Samuel multiplicity is closely related to integral closure. Recall that an element $x\in R$ is integral over an ideal $I$ if it satisfies an equation of the form $x^n + a_{1}x^{n-1} + \cdots + a_{n-1}x+ a_n=0$ where $a_k \in I^k$.
The set of all elements $x$ integral over $I$ is an ideal and is denoted by $\overline{I}$, called the integral closure of $I$. The Hilbert--Samuel multiplicity is an invariant of the integral closure, i.e., $\operatorname{e}(I) = \operatorname{e}(\overline{I})$. Thus, we always have an inequality $\operatorname{e}(I)/\ell(R/I) \leq \operatorname{e}(\overline{I})/\ell(R/\overline{I})$. In particular, Conjecture~\ref{conj: asymptoticLech} can be restricted to integrally closed ideals. Another related concept is $\m$-full ideals. We briefly recall the definition following \cite{JWatanabe}. \begin{definition}\label{def m-full} Let $(R, \mathfrak m)$ be a Noetherian local ring. Let $\m=(x_1,\dots,x_n)$ and let $\widetilde{R} = R(t_1,\dots,t_n)$, and consider the general linear form $z = t_1x_1+\cdots+t_nx_n$. An ideal $I$ of $R$ is called $\mathfrak m$-{\it full} if $\mathfrak mI\widetilde{R} : z = I\widetilde{R}$. \end{definition} The following remark summarizes some useful properties of $\m$-full ideals \begin{remark}With notation as in Definition~\ref{def m-full}, we have \label{remark: property m-full} \begin{enumerate} \item If $I$ is $\mathfrak m$-full, then $I\widetilde{R}:z=I\widetilde{R}:\mathfrak m$ (\cite[Lemma~1]{JWatanabe}). \item If $I$ is integrally closed, then $I$ is $\mathfrak m$-full or $I = \sqrt{(0)}$ (\cite[Theorem~2.4]{Goto}). \item If $I$ is $\mathfrak m$-primary and $\mathfrak m$-full, then $\mu(I) \geq \mu(J)$ for any ideal $J\supseteq I$ (\cite[Theorem~3]{JWatanabe}). \item If $I$ is $\m$-primary and $\mathfrak m$-full, then $\mu(I) = \ell(\widetilde{R}/(z, I)\widetilde{R}) + \mu(I(\widetilde{R}/z\widetilde{R}))$ (\cite[Theorem~2]{JWatanabe}). \end{enumerate} \end{remark} \subsection* {The associated graded ring} Our key argument relies on passage to certain associated graded rings in order to transfer to the equal characteristic setting. We record some notations and simple facts about initial (form) ideals in associated graded rings. Let $J\subseteq R$ be an ideal and let $\operatorname{gr}_J(R)=\bigoplus_n J^n/J^{n+1}$ be the associated graded ring of $R$ with respect to $J$. If $I\subseteq R$ is another ideal then we will use $$\initial_J(I):= \bigoplus_n \frac{I\cap J^n + J^{n + 1}}{J^{n + 1}}\subseteq \operatorname{gr}_J(R)$$ to denote the initial ideal of $I$ (or form ideal in the notation of \cite{LechMultiplicity}) in the associated graded ring. Now let $(R,\m)$ be a Noetherian local ring and $I\subseteq R$ be an $\m$-primary ideal. It is well-known and easy to check that $\ell(\operatorname{gr}_J(R)/\initial_J(I)) = \ell(R/I)$. Furthermore, since $\initial_J(I)^n\subseteq \initial_J(I^n)$ and $\dim(R)=\dim(\operatorname{gr}_J(R))$, we have $\operatorname{e}(I) \leq \operatorname{e}(\initial_J(I))$. \begin{lemma}\label{lemma: double graded} Let $(R, \mathfrak m)$ be a Noetherian local ring. Then $\operatorname{gr}_{(\mathfrak m, T)} (R[T]) = \operatorname{gr}_{\mathfrak m} (R)[T]$. Moreover, via this identification, the initial ideal of a $T$-homogeneous ideal $I = \sum_k I_k T^k$ is $\sum_k \initial_{\mathfrak m} (I_k) T^k$. \end{lemma} \begin{proof} The first claim follows from the second by considering the unit ideal (so that $I_k=R$ for all $k$). We know that the image of $I$ on the left hand side is \[ \initial_{(\mathfrak m, T)} (I) = \bigoplus_{n \geq 0} \frac{I \cap (\mathfrak m, T)^n}{I \cap (\mathfrak m, T)^{n + 1}}. 
\] Since $I \cap (\mathfrak m, T)^n = \sum_{k = 0}^{n-1} (I_k \cap \mathfrak m^{n - k}) T^k + \sum_{k \geq n} I_kT^k$, by restricting to fixed $T$-degree components we may further decompose \[ \initial_{(\mathfrak m, T)} (I) = \bigoplus_{n \geq 0} \bigoplus_{k = 0}^n \frac{I_k \cap \mathfrak m^{n - k}}{I_k \cap \mathfrak m^{n +1 - k}} T^k = \bigoplus_{k \geq 0} \left( \bigoplus_{n \geq k} \frac{I_k \cap \mathfrak m^{n - k}}{I_k \cap \mathfrak m^{n +1 - k}} \right ) T^k = \bigoplus_{k \geq 0} \initial_{\mathfrak m} (I_k) T^k. \qedhere \] \end{proof} \section{Main Results} In this section we prove our main results. We begin with a few lemmas. \begin{lemma}\label{lemma: bound generators} Let $(R, \mathfrak m)$ be a Noetherian local ring and $I = I_0 + I_1 T + I_2T^2 + \cdots $ be a $T$-homogeneous ideal of finite colength in $R[T]$. Then $\mu(I) \leq \mu(I_0) + \ell (R/I_0)$. In particular, if $\dim(R)=1$, then $\mu(I)\leq \ell (R/I_0) + \operatorname{e}(R) + \ell (\lc_{\mathfrak m}^0 (R))$. \end{lemma} \begin{proof} We have containments $I_0 \subseteq I_1 \subseteq \cdots$ and this sequence will eventually include the unit ideal $R$. If $x_{k,1}, \ldots, x_{k, D_k}$ for $k \geq 0$ are such that their images form a minimal generating set for $I_{k+1}/I_k$, then it is easy to see that $I$ can be generated by the generators of $I_0$ and $\{x_{k, i}T^{n_k}, T^{n_C}\}_{k = 0, i = 1}^{k = C - 1, i = D_k}$. Therefore \[ \mu(I) \leq \mu(I_0) + \sum_{k \geq 0} \mu (I_{k+1}/I_k) \leq \mu(I_0) + \sum_{k \geq 0} \ell (I_{k+1}/I_k) = \mu(I_0) + \ell (R/I_0). \] For the second assertion note that $\mu(I_0) \leq \operatorname{e}(R) + \ell (\lc_{\mathfrak m}^0 (R))$ (for example, see \cite[Lemma 5.5]{HMQS}). \end{proof} We next prove a local Bertini-type result, this should be well-known to experts and the case of $s = 0$ follows from \cite[Theorem]{Hochster}. We thank Bernd Ulrich for suggesting the argument. \begin{lemma} \label{lemma: Ulrich} Let $(R,\m)$ be a Noetherian local ring which satisfies Serre's condition $(R_s)$ and has dimension at least $s+2$. If $\m=(x_1,\dots,x_n)$, then $R(t_1,\dots,t_n)/(t_1x_1+\cdots+t_nx_n)$ still satisfies $(R_s)$. \end{lemma} \begin{proof} Let $P$ be a height $s+1$ prime in $S = R[t_1,\ldots, t_n]$ that contains $z=t_1x_1+\cdots+t_nx_n$. It is enough to show that $S_P/zS_P$ is regular. Let $Q = P \cap R$. We first claim that $\operatorname{ht}(Q) \leq s$. For if $\operatorname{ht}(Q)= s+1$, then we must have $P = Q[t_1, \ldots, t_n]$, but then $P$ cannot contain $z$ because $Q \neq \mathfrak m$ (since $\operatorname{ht}(\m)=\dim(R)\geq s+2>\operatorname{ht}(Q)$), which is a contradiction. Thus, without loss of generality, we assume that $x_1 \notin Q$, so $R_Q[t_1, \ldots, t_n]/(z) \cong R_Q[t_2, \ldots, t_n]$ is regular because $R_Q$ is regular. Therefore, $(S/zS)_P \cong (R_Q[t_2, \ldots, t_n])_P$ is also regular. \end{proof} We will need the following version of Lech's inequality, which is proved in \cite{HMQS} using Hanes' work on Hilbert--Kunz multiplicity \cite[Theorem 2.4]{Hanes} and reduction mod $p>0$. \begin{theorem}[{\cite[Proposition 5.7]{HMQS}}] \label{theorem: Hanes} Let $d \geq 2$ and $N \geq d$ be two positive integers. Then there exists a constant $c = c(N, d) \in (0,1)$ such that for any equal characteristic Noetherian local ring $(R, \m)$ of dimension $d$ and any $\m$-primary ideal $I$ with $\mu(I)\leq N$, we have \[ \operatorname{e}(I) \leq d! c \operatorname{e}(R) \ell (R/I). \] In fact, one can take $c=(1-\frac{1}{N^{1/(d-1)}})^{d-1}$. 
\end{theorem} We now prove our main technical result. \begin{theorem}\label{theorem: main technical dim 2} Let $(R,\m)$ be a two-dimensional Noetherian complete local ring which satisfies Serre's condition $(R_0)$. If $\operatorname{e}(R)>1$, then there exists $\epsilon>0$ such that $\operatorname{e}(I)\leq 2(\operatorname{e}(R)-\epsilon)\ell(R/I)$ for all $\m$-primary ideals $I$. \end{theorem} \begin{proof} Let $P_1,\dots,P_n$ be the minimal primes of $R$ such that $\dim(R/P_i)=2$. Since $R$ is $(R_0)$, we know that $0$ has a primary decomposition $$0=P_1\cap P_2\cap \cdots \cap P_n \cap P_{n+1}\cap \cdots \cap P_m\cap Q_1\cap\cdots \cap Q_k$$ where $P_{n+1},\dots, P_m$ are (possibly) minimal primes of $R$ whose dimensions are less than $2$ and $Q_1,\dots,Q_k$ are (possibly) embedded components. If we replace $R$ by $\widetilde{R}=R/(P_1\cap\cdots\cap P_n)$, then it follows by the additivity formula for multiplicities that for all $\m$-primary ideals $I\subseteq R$, we have $\operatorname{e}(I, R)=\operatorname{e}(I, \widetilde{R})$ while $\ell(R/I)\geq\ell(\widetilde{R}/I\widetilde{R})$. It follows that $$\frac{\operatorname{e}(I)}{2\cdot\ell(R/I)}\leq \frac{\operatorname{e}(I\widetilde{R})}{2\cdot \ell(\widetilde{R}/I\widetilde{R})}.$$ Therefore to prove the result for $R$, it is enough to establish it for $\widetilde{R}$ (note that $\operatorname{e}(R)=\operatorname{e}(\widetilde{R})$). Thus we may replace $R$ by $\widetilde{R}$ to assume that $R$ is reduced and equidimensional. Let $x_1,\dots,x_n$ be a generating set of $\m$. Note that by Lemma~\ref{lemma: Ulrich}, $R(t_1,\dots,t_n)/(t_1x_1+\cdots + t_nx_n)$ satisfies $(R_0)$. Since it is excellent, its completion still satisfies $(R_0)$. Note that the depth of the completion of $R(t_1,\dots,t_n)$ is at least one (since this is true for $R$). Therefore, after replacing $R$ by the completion of $R(t_1,\dots,t_n)$, we may assume that there exists a nonzerodivisor $z\in \m$ such that $z$ is a part of a minimal reduction of $\m$ and $S = R/zR$ is $(R_0)$, and that the residue field of $R$ is infinite. Note that $\operatorname{e}(S)=\operatorname{e}(R)$ since $z$ is part of a minimal reduction of $\m$. By \cite[Proposition 4.10]{HMQS}, there exists $C$ such that for all ideals $J\subseteq S$ with $\ell(S/J)>C$, we have that $\operatorname{e}(J)\leq \frac{3}{2} \cdot \ell(S/J)$. Let us fix this $C$. We first consider an arbitrary $\m$-primary ideal $I\subseteq R$ such that $\ell(S/IS)\leq C$. We take the associated graded ring of $R$ with respect to the ideal $(z)$, and we use $\initial_z(I)$ to denote the initial ideal of $I$ in $\operatorname{gr}_z (R)$. Since $z$ is a nonzerodivisor, we have $\operatorname{gr}_{z} (R) \cong S[T]$. It follows that $$\initial_z(I)=I_0+I_1T+\cdots +I_{N-1}T^{N-1}+T^N$$ where $I_0\subseteq I_1\subseteq \cdots\subseteq I_{N-1}$ are $\m_S$-primary ideals in $S$. Note that our assumption on $I$ says that $\ell(S/I_0)\leq C$, which is a constant that does not depend on $I$, $N$, or any of the $I_i$. Since $\operatorname{e}(\initial_z(I))\geq \operatorname{e}(I)$ and $\ell(R/I)=\ell(S[T]/\initial_z(I))$, we have \begin{equation} \label{equation 1} \frac{\operatorname{e}(I)}{2\cdot \ell(R/I)} \leq \frac{\operatorname{e}(\initial_z(I))}{2\cdot \ell(S[T]/\initial_z(I))}. \end{equation} We next take the associated graded ring of $S[T]$ with respect to $(\mathfrak m, T)$. 
By Lemma~\ref{lemma: double graded} the initial ideal of $\initial_z(I)$ is \[ J := \initial_{\mathfrak m} (I_0) + \initial_{\mathfrak m} (I_1)T + \cdots + \initial_{\mathfrak m} (I_{N-1})T^{N-1} + T^N \subseteq \operatorname{gr}_{\m}(S)[T]. \] Note that $\operatorname{e}(J)\geq \operatorname{e}(\initial_z(I))$ and $\ell(S[T]/\initial_z(I))=\ell((\operatorname{gr}_{\m}S)[T]/J)$, thus we have \begin{equation} \label{equation 2} \frac{\operatorname{e}(\initial_z(I))}{2\cdot \ell(S[T]/\initial_z(I))} \leq \frac{\operatorname{e}(J)}{2 \cdot \ell (R/J)}. \end{equation} By Lemma~\ref{lemma: bound generators}, $\mu(J)\leq C + \operatorname{e}(S) + \ell (\lc_{ \initial_{\mathfrak m}(\m)}^0 (\operatorname{gr}_{\mathfrak m} (S))$. Since $\operatorname{gr}_{\mathfrak m} (S)[T]$ contains a field, $S/\m S$, and has dimension two, by Theorem~\ref{theorem: Hanes}, there exists a constant $0<\epsilon\ll1$ (which depends on $C$, $R$ and $S$, but not on $I$!) such that \begin{equation}\label{equation 3} \frac{\operatorname{e}(J)}{2 \cdot \ell (R/J)} \leq (1 - \epsilon) \operatorname{e}(\operatorname{gr}_{\mathfrak m} (S)[T]) = (1 - \epsilon) \operatorname{e}(R). \end{equation} Putting (\ref{equation 1}), (\ref{equation 2}), (\ref{equation 3}) together, we have proved the theorem for all $I$ such that $\ell(S/IS)\leq C$. We can further shrink $\epsilon$ to guarantee that $\frac{3}{2} < (1 - \epsilon) \operatorname{e}(R)$ since $\operatorname{e}(R)>1$. Finally, we use induction on $\ell(R/I)$ to show that $\epsilon$ works for all $\m$-primary ideal $I\subseteq R$. We may assume that $\ell(S/IS)>C$, then by \cite[Lemma 5.1]{HMQS} and the second paragraph of the proof, we have $$\frac{\operatorname{e}(I)}{2\cdot\ell(R/I)}\leq \max\left\{\frac{\operatorname{e}(I:z)}{2\cdot \ell(R/(I:z))}, \frac{\operatorname{e}(IS)}{\ell(S/IS)}\right\} \leq \max \left\{(1 - \epsilon) \operatorname{e}(R), \frac{3}{2} \right\} = (1 - \epsilon) \operatorname{e}(R), $$ where for the second inequality we are using induction on the colength. \end{proof} We next deduce the higher dimensional case from the two-dimensional case via induction on dimension, this is similar to the strategy in \cite[Theorem 5.8]{HMQS}, the only difference is that here we use Lemma~\ref{lemma: Ulrich} instead of Flenner's result \cite[Lemma 5.4]{HMQS} (in equal characteristic). \begin{corollary} \label{cor: main R0} Let $(R, \m)$ be a Noetherian local ring of dimension $d$ such that $\widehat{R}$ satisfies Serre's condition $(R_0)$. If $d\geq 2$ and $\operatorname{e}(R)>1$, then there exists $\epsilon>0$ such that $\operatorname{e}(I)\leq d!(\operatorname{e}(R)-\epsilon)\ell(R/I)$ for all $\m$-primary ideals $I$. \end{corollary} \begin{proof} We may assume that $R$ is complete. We use induction on $d \geq 2$. Theorem~\ref{theorem: main technical dim 2} provides the base case. Suppose $d\geq 3$ and that $x_1, \ldots, x_n$ is a generating set for $\mathfrak m$. We can replace $R$ by $R(t_1, \ldots, t_n)$. Then we consider $R' = R(t_1, \ldots, t_n)/(t_1x_1 + \cdots + t_nx_n)$. By Lemma~\ref{lemma: Ulrich}, $R'$ (and hence $\widehat{R'}$) still satisfies $(R_0)$, $\dim(R') = d- 1$, and $\operatorname{e}(R') = \operatorname{e}(R)$. By induction the assertion holds for $R'$. That is, there exists $\epsilon$ such that $\operatorname{e}(J) \leq (d-1)!(\operatorname{e}(R') - \epsilon) \ell (R'/J)$ for any $\m$-primary ideal $J\subseteq R'$. We use induction on $\ell (R/I)$ to show that the same $\epsilon$ works for $R$ (the initial case $I=\m$ is obvious). 
By \cite[Lemma 5.1]{HMQS} we have \[ \frac{\operatorname{e}(I)}{d!\ell (R/I)} \leq \max \left\{\frac{\operatorname{e}(I:z)}{d!\ell (R/(I:z))}, \frac{\operatorname{e}(IR')}{(d-1)!\ell (R'/IR')} \right\} \leq \operatorname{e}(R') - \epsilon=\operatorname{e}(R)-\epsilon. \] where $z=t_1x_1 + \cdots + t_nx_n$. This completes the proof. \end{proof} Here is our uniform Lech's inequality, now valid in all characteristics. \begin{theorem}[Uniform Lech's inequality] \label{theorem: uniform Lech} Let $(R, \mathfrak m)$ be a Noetherian local ring of dimension $d \geq 2$. Suppose $\operatorname{e}({\widehat{R}}_{\red})>1$. Then there exists $\varepsilon > 0$ such that for any $\m$-primary ideal $I$, we have $$ \operatorname{e}(I) \leq d!(\operatorname{e}(R) - \varepsilon) \ell (R/I). $$ \end{theorem} \begin{proof} This follows by the same argument as in \cite[Proof of Corollary 5.9]{HMQS}, we just replace the citation of \cite[Theorem 5.8]{HMQS} by Corollary~\ref{cor: main R0} above. \end{proof} Now we can prove Conjecture~\ref{conj: asymptoticLech}. \begin{corollary} \label{cor: asymptotic Lech} Let $(R,\m)$ be a Noetherian local ring of dimension $d\geq 1$. Then we have $\operatorname{e}(\widehat{R}_{\red}) > 1$ if and only if \[ \lim_{N\to\infty} \sup_{\substack{\sqrt{I}=\m \\ \ell(R/I)> N}} \left\{\frac{\operatorname{e}(I)}{d!\ell(R/I)} \right\}<\operatorname{e}(R). \] \end{corollary} \begin{proof} The ``if" direction was proved in \cite[Proposition 5.3]{HMQS}. For the ``only if" direction, the one-dimensional case was proved in \cite[Proposition 5.11]{HMQS}, and when $d\geq 2$, the result follows immediately from Theorem~\ref{theorem: uniform Lech}. \end{proof} \subsection{Uniform Lech's inequality for ideals with fixed number of generators} We conjecture that Theorem~\ref{theorem: Hanes} holds without the assumption on characteristic. We are able to show this for \emph{integrally closed} (more generally $\mathfrak m$-full) ideals, which is sufficient for giving a different proof of the Uniform Lech's inequality, see Remark~\ref{rmk: different proof}. The proof is different in dimension two and for higher dimensions. We start with the former, for which we will need the following corollary of Lemma~\ref{lemma: bound generators}. \begin{corollary}\label{cor: quadratic bound} Let $(R, \mathfrak m)$ be a Noetherian local ring and $I$ be an $\m$-full $\m$-primary ideal. Let $\m=(x_1,\dots,x_n)$ and define $S = R(t_1,\dots,t_n)/\lc^0_{\m} (R)R(t_1,\dots,t_n)$ with the general linear form $z = t_1x_1+\cdots+t_nx_n$. Then $\initial_z(IS)$, the initial ideal of $IS$ in $\operatorname{gr}_z (S)$, can be generated by at most $\mu(I)$ homogeneous elements. \end{corollary} \begin{proof} Since $S$ has positive depth, $z$ is a nonzerodivisor on $S$. By Lemma~\ref{lemma: bound generators}, we know that $\initial_z(IS)$ can be generated by at most $\ell (S/(I, z)) + \mu(I(S /zS))$ (homogeneous) elements. Both the minimal number of generators and the colength do not increase when passing to a quotient ring. Hence if we let $\widetilde{R} = R(t_1,\dots,t_n)$ then the above bound is no greater than $ \ell (\widetilde{R}/(I, z)\widetilde{R}) + \mu(I(\widetilde{R} /z\widetilde{R})). $ Because $I$ is $\m$-full, we know that $\mu(I)=\mu(I(\widetilde{R} /z\widetilde{R})) + \ell (\widetilde{R} /(I, z)\widetilde{R})$ by Remark~\ref{remark: property m-full}. 
\end{proof} \begin{theorem}\label{theorem: dim 2 generators} For any Noetherian local ring $(R, \m)$ of dimension two and any $\m$-primary $\m$-full ideal $I$ which can be generated by $N$ elements we have \[ \operatorname{e}(I) \leq 2 \left (1 - \frac 1{2N-2}\right) \operatorname{e}(R) \ell (R/I). \] \end{theorem} \begin{proof} We use the notation of Corollary~\ref{cor: quadratic bound}. Observe that passing from $R$ to $S$ does not affect multiplicity and does not increase the colength. Let $J:=\initial_z(IS)$ be the initial ideal of $IS$ in $\operatorname{gr}_z(S)\cong (S/zS)[T]$. We know that \begin{equation} \label{equation in two-dim Hanes} \frac{\operatorname{e}(I)}{2\cdot\ell(R/I)}\leq \frac{\operatorname{e}(IS)}{2\cdot\ell(S/IS)}\leq \frac{\operatorname{e}(J)}{2\cdot\ell(\operatorname{gr}_z(S)/J)}. \end{equation} We next write $J = J_0 + J_1T + \cdots + J_{K - 1}T^{K-1} + T^K$ as a $T$-homogenous ideal of $(S/zS)[T]$ with $J_{K-1} \neq R$. By Corollary~\ref{cor: quadratic bound}, we know that $J$ can be generated by at most $N$ homogeneous elements. We define the sequence $\{n_k\}$ that labels the distinct $J_i$ by setting $n_0 = 0$ and $n_{k+1} = \min \{n < K \mid J_n \neq J_{n_k}\}$. We now choose appropriately $\leq N - 1$ generators $\{a_{i, j}T^j\}$ of $J$, so that $a_{1, 0}, a_{2, 0}, \ldots $ generate $J_0=J_{n_0}$ and $J_{n_k}$ is generated by $a_{i, j}$ with $j \leq k$ (thus $J$ is generated by $T^k$ and $a_{i, j}T^{n_j}$). By adjoining one more generator to each $J_{n_k}$ if necessary, we may assume that the chosen generating set contain a minimal reduction of each $J_{n_k}$ as one of $a_{i, k}$ (note that each $J_{n_k}$ is an ideal in the one-dimensional ring $S/zS$). The total number of adjoined generators is at most $N - 2$, because there are at most $N - 1$ ideals $J_{n_k}$s and we do not need to adjoin a new generator to $J_{n_0}$ (since we can let $a_{1,0}$ be a minimal reduction of $J_0$). Inspired by positive characteristic methods, for each positive integer $q$ we define $J^{[q]}$ as the ideal generated by $\{a_{i, j}^qT^{jq}\}$ and define $J_{i}^{[q]}$ accordingly. Note that $J^{[q]}$ and each $J_i^{[q]}$ in principle might depend on the chosen generating set $\{a_{i,j}T^j\}$ but this will not be a problem. By definition, we have $$J^{[q]}= J_0^{[q]} + J_0^{[q]} T + \cdots J_0^{[q]} T^{q - 1} + J_1^{[q]}T^q + \cdots+ J_{1}^{[q]}T^{2q-1} + J_2^{[q]}T^{2q}+ \cdots .$$ It follows that $\ell ((S/zS)[T]/J^{[q]}) = q \sum_{i=0}^{K-1} \ell (S/(z, J_i^{[q]}))$. Note that since $\dim(S/zS) = 1$ and our selected list of generators contains a minimal reduction $a_i$ of every $J_i$, we have that \[\operatorname{e} (J_i) = \lim_{q \to \infty} \frac{\ell (S/(z, a_i^q))}{q} \geq \lim_{q \to \infty} \frac{\ell (S/(z, J_i^{[q]}))}{q} \geq \lim_{q \to \infty} \frac{\ell (S/(z, J_i^{q}))}{q} = \operatorname{e} (J_i). \] Therefore we have $$\lim_{q \to \infty} \frac{\ell((S/zS)[T]/J^{[q]})}{q^2}=\sum_{i=0}^{K-1}\frac{\ell (S/(z, J_i^{[q]}))}{q} =\sum_{i=0}^{K-1}\operatorname{e}(J_i).$$ Applying Lech's inequality (Theorem~\ref{theorem: Lech}) to each $J_i\subseteq S/zS$, we then have \begin{equation}\label{frq equation} \lim_{q \to \infty} \frac{\ell ((S/zS)[T]/J^{[q]})}{q^2} \leq \operatorname{e} (S/zS) \sum_{i=1}^{K-1} \ell (S/(z, J_i)) = \operatorname{e} (S/zS) \ell ((S/zS)[T]/J). \end{equation} At this point, we follow the argument in \cite[Theorem 2.4]{Hanes}. 
First, since $J^{[q]}$ is clearly contained in $J^q$ and is generated by at most $2N-2$ elements, for any integer $s$ we can surject $2(N-1)$ copies of $(S/zS)[T]/J^s$ onto $J^{[q]}/(J^{[q]} \cap J^{q + s})$. Thus $$\ell ((S/zS)[T]/J^{[q]}) \geq \ell ((S/zS)[T]/J^{q + s}) - 2(N-1) \ell ((S/zS)[T]/J^s).$$ As in \cite[Theorem 2.4]{Hanes}, setting $s = \lceil q/(2N-3) \rceil$ will yield \[ \lim_{q \to \infty} \frac{\ell ((S/zS)[T]/J^{[q]})}{q^2} \geq \frac{\operatorname{e}(J)}{2} \left (\left(1 + \frac{1}{2N-3} \right)^2 - \frac{2N-2}{2N-3} \right ) = \frac{\operatorname{e}(J)} 2 \left(1 + \frac{1}{2N-3} \right). \] Combining with (\ref{frq equation}) we obtain that \[ \frac{\operatorname{e}(J)}{2\cdot \ell ((S/zS)[T]/J)} \leq \operatorname{e}(S/zS) \left(1 + \frac{1}{2N-3} \right)^{-1} = \operatorname{e}(R) \left(1 + \frac{1}{2N-3} \right)^{-1}. \] This together with (\ref{equation in two-dim Hanes}) completes the proof. \end{proof} \begin{corollary}\label{cor: no Hanes bound} Let $(R, \m)$ be a two-dimensional Noetherian local ring. Let $\m=(x_1,\dots,x_n)$ and define $R' = R(t_1,\dots,t_n)/(t_1x_1+\cdots+t_nx_n)$. Then for any positive integer $N$ there exists $\varepsilon > 0$ such that for any $\mathfrak m$-primary ideal $I$ with $\ell (R'/IR') \leq N$, we have \[ \operatorname{e}(I)\leq 2(1 - \varepsilon)\operatorname{e}(R)\ell(R/I). \] \end{corollary} \begin{proof} We may assume $I$ is integrally closed since replacing $I$ by its integral closure $\overline{I}$ will not affect $\operatorname{e}(I)$ and will not increase $\ell(R/I)$. By Remark~\ref{remark: property m-full} and \cite[Lemma 5.5]{HMQS}, $\mu(I) \leq N + \mu(IR')$ is then bounded by a constant independent of $I$, so we may apply Theorem~\ref{theorem: dim 2 generators}. \end{proof} \begin{remark}\label{rmk: different proof} One can give an alternative proof of Theorem~\ref{theorem: main technical dim 2} (and thus the uniform Lech's inequality Theorem~\ref{theorem: uniform Lech}) via Corollary~\ref{cor: no Hanes bound}: in fact, Corollary~\ref{cor: no Hanes bound} handles exactly the case $\ell(S/IS)\leq C$ in the proof of Theorem~\ref{theorem: main technical dim 2}. This alternative approach avoids the use of Theorem~\ref{theorem: Hanes} (i.e., \cite[Proposition~5.7]{HMQS}), which benefits in equal characteristic $0$ as it avoids the reduction mod $p$ argument needed to prove Theorem~\ref{theorem: Hanes}. \end{remark} Finally, we treat the higher dimensional case, the proof turns out to be easier, but, unlike Theorem~\ref{theorem: dim 2 generators}, the bound is not sharp for the maximal ideal in a regular local ring. We need a couple of lemmas. The first one is due to Mumford. \begin{lemma}[{\cite[Proof of Lemma 3.6]{Mumford}}] \label{lemma: Mumford} Let $(R,\m)$ be a Noetherian local ring of dimension $d$ and let $I=I_0+I_1T+I_2T^2+\cdots+I_{N-1}T^{N-1}+T^N \subseteq R[T]$ be a $T$-homogeneous ideal of finite colength. Then we have $$\frac{\operatorname{e}(I)}{(d+1)!\ell(R[T]/I)} \leq \frac{\sum_{i=0}^{N-1}\operatorname{e}(I_i)}{d!\sum_{i=0}^{N-1}\ell(R/I_i)}\leq \max_i\left\{\frac{\operatorname{e}(I_i)}{d!\ell(R/I_i)}\right\}.$$ \end{lemma} \begin{lemma}\label{lemma: Lech colength} For any Noetherian local ring $(R, \m)$ of dimension $d \geq 2$ and any $\m$-primary ideal $I$ of colength at most $N$ we have \[ \operatorname{e}(I) \leq d! \left(1 - \frac{1}{d! N} \right ) \operatorname{e}(R) \ell (R/I). 
\] \end{lemma} \begin{proof} The proof of Lech's inequality via Noether's normalization given in \cite{HSV} shows that it is enough to prove the statement in an equicharacteristic regular local ring of dimension $d$. Since it was shown by Lech \cite[Page 74, after (4.1)]{LechMultiplicity} that in dimension at least two, we always have strict inequality in Theorem~\ref{theorem: Lech} (so that each rational number appeared on the left hand side below must be strictly less than $1$), it follows that \[ \max_{\ell(R/I)\leq N} \left \{\frac{\operatorname{e}(I)}{d! \ell (R/I)}\right\} \leq 1 - \frac{1}{d! N}. \qedhere \] \end{proof} \begin{theorem}\label{theorem: dim d generators} Let $d > 2$ and $N \geq d$ be two positive integers. Then there exists a constant $c = c(N, d) \in (0, 1)$ such that for any Noetherian local ring $(R, \m)$ of dimension $d$ and any $\m$-primary $\m$-full ideal $I$ which can be generated by $N$ elements we have \[ \operatorname{e}(I) \leq d! c \operatorname{e}(R) \ell (R/I). \] \end{theorem} \begin{proof} Let us write $\m=(x_1,\dots,x_n)$ and define $\widetilde{R} = R(t_1,\dots,t_n)$ with the general linear form $z = t_1x_1+\cdots+t_nx_n$. By Remark~\ref{remark: property m-full}, for any $\m$-primary $\m$-full ideal $I$ we have \[ \mu(I(\widetilde{R} /z\widetilde{R})) + \ell (\widetilde{R} /(I, z)\widetilde{R}) = \mu(I) \leq N. \] We apply the same reduction as in the first paragraph of the proof of Theorem~\ref{theorem: dim 2 generators}: we can pass from $R$ to $S := (R/\lc_{\m}^0 (R)) \otimes \widetilde{R}$ because this will not affect multiplicity and will not increase the colength, now $z$ is a nonzerodivisor on $S$ and we can further pass from $S$ to $\operatorname{gr}_z (S) \cong (S/zS)[T]$ and note that if $c$ works for all $T$-homogeneous ideals in $\operatorname{gr}_z (S)$, then it will work for $S$. We now write $\initial_z(IS)$, the initial ideal of $IS$ in $\operatorname{gr}_z(S)\cong (S/zS)[T]$, as $I_0+I_1T+I_2T^2+\cdots+I_{K-1}T^{K-1}+T^K$. By Lemma~\ref{lemma: Mumford}, it is enough to show that there exists $c>0$ such that $\operatorname{e}(I_i)\leq (d-1)!(1-c)\operatorname{e}(S/zS)\ell(S/(z,I_i))$. But now we have $\ell(S/(z,I_i))\leq N$ for each $I_i$ and $\dim(S/zS)=d-1\geq 2$. Thus the assertion follows from Lemma~\ref{lemma: Lech colength} applied to $S/zS$ (and we can actually take $c=1 - \frac{1}{(d-1)!N}$). \end{proof} \begin{corollary} \label{cor: weak Hanes} Let $d \geq 2$ and $N \geq d$ be two positive integers. Then there exists a constant $c = c(N, d) \in (0, 1)$ such that for any Noetherian local ring $(R, \m)$ of dimension $d$ and any $\m$-primary integrally closed ideal $I$ which can be generated by $N$ elements we have \[ \operatorname{e}(I) \leq d! c \operatorname{e}(R) \ell (R/I). \] \end{corollary} \begin{proof} Since we are in dimension at least two, any $\m$-primary integrally closed ideal is $\m$-full, see Remark~\ref{remark: property m-full}. The conclusion follows from Theorem~\ref{theorem: dim 2 generators} when $d=2$ and Theorem~\ref{theorem: dim d generators} when $d>2$. \end{proof} As we mentioned in the introduction, we expect Corollary~\ref{cor: weak Hanes} holds without assuming $I$ is integrally closed (and recall that this is true in equal characteristic by Theorem~\ref{theorem: Hanes}). \begin{conjecture} \label{conj: Hanes} Let $d \geq 2$ and $N \geq d$ be two positive integers. 
Then there exists a constant $c = c(N, d) \in (0, 1)$ such that for any Noetherian local ring $(R, \m)$ of dimension $d$ and any $\m$-primary ideal $I$ which can be generated by $N$ elements we have \[ \operatorname{e}(I) \leq d! c \operatorname{e}(R) \ell (R/I). \] \end{conjecture} \end{document}
arXiv
\begin{document} \begin{frontmatter} \begin{aug} \title{$K$-sample omnibus non-proportional hazards tests based on right-censored data} \author{\fnms{Malka} \snm{Gorfine*} \ead[label=e1]{[email protected]}} \affiliation{Tel Aviv University, Israel} \printead{e1} \and \author{\fnms{Matan} \snm{Schlesinger*}\ead[label=e2]{[email protected]} } \affiliation{Tel Aviv University, Israel} \printead{e2} \and \author{\fnms{Li} \snm{Hsu} \ead[label=e3]{[email protected]}} \affiliation{The Fred Hutchinson Cancer Research Center, Seattle USA} \printead{e3} \runtitle{$K$-sample omnibus non-proportional hazards tests} \runauthor{Gorfine, Schlesinger, Hsu} \end{aug} \begin{abstract} This work presents novel and powerful tests for comparing non-proportional hazard functions, based on sample-space partitions. Right censoring introduces two major difficulties which make the existing sample-space partition tests for uncensored data non-applicable: (i) the actual event times of censored observations are unknown; and (ii) the standard permutation procedure is invalid in case the censoring distributions of the groups are unequal. We overcome these two obstacles, introduce invariant tests, and prove their consistency. Extensive simulations reveal that under non-proportional alternatives, the proposed tests are often of higher power compared with existing popular tests for non-proportional hazards. Efficient implementation of our tests is available in the R package KONPsurv, which can be freely downloaded from {https://github.com/matan-schles/KONPsurv}. \end{abstract} \begin{keyword} \kwd{Consistent test} \kwd{Crossing hazards} \kwd{Non-parametric test} \kwd{Non-proportional hazards} \kwd{Permutation test} \kwd{Right censoring} \kwd{Sample-space partition}\\ \kwd{*Both authors contributed equally to this work.} \end{keyword} \end{frontmatter} \section{Introduction} For the task of comparing survival distributions of two or more groups using censored data, the logrank test is the most popular choice. Its optimality properties under proportional-hazard functions are well known. Although the logrank test is asymptotically valid, it may not be powerful when the proportional hazards assumption does not hold. There are a variety of situations in which the hazard functions are of non-proportional shape. For example, a medical treatment might have adverse effects in the short run, yet be effective in the long run, or a treatment may be beneficial in the short term but gradually lose its effect with time. In such scenarios the hazard functions cross. In general, the longer the follow-up period is, the more likely it is for various non-proportional scenarios to develop \citep{yang2010improved}. Other tests have been proposed that might be better choices for non-proportional hazards under the alternative. \citet{peto1972asymptotically} proposed a test which is similar to the logrank test, but more sensitive to differences in hazards at early survival times than at late ones. Pepe and Fleming \cite{pepe1989weighted,pepe1991weighted} suggested a weighted Kaplan--Meier (KM) test with a weight function that consists of the geometric average of the two censoring survival-function estimators. \citet{yang2010improved} recently proposed another weighted logrank test whose weights are obtained by fitting their model \citep{yang2005semiparametric}, which includes the proportional hazards and the proportional odds models as special cases. In contrast to the logrank and other related tests, the test of \citet{yang2010improved} uses adaptive weights.
Under proportional hazards alternatives, this new adaptively weighted logrank test is optimal. When the hazards are non-proportional, the adaptive weights typically lead to an improvement in power over the logrank test. The test of \citet{yang2010improved}, referred to here as the Yang--Prentice test, is currently considered the leading one in terms of power under a wide range of non-proportional hazards alternatives. However, this test is applicable only to two-sample problems. Moreover, it is not invariant to group labeling: exchanging the group labels between treatment and control would result in a different $p$-value. Thus, in applications with no clear correspondence between the groups and treatment/control labeling, such as testing for differences between females and males, the Yang--Prentice test in its current form is inappropriate. In Section~3.2 we suggest an invariant version of the Yang--Prentice test. In the statistical literature on $K$-sample tests for non-censored data, there exist powerful consistent tests that are based on various sample-space partitions. These include the well-known Kolmogorov--Smirnov and Cramer--von Mises tests \citep{darling1957kolmogorov}, and the Anderson--Darling (AD) family of statistics \citep{pettitt1976two,scholz1987k}. In particular, \citet{thas2004extension} showed that the $K$-sample AD test is basically an average of Pearson statistics in $2 \times K$ contingency tables that are induced by observation-based partitions of the sample space into two subsets. They suggested an extension of the AD test by considering partitions into up to four subsets. \citet{heller2013consistent} proposed the HHG test, a sample-space partition-based non-parametric test for detecting associations between two random vectors of any dimension. When one of the random vectors is a categorical one-dimensional variable, the problem reduces to the $K$-sample problem with an observation-based partition of the sample space into two subsets, using three intervals. This specific partition is adopted in this work and will be described in detail. \citet{heller2016consistent} extended the work of \citet{thas2004extension} by considering sample-space partitions into more than four subsets, and studied test statistics that aggregate over all partitions by summation or maximization, as well as over different partition sizes. They showed, by extensive simulation studies, that increasing the number of partitions can increase power under complex settings in which the density functions intersect 4 times or more. In this work we present new powerful non-parametric and invariant tests for comparing two or more survival distributions using right-censored data. Our proposed methodology is demonstrated and applied using the specific sample-space partition of \citet{heller2013consistent}, which has been shown to be very powerful under non-censored data when the densities intersect 3 times or fewer \citep{heller2016consistent}. Right-censored data introduce two major difficulties: (i) the actual event times of censored observations are unknown; and (ii) the standard permutation procedure of label shuffling is invalid when the censoring distributions of the groups are unequal. We overcome these two obstacles and introduce novel consistent powerful tests for right-censored data.
\section{$K$-sample tests based on sample-space partition} \subsection{Motivation and Notation} Let $X$ be a one-dimensional non-negative random variable, $X\in \mathbb{R}^+$, and let $Y$ be a categorical variable indicating the group label. In the $K$-sample hypothesis-testing problem, the null hypothesis is $H_0:F_1(x)=\ldots=F_K(x)$ for all $x\in \mathbb{R}^+$, and the alternative is $H_1:F_m(x)\neq F_k(x)$ for some $1\le m< k\le K$ and some $x\in \mathbb{R}^+$, where $F_k$ is the cumulative distribution function of group $k$, $k=1,\ldots,K$. We assume that the sample spaces on which these $K$ distributions are defined coincide. $K$ random samples $\mathcal{A}_1,\ldots,\mathcal{A}_K$ are drawn from the respective distributions $F_1,\ldots, F_K$. Let $n_k$ be the number of observations in group $k$, $k=1,\ldots,K$, and $n=\sum_{k=1}^{K}n_{k}$. Assume temporarily no censoring, and consider a pair of observations $X_i \in \mathcal{A}_k$ and $X_j$. Then, in the spirit of the HHG test \citep{heller2013consistent}, we consider the following partition induced by the pair $(i,j)$: $A_{11}(i,j)$ is the number of observations from group $k$ whose distance from $X_i$ is less than or equal to $|X_i - X_j|$; $A_{12}(i,j)$ is the number of observations outside group $k$ whose distance from $X_i$ is less than or equal to $|X_i - X_j|$; $A_{21}(i,j)$ is the number of observations from group $k$ whose distance from $X_i$ is larger than $|X_i - X_j|$; and $A_{22}(i, j)$ is the number of observations outside group $k$ whose distance from $X_i$ is larger than $|X_i - X_j|$. Interestingly, with no censoring, the $A_{lr}$'s, $l,r=1,2$, can be expressed through the empirical cumulative distribution functions. \textcolor{black}{For example, assume $X_i >X_j$ and $Y_i=Y_j=k$. Then, \begin{eqnarray} A_{11}(i,j) &=& \sum_{r=1,r\neq i,j}^n I\{|X_i-X_r| \leq X_i-X_j \}I\{Y_r=k\} \nonumber \\ &=& \sum_{r=1,r\neq i,j}^n I \{ X_j \leq X_r \leq 2X_i - X_j \}I\{Y_r=k\} \nonumber \\ &=& n_k\{\widehat{F}_k(2X_i - X_j) - \widehat{F}_k(X_{j}^-) \} - 2 \nonumber \, , \end{eqnarray} where $\widehat{F}_k(x)= n_k^{-1} \sum_{X_l\in \mathcal{A}_k} I (X_l \le x) $ and $\widehat{F}_k(x^-)= n_k^{-1}\sum_{X_l\in \mathcal{A}_k} I (X_l<x)$. The $-2$ above accounts for excluding observations $i$ and $j$. In general, all observations whose distance from $X_i$ is less than or equal to $|X_i - X_j|$ lie inside the interval $[a_{ij},b_{ij}]$, where $a_{ij}=\min(X_j,2X_i-X_j)$ and $b_{ij}=\max(X_j,2X_i-X_j)$, as illustrated in Figure \ref{interval_illustration}. } More generally, for $Y_i=k$, we get \begin{eqnarray} A_{11}(i,j) &= & n_k\{\widehat{F}_k(b_{ij}) - \widehat{F}_k(a_{ij}^-) \} -1 - I(Y_i=Y_j)\, , \nonumber \\ A_{12}(i,j) &=& \sum_{m=1,m\neq k}^{K}n_m\big\{\widehat{F}_{m}(b_{ij}) - \widehat{F}_{m} (a_{ij}^-) \big\} -I(Y_i \neq Y_j) \, , \nonumber \\ A_{21}(i,j) &=& n_k - A_{11}(i,j) - 1 - I(Y_i=Y_j) \, , \nonumber \\ A_{22}(i,j) &=& \sum_{m=1,m\neq k}^{K}n_m - A_{12}(i,j) - I(Y_i \neq Y_j) \, . \nonumber \end{eqnarray} For each pair $(i,j)$, a $2 \times 2$ contingency table can be constructed with $A_{lr}(i,j)$ as the entry of cell $lr$, $l,r=1,2$, and a total sum of $n-2$.
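To make the construction concrete, the following minimal Python sketch (an illustration only, with hypothetical function and array names; it is not the implementation used in the R package KONPsurv) computes the four cell counts of the table induced by a pair $(i,j)$ in the uncensored case, directly from the distance-based definition above:

\begin{verbatim}
import numpy as np

def uncensored_table(x, y, i, j):
    """2x2 table induced by the pair (i, j), no censoring.
    x: observed values X_1,...,X_n; y: group labels Y_1,...,Y_n.
    Rows: distance to X_i <= |X_i - X_j| (row 1) or > (row 2);
    columns: same group as Y_i (column 1) or not (column 2).
    Observations i and j are excluded, so the entries sum to n - 2."""
    d = abs(x[i] - x[j])                 # radius of the partition
    keep = np.ones(len(x), dtype=bool)
    keep[[i, j]] = False                 # exclude the pair itself
    close = np.abs(x - x[i]) <= d        # inside the interval [a_ij, b_ij]
    same = (y == y[i])                   # belongs to group k = Y_i
    return np.array(
        [[np.sum(keep & close & same),  np.sum(keep & close & ~same)],
         [np.sum(keep & ~close & same), np.sum(keep & ~close & ~same)]])

# toy example with two groups of size 5: the four counts sum to n - 2 = 8
rng = np.random.default_rng(1)
x = rng.exponential(size=10)
y = np.repeat([1, 2], 5)
print(uncensored_table(x, y, i=0, j=3))
\end{verbatim}

The brute-force counts above coincide with the closed-form expressions in terms of $\widehat{F}_k$, since all points within distance $|X_i-X_j|$ of $X_i$ lie in $[a_{ij},b_{ij}]$.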
\textcolor{black}{Under the null hypothesis of equal distributions, the probability of belonging to cell $lr$ equals the product of the marginal probabilities of the $l$th row and the $r$th column.} Therefore, a summary statistic of such a contingency table can be based on either the Pearson chi-squared test statistic or the log-likelihood ratio statistic. {Since it is unknown which pair $(i,j)$ yields the best sample-space partition, namely the one that provides the largest summary statistic, the final test statistic is defined as a sum over all possible $n(n-1)$ partitions induced by the data.} A permutation-based $p$-value can be calculated under random permutations of the group labels. To introduce right censoring, let $C \in \mathbb{R}^+$ be a non-negative random variable indicating the censoring time. Assume that $X$ and $C$ are conditionally independent given $Y$. Define $T$ to be the observed time, namely $T=\min(X,C)$, and let $\Delta=I(X\le C)$. Hence, the observed data consist of $K$ random samples that can be summarized by $(T_i, \Delta_i, Y_i),\ i = 1,\ldots,n$. Note that the different groups may have different censoring distributions. With right-censored data, our proposed test requires special care in evaluating the $A_{lr}$'s, $l,r = 1,2$, by replacing the empirical distribution functions with their respective Kaplan--Meier (KM) estimators, as well as when applying a permutation test with unequal censoring distributions. Both issues are described in detail below. \subsection{The Test Statistic} Let $\widetilde{F}_k$ be the KM estimator of the cumulative distribution function using all observations of group $k$. The KM estimator is defined only up to (and including) the last observed failure time. Define $\gamma_k$ to be the maximum time up to which $\widetilde{F}_k$ can be used for the test statistic. If the largest observed time in group $k$ is a failure time, $\widetilde{F}_k(t)$ is known for the entire range of $t$. In this case, we define $\gamma_k$ to be the maximum possible value of $t$ required for the test, $2 \max_{i=1,\ldots,n}\{T_i\} - \min_{i=1,\ldots,n}\{T_i\}$. However, in case of censoring after the largest observed failure time of group $k$, the KM estimator beyond that failure time is undefined. We thus define the maximum time $\gamma_k$ to be the largest observed failure time. Namely, $$\gamma_k = \left\{ \begin{array}{ll} 2\displaystyle\max_{i=1,\ldots,n}\{T_i\} - \displaystyle\min_{i=1,\ldots,n}\{T_i\} & \quad \mbox{if} \,\,\,\, \displaystyle\max_{i=1,\ldots,n} \{T_i \Delta_i I(Y_i=k) \}=\displaystyle\max_{i=1,\ldots,n} \{T_i I(Y_i=k) \} \\ \displaystyle\max_{i=1,\ldots,n} \{T_i \Delta_i I(Y_i=k)\} & \quad \mbox{if} \,\,\,\, \displaystyle\max_{i=1,\ldots,n} \{T_i \Delta_i I(Y_i=k) \} \neq \displaystyle\max_{i=1,\ldots,n} \{T_i I(Y_i=k) \} \end{array} \right. $$ Define $\gamma_{-k}$ to be the maximum time up to which at least one of the other KM estimators, $\widetilde{F}_m$, $m=1,\ldots,K, m\neq k$, can be used for the test statistic, namely $\gamma_{-k}=\max_{m\neq k}\{\gamma_m\}$. Define $\tau_k$ to be the maximum time point at which the KM estimator is defined in group $k$ and in at least one other group, namely $\tau_k=\min\{\gamma_k,\gamma_{-k}\}$. Then, for each pair of observed failure times $T_i\in \mathcal{A}_k$ and $T_j$ such that $j\neq i,\ \Delta_i= \Delta_j=1$ and $b_{ij} \le \tau_k$, a $2\times2$ contingency table is constructed.
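As a concrete illustration of these definitions (and not of the actual KONPsurv implementation), the following short Python sketch computes the product-limit estimator $\widetilde{F}_k$ of a single group and the limits $\gamma_k$ and $\tau_k$; all function and array names are hypothetical:

\begin{verbatim}
import numpy as np

def km_cdf(t, d):
    """Product-limit (Kaplan--Meier) estimate of F for one group.
    t: observed times of the group; d: event indicators (1 = failure).
    Returns the distinct failure times and the CDF values there; the
    estimator is the right-continuous step function through these points."""
    t, d = np.asarray(t, float), np.asarray(d, int)
    times = np.unique(t[d == 1])
    surv, s = [], 1.0
    for u in times:
        at_risk = np.sum(t >= u)                # number at risk just before u
        events = np.sum((t == u) & (d == 1))    # number of failures at u
        s *= 1.0 - events / at_risk
        surv.append(s)
    return times, 1.0 - np.array(surv)

def gamma_k(t, d, y, k):
    """Upper limit gamma_k of group k, following the case definition above."""
    t, d, y = map(np.asarray, (t, d, y))
    in_k = (y == k)
    last_failure = np.max(t[in_k] * d[in_k])
    if last_failure == np.max(t[in_k]):         # last observation is a failure
        return 2 * np.max(t) - np.min(t)
    return last_failure                         # otherwise stop at the last failure

def tau_k(t, d, y, k):
    """tau_k = min(gamma_k, max over the other groups m of gamma_m)."""
    gammas = {m: gamma_k(t, d, y, m) for m in np.unique(y)}
    return min(gammas[k], max(v for m, v in gammas.items() if m != k))
\end{verbatim}

With $\widetilde{F}_k$, $\gamma_k$ and $\tau_k$ in hand, the censored-data cell counts are obtained as follows.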
The following $A_{lr}^*(i,j),\ l,r=1,2$, are the corresponding expressions of $A_{lr}(i,j)$ obtained by replacing $\widehat{F}$ with $\widetilde{F}$, the KM estimators. In case $\gamma_m<b_{ij}$, the observations of group $m$ are not included in the contingency table induced by the pair $(i,j)$. Namely, for $j\neq i,\ \Delta_i= \Delta_j=1, b_{ij} \le \tau_k$ and $Y_i=k$, \begin{eqnarray} A_{11}^*(i,j)&=& n_k\{\widetilde{F}_k (b_{ij}) - \widetilde{F}_k(a_{ij}^-) \} -1-I(Y_i=Y_j)\, ,\nonumber \\ A_{12}^*(i,j)&=& \sum_{m=1,m\neq k}^{K}n_m\big\{\widetilde{F}_{m}(b_{ij}) - \widetilde{F}_{m} (a_{ij}^-) \big\}I(\gamma_m\ge b_{ij})\ -I(Y_i\neq Y_j)\, , \nonumber \\ A_{21}^*(i,j) &=& n_k - A_{11}^*(i,j) -1-I(Y_i=Y_j)\, , \nonumber \\ A_{22}^*(i,j) &=& \sum_{m=1,m\neq k}^{K}n_mI(\gamma_m\ge b_{ij}) - A_{12}^*(i,j)-I(Y_i\neq Y_j) \, . \nonumber \end{eqnarray} Only pairs of observed failure times are used for the sample-space partitioning (i.e., $\Delta_i=\Delta_j=1$), while censored observations contribute through $\widetilde{F}_k$, $k=1,\ldots,K$. Denote by $n(i,j)$ the number of observations in all the groups included in the contingency table induced by $(i,j)$, namely $n(i,j)=\sum_{k=1}^{K}n_kI(\gamma_k\ge b_{ij})$. Let $S_P$ and $S_{LR}$ be the summary statistics of each contingency table, based on the Pearson chi-squared test statistic $$ S_P(i,j)=\frac{\{n(i,j)-2\}\{A^*_{12}(i,j)A^*_{21}(i,j)-A^*_{11}(i,j)A^*_{22}(i,j)\}^2}{A^*_{1\cdot}(i,j)A^*_{2\cdot}(i,j)A^*_{\cdot 1}(i,j)A^*_{\cdot 2}(i,j)} \, , $$ and the log-likelihood ratio statistic $$ S_{LR}(i,j)=2\sum_{m=1,2}\sum_{k=1,2}A^*_{mk}(i,j)\log \bigg[\frac{\{n(i,j)-2\}A^*_{mk}(i,j)}{A^*_{m\cdot}(i,j)A^*_{\cdot k}(i,j)} \bigg] \, , $$ respectively, where $A^*_{\cdot k}(i,j)= \sum_{m=1,2}A^*_{mk}(i,j)$ and $A^*_{m\cdot}(i,j)=\sum_{k=1,2}A^*_{mk}(i,j)$. In case of at least one zero margin in the contingency table, we set $S_P(i, j)=0$ and $S_{LR}(i, j)=0$. Denote by $$ N=\sum_{k=1}^{K}\sum_{i=1, T_i \in \mathcal{A}_k}^n\sum_{j=1,j\neq i}^{n} \Delta_i \Delta_j I(T_j\le \tau_k) I(b_{ij} \le \tau_k) $$ the total number of tables constructed from the data. Then, our proposed sample-space partition test statistic for equality of $K$ distributions based on right-censored data is defined by $$ Q = \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1, T_i \in \mathcal{A}_k}^n\sum_{j=1,j\neq i}^{n} S(i,j) \Delta_i \Delta_j I(b_{ij} \le \tau_k)\, , $$ where $S(i,j)$ is either the test statistic $S_P(i,j)$ or $S_{LR}(i,j)$. In the case of no right censoring, the number of tables is solely determined by the number of observations. With right-censored data, the number of tables is random and is determined also by the actual observed values, due to the restrictions $\Delta_i= \Delta_j=1$ and $b_{ij} \le \tau_k$. This issue is of high importance for the permutation stage of the test, as elaborated in the next subsection. \subsection{The Permutation Procedure} \label{Test_stat_section} Seemingly, a permutation test can be based on random permutations of the group labels. However, if the censoring distributions of the $K$ groups differ, such a permutation test is invalid, since a significant result can be obtained under the null hypothesis merely due to differences in the censoring distributions. In order to generate random permutations that are independent of $Y$, we adopt the imputation approach suggested by \citet{wang2010testing}.
The main idea consists of randomly permuting the group labels, while for each observation assigned to a group different from its original one, a censoring time is imputed from the censoring distribution of the newly assigned group. If the observation was originally censored, a survival time is also imputed, from the null survival distribution. Let $Y_1^{p},\ldots,Y_n^{p}$ be a random permutation of the group labels. Define $(T_i^p, \Delta_i^p),\ i = 1, \ldots,n, $ by $$ (T_i^p, \Delta_i^p) = \left\{ \begin{array}{ll} (T_i,\Delta_i) & \quad \mbox{if} \,\,\,\, Y_i^p=Y_i \\ (\widetilde{T}_i,\widetilde{\Delta}_i) & \quad \mbox{if} \,\,\,\, Y_i^p \neq Y_i \end{array} \,\,\,\, , \right. $$ where $\widetilde{T}_i=\min(\widetilde{X}_i,\widetilde{C}_i)$ and $\widetilde{\Delta}_i=I(\widetilde{X}_i \le \widetilde{C}_i)$. Here, $\widetilde{C}_i$ is sampled from the estimated censoring distribution of group $Y_i^p$, namely the KM estimator of the censoring distribution of group $Y_i^p$ obtained by reversing the roles of event and censoring. This KM estimator, denoted by $\widehat{G}_{C,Y_i^p}$, is defined up to the largest observed censoring value of that group. Then, each observed censoring time is sampled with probability equal to the jump size of the respective KM estimator. In case $\widehat{G}_{C,Y_i^p}$ is incomplete, i.e., $\widehat{G}_{C,Y_i^p}\big[\max_{j=1,\ldots,n} \{T_j(1-\Delta_j) I(Y_j=Y_i^p)\}\big]>0$, the largest observed censoring time, $\max_{j=1,\ldots,n} \{T_j (1-\Delta_j)I(Y_j=Y_i^p)\}$, is sampled with probability $\widehat{G}_{C,Y_i^p}\big[\max_{j=1,\ldots,n} \{T_j(1-\Delta_j) I(Y_j=Y_i^p)\}\big]$. $\widetilde{X}_i$ is defined by $$ \widetilde{X}_i = \left\{ \begin{array}{ll} X_i & \quad \mbox{if} \,\,\,\, \Delta_i=1 \\ X_i^* & \quad \mbox{if} \,\,\,\, \Delta_i=0 \end{array} \,\,\,\, , \right. $$ where $X_i^{*}$ is sampled from an estimator of $\mbox{pr} (X_i>x|X_i>T_i)$, the conditional distribution of $X$ under the null hypothesis. In practice, $\mbox{pr} (X_i>x|X_i>T_i)$ is replaced by its KM estimator, using all observations from all groups whose observed time is larger than $T_i$. Denote this KM estimator by $\widehat{S}_{cond,T_i}$. The sampling based on $\widehat{S}_{cond,T_i}$ is done in the same fashion as the censoring sampling above, but in case of an incomplete distribution, the value $\max_{i=1,\ldots,n} (T_i \Delta_i) +\varepsilon$ ($\varepsilon $ is any positive number) is sampled with probability $\widehat{S}_{cond,T_i}\{\max_{i=1,\ldots,n} (T_i\Delta_i)\}$, and its respective event indicator is set to $\Delta_i^p=0$, since there is no empirical evidence regarding the potential failure times beyond $\max_{i=1,\ldots,n}\{T_i \Delta_i\}$. When performing a permutation test, the reported $p$-value can be viewed as an approximation of the true $p$-value based on all possible permutations. In the above imputation-based permutation procedure, additional variability in the $p$-value is expected due to the random imputations. To reduce this variability, multiple imputations can be used, such that for each random imputation, $B$ permutations are generated. Assume $M$ imputations are used. Then the $p$-value is defined as the fraction of the $MB$ test statistics that are at least as large as the observed test statistic $Q$. In the following theorem it is argued that our proposed tests are consistent against all alternatives. The proof is presented in detail in the Supplementary Materials.
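Before stating the theorem, the following self-contained Python sketch summarizes, in simplified form, the per-table statistics and the Monte Carlo $p$-value described above (an illustration only; the construction of the tables $A^*_{lr}(i,j)$ and the imputation step are not shown, and the implementation used in practice is the R package KONPsurv):

\begin{verbatim}
import numpy as np

def pearson_stat(table):
    """Per-table Pearson statistic S_P; set to 0 if any margin is zero."""
    (a11, a12), (a21, a22) = table
    total = a11 + a12 + a21 + a22            # equals n(i, j) - 2
    r1, r2, c1, c2 = a11 + a12, a21 + a22, a11 + a21, a12 + a22
    if min(r1, r2, c1, c2) == 0:
        return 0.0
    return total * (a12 * a21 - a11 * a22) ** 2 / (r1 * r2 * c1 * c2)

def loglik_stat(table):
    """Per-table log-likelihood ratio statistic S_LR; 0 if any margin is zero."""
    t = np.asarray(table, dtype=float)
    rows, cols = t.sum(axis=1), t.sum(axis=0)
    if min(rows.min(), cols.min()) == 0:
        return 0.0
    expected = np.outer(rows, cols) / t.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(t > 0, t * np.log(t / expected), 0.0)  # 0 log 0 := 0
    return 2.0 * terms.sum()

def q_statistic(tables, stat=pearson_stat):
    """Q: average of the per-table statistics over the N valid pairs (i, j)."""
    return float(np.mean([stat(tb) for tb in tables]))

def mc_pvalue(q_obs, q_replicates):
    """p-value: fraction of the M*B imputation-permutation replicates of the
    statistic that are at least as large as the observed Q."""
    return float(np.mean(np.asarray(q_replicates) >= q_obs))

# toy illustration on two tables
tables = [np.array([[5, 2], [3, 8]]), np.array([[1, 6], [7, 2]])]
print(q_statistic(tables), q_statistic(tables, stat=loglik_stat))
\end{verbatim}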
\\ \textbf{Theorem:} Let $X$ be a positive failure time random variable, either continuous or discrete, and $Y$ be a categorical random variable with $K$ categories. Let $\pi_k= \lim_{n\to\infty} n_k/n$, $k=1,\ldots,K$. Assume there are at least two cumulative distribution functions $F_g(x)$ and $F_m(x),\ g,m \in \{1,\ldots,K\}$, such that $F_g(x_0)\neq F_m(x_0)$ for some $x_0 \in \mathbb{R}^+$, $\pi_g>0,\ \pi_m>0$, and the conditional censoring distributions are such that $\mbox{pr}(C > x_0|Y=g)>0$ and $\mbox{pr}(C > x_0|Y = m)>0$. Then, the imputation-based permutation test with the test statistic $Q$ is consistent, namely, the power of the test increases to $1$ as $n\to \infty$. \subsection{Computation Time} Table \ref{running_time_table} provides the run time of the proposed tests for one dataset, $K=2$, under the null hypothesis, with one imputation and 1000 permutations, for different total sample sizes $n$ and $n_1=n_2=n/2$. These results were generated on a six-year-old Intel i7-3770 CPU (3.4 GHz), without parallelizing across the cores of the computer. The first two rows are for identical censoring distributions, and the last two are for different censoring distributions. Evidently, even with $n=1000$ observations and low censoring rates, the run time on such a simple computer is no longer than 3.5 minutes. We expect the run time to increase linearly with the number of imputations and permutations. \section{Simulation study} \label{Simulation_study} \subsection{Simulation Design} \label{Simulation_design_section} An extensive numerical study has been performed to systematically examine the behavior of our proposed $K$-sample omnibus non-proportional hazards (KONP) tests under a wide range of alternatives, various sample sizes, and a wide range of censoring distributions, including unequal censoring distributions. The main part of the simulation study was dedicated to the popular 2-sample setting, but settings of $K=3,4,5$ were considered as well. As competitors under the 2-sample setting, the following tests were included: the logrank test; the Peto--Peto weighted logrank test~\citep{peto1972asymptotically}, {which uses a weight function that is very close to the pooled KM estimator}; the Pepe--Fleming weighted KM test~\citep{pepe1989weighted}, {with the geometric mean of the two KM censoring-distribution estimators as the weight function}; and the Yang--Prentice test, {an adaptively weighted logrank test whose weights utilize the hazard ratio obtained by fitting the model of Yang and Prentice~\cite{yang2005semiparametric}}. The tests of~\citet{uno2015versatile} are invalid under unequal censoring distributions (as demonstrated below), and thus are not included in the following power comparisons. Table \ref{unequal_cen} (main text) and Tables \ref{scenario_description} and \ref{prop_design} in the Appendix provide a comprehensive summary of the 17 non-proportional hazards scenarios and the 7 proportional or close-to-proportional hazards scenarios that were studied. For each scenario, the failure and censoring distributions are explicitly provided, and the survival functions of the two groups are plotted. A reference is provided indicating the source of each setting. In short, Scenario A shows differences at mid time points, but similarity at early and late times. Scenarios B--D show differences at early times. Scenario E has equal survival functions at early times and proportional hazards at mid and late times. Scenarios F and G have crossing hazards.
Scenario H has a U-shaped hazard ratio. Scenarios I, J and K have crossing hazards, based on the following hazard-ratio model of \citet{yang2005semiparametric}: \begin{equation}\label{hr_yp} \mbox{HR}(t)=\frac{\lambda_2(t)}{\lambda_1(t)}=\frac{\theta_1 \theta_2}{\theta_1 +(\theta_2-\theta_1)S_1(t)}\, , \,\,\,\, 0<t<\tau_0 \,, \,\,\,\, \theta_1,\theta_2>0 \, , \end{equation} where $\tau_0=\sup \{t:S_1(t)>0 \}$, $\lambda_2$ and $\lambda_1$ are the hazard functions of the two groups, and $S_1$ is the survival function of group $1$. Under Model (\ref{hr_yp}), $\theta_1 = \lim_{t\to 0} \mbox{HR}(t)$ and $\theta_2= \lim_{t\to \tau_0} \mbox{HR}(t)$, since $S_1(t)\to 1$ as $t \to 0$ and $S_1(t)\to 0$ as $t \to \tau_0$. It is also assumed that, for a continuous function $S_1$, $\mbox{HR}(t)$ is a strongly monotone function of $t$, i.e., $\mbox{sign} \{ d \mbox{HR}(t)/dt \}$ is the same for all $t \in (0, \tau_0)$. Scenarios I-1, I-2 and I-3 have crossing hazards under Model (\ref{hr_yp}). In I-1 the hazard functions cross earlier than in I-2 and I-3. Scenarios J-1, J-2 and J-3 have crossing hazards and violate the strong monotonicity assumption. In J-1 and J-2 the hazards are piecewise proportional, and the hazard functions cross at mid time points. In Scenario J-3 the hazard ratio is a continuous function of $t$, and the hazards cross at a late time point. Under Scenarios K-1, K-2 and K-3, Model (\ref{hr_yp}) is violated, but the strong monotonicity assumption on $\mbox{HR}(t)$ holds. In Scenarios K-1 and K-2 the hazards cross at early-mid times, and in Scenario K-3 at mid times. For each scenario described above, four different censoring settings were considered, two with equal and two with unequal censoring distributions. Under equal censoring distributions, the censoring distributions were taken to be similar to those of the corresponding referenced paper. Exponential distributions were used for all other scenarios, with approximately $25\%$ or $50\%$ censoring rates. Under unequal censoring distributions, the censoring distributions of \citet{wang2010testing} were used (Table \ref{unequal_cen}). The specific values of $(a,b,\theta_1,\theta_2)$ are provided in Tables \ref{scenario_description}--\ref{prop_design}. Under small differences, the censoring rates of the two groups are about $40\%$ and $55\%$, while $27\%$ and $55\%$ are the respective values under substantial differences. The various censoring settings are such that the power of a specific test under a specific scenario is not necessarily increasing as the censoring rate decreases. Each of the configurations was studied with $n=100,200,300$ or 400, $n_1=n_2$, and performances are summarized based on 2000 replications. A smaller simulation study was conducted for $K>2$. As competitors, the logrank and Peto--Peto tests were included. For the null scenario $K=3,4,5$ were studied, and under Scenarios D and J-2, $K=3$ was examined. Various sample sizes and a wide range of censoring distributions were considered. A detailed description of these scenarios can be found in the Supplementary Materials. \subsection{The test of \citet{yang2010improved}} \label{YP_inv} Since the Yang--Prentice test is the strongest competitor in terms of power for the $2$-sample setting, we highlight some of its properties. The Yang--Prentice test is based on Model (\ref{hr_yp}), where the indices 1 and 2 indicate the control and treatment groups, respectively. Since this model is asymmetric in terms of $F_1$ and $F_2$, the test is not group-label invariant.
By exchanging the group labels between `treatment' and `control', a different $p$-value would be obtained. This property is unique to this test; all other tests considered in this work are invariant to group labeling. Consequently, in applications with no clear correspondence between the two groups and treatment/control status (e.g., comparing females versus males, or young versus old), it is unclear how the Yang--Prentice test should be applied. In order to make the Yang--Prentice test invariant, we apply a permutation test based on the minimum $p$-value of the two labeling options. Specifically, let $PV_1$ and $PV_2$ be the $p$-values of the Yang--Prentice tests based on $(T_i, \Delta_i, Y_i),\ i = 1, \ldots,n$, and $(T_i, \Delta_i, \widetilde{Y}_i),\ i = 1, \ldots,n$, respectively, where $Y_i\in\{1,2\}$ and $\widetilde{Y}_i=I(Y_i=2)+2I(Y_i=1)$. Then, the Yang--Prentice invariant test statistic is defined by $Q_{YP}=\min \{PV_1,PV_2\}$. The $p$-value of the imputation-based permutation test, based on the statistic $Q_{YP}$, is the fraction of replicates of $Q_{YP}$, under random permutations of the data as described in Section 2.3, that are at least as small as the observed test statistic. Since our work mainly focuses on invariant tests, we report and discuss in the main text the performance of the invariant Yang--Prentice test. The results of the original Yang--Prentice test, with the two possible options of labeling for treatment and control, can be found in the Supplementary Materials. The original Yang--Prentice test (implemented in the R package YPmodel) uses the asymptotic distribution of the test statistic under the null hypothesis. Often, its empirical size is greater than the nominal level, especially under small sample sizes (see the Supplementary Material). In a recent work dealing with interim monitoring using an adaptively weighted log-rank test, \citet{yang2019interim} suggests using the re-sampling method of \citet{lin1993checking} instead of the original asymptotic approach \citep{yang2010improved}, in order to improve the type I error rate. The method of \citet{lin1993checking} is also based on asymptotic results. Alternatively, the imputation approach of \citet{wang2010testing} works very well even in very small sample sizes, and it does not rely on an asymptotic distribution. To make the comparison between the methods consistent, we use the imputation approach for both our proposed tests and the invariant version of the Yang--Prentice test. \subsection{Simulation Results} \label{Simulation_results_section} Figure \ref{Null_equal_censoring} provides the empirical power of the tests under the null hypothesis, with equal and unequal censoring distributions. Evidently, under equal censoring distributions all the tests are valid, as the empirical sizes of the tests are reasonably close to the nominal value $0.05$. On the other hand, under the null hypothesis and unequal censoring distributions, the empirical sizes of the Uno et al. tests are much higher than the nominal value 0.05. For example, under a sample size of $n=400$ and censoring rates of approximately $27\%$ and $55\%$, the empirical size of the Uno et al. $V_2$ bona fide test is $0.099$; the sizes of all the other Uno et al. tests are even higher. The empirical sizes of all the other tests are reasonably close to the nominal value. Thus, the Uno et al. tests are not considered in the rest of this simulation study.
Figure \ref{ref_scen_fig} summarizes the empirical power of the tests under settings generated by others, while Figure \ref{our_scen_fig} is based on scenarios generated by us. As expected, the power of each test increases with the sample size. In some scenarios the power of the tests increases as the censoring rate increases, since in these settings the non-censored observations are concentrated mainly in the parts of the hazards that are closer to proportionality, while the censored observations are located mainly in the non-proportionality region of the hazards. For example, in Scenario I-3 with $n=400$, as the censoring rate increases from about $25\%$ to about $50\%$, the power of Peto--Peto increases by $0.159$, Yang--Prentice by $0.2$, Pepe--Fleming by $0.218$, and logrank by $0.4$. The power increase of our KONP tests is much smaller, 0.012 and 0.009. {Evidently, our KONP tests are often more powerful than all the other tests. These include Scenarios A, C, D, E, F, J-1, J-2, J-3, I-1, K-1, K-2 and K-3. The superiority of KONP over Yang--Prentice under F and I-1 is surprising, since these scenarios follow their Model (\ref{hr_yp}). For example, with $n=400$ and a $25\%$ censoring rate, the empirical power is about $90\%$ for the KONP tests and only $70\%$ for Yang--Prentice. In Scenario G (close to proportional hazards), which also follows Model (\ref{hr_yp}), the results of Yang--Prentice and logrank are similar and often slightly better than those of the KONP tests. In Scenarios J-1 and J-2 our tests are substantially more powerful than all the competitors. For example, in Scenario J-2 with $n=400$ and a $25\%$ censoring rate, the KONP tests have about $95\%$ power, while the second most powerful test is Yang--Prentice with a power of $48\%$.} {In Scenarios G and I-2, and under some of the censoring rates in I-3, the Peto--Peto and Pepe--Fleming tests tend to have the highest power. In contrast, in many of the scenarios in which the survival functions cross, their power is much lower than that of our tests. For example, in Scenario K-1 with $n=400$ and a $25\%$ censoring rate, the power of KONP is $89\%$, while the power of Pepe--Fleming is $42\%$ and that of Peto--Peto is $5\%$.} To conclude, for the $2$-sample setting, in most of the non-proportional hazards settings our proposed KONP tests tend to be more powerful than the other tests, and the differences between $S_P$ and $S_{LR}$ are very small, if any. Results of settings with $K>2$ can be found in Table S1 and Figure S1 of the Supplementary Material. Based on these results we conclude that all the tests are valid, as the empirical sizes of the tests are reasonably close to $0.05$. For $K=3$ and Scenarios D and J-2, we see results similar to those obtained with $K=2$: the KONP tests are often much more powerful than the logrank and Peto--Peto tests. For example, for Scenario D with $n=200$ and $25\%$ censoring rates in all three groups, the power of the KONP tests is approximately $92\%$, while that of Peto--Peto is $49\%$ and that of logrank is $18\%$. {Figure \ref{prop_res} summarizes the power of the 2-sample tests under proportional hazards or close to proportionality. Under these settings the logrank test often has the highest power among the invariant tests, as expected.
The invariant version of the Yang--Prentice test behaves similarly to the logrank test, and the proposed KONP tests sometimes have less power.} \subsection{A Robust Approach} Figures \ref{ref_scen_fig} and \ref{our_scen_fig} of the main text and Table S2 of the Supplementary Material indicate that under the non-proportional hazards scenarios, the proposed KONP tests usually have the largest power among the invariant tests. Under proportional hazards or close to proportionality, the logrank test usually has the largest power among the invariant tests. The invariant version of the Yang--Prentice test is similar to the logrank test under proportional hazards settings, since their model contains the proportional hazards model ($\theta_1=\theta_2$). In case one is interested in a test that is robust and powerful under either non-proportional or proportional hazards, a combined test can be constructed based on the elegant Cauchy-combination test of \citet{liu2018cauchy}, which behaves similarly to a test based on the minimum $p$-value. Denote the $p$-values of our KONP tests by $p\mbox{-value}_P$ and $p\mbox{-value}_{LR}$, respectively, and the $p$-value of the logrank test by $p\mbox{-value}_{lgrnk}$. Then, based on \citet{liu2018cauchy} we define a new test statistic \begin{equation} Cau = [\tan \{ (0.5- p\mbox{-value}_P) \pi \}]/3 + [\tan \{ (0.5-p\mbox{-value}_{LR}) \pi \}]/3 + [\tan \{ (0.5-p\mbox{-value}_{lgrnk}) \pi \}]/3\, , \nonumber \end{equation} and its $p$-value is \begin{equation} p\mbox{-value}_{Cau} = 0.5 - (\arctan Cau)/\pi \, . \nonumber \end{equation} The candidate tests to be included in ${Cau}$ are tests that are powerful under non-proportional hazards (i.e., the KONP tests) or under proportional hazards (i.e., the logrank test and the invariant version of the Yang--Prentice test). Due to the similarity in power between the logrank test and the invariant version of the Yang--Prentice test under proportional or close-to-proportional hazards, and due to the high computational burden of the invariant Yang--Prentice test, only the logrank test is included. {Figure S3 of the Supplementary Materials provides the empirical Type-I error of the two KONP tests, the logrank test and the robust $Cau$ test. Evidently, the sizes of the tests are reasonably close to 0.05. Figures S4--S5 of the Supplementary Materials summarize the empirical power, based on 1000 replications, of the two KONP tests, the logrank test and the test statistic $Cau$. Often, the $Cau$ test loses some power compared to the larger of the KONP and logrank powers, but the loss is relatively small. Table S4 of the Supplementary Materials provides the power values of these tests under all the studied alternatives, along with two additional combined tests: the test of Lee~\cite{lee2007}, which is based on the maximum of two weighted logrank test statistics; and the MaxCombo test~\cite{lin2019}, which is based on the maximum of the logrank and three weighted logrank test statistics (see the Supplementary Materials for details). The Lee and MaxCombo tests perform very similarly. In general, our $Cau$ test outperforms the Lee and MaxCombo tests, in terms of power, in all the sub-scenarios (i.e., various sample sizes and censoring patterns) of type B, C, H, J-1, and J-2, and in some of the sub-scenarios of A, F, I-1, I-2, I-3, K-1, K-2, K-3, and P. All of these are non-proportional hazards scenarios. Under proportional hazards or close to proportionality, i.e., Settings L, M, N, O, P and Q, the Lee test or the MaxCombo test outperforms $Cau$.
Interestingly, the cases where Lee or MaxCombo tests are with higher power than $Cau$, the power loss by using $Cau$ is relatively small; while this is not always the case when $Cau$ outperforms Lee or MaxCombo. For example, under Setting B with $n=200$, the power of $Cau$ equals 0.919 while the power values of Lee and MaxCombo are 0.499 and 0.450, respectively.} Our R package KONPsurv \citep{schl2019} applies the above robust test $Cau$ as well. \section{Real data examples} \subsection{The Gastrointestinal Tumor Data} \label{read_data_section} The Gastrointestinal Tumor Study Group \citep{schein1982comparison} compared chemotherapy with combined chemotherapy and radiation therapy, in the treatment of locally unresectable gastric cancer. This dataset was used in \citet{yang2010improved} to demonstrate the utility of their test. Each treatment arm had 45 patients, and two observations of the chemotherapy group and six of the combination group were censored. The primary outcome measure was time to death. The KM survival curves of time to death, in each treatment group are provided in Figure~\ref{gastric_plot}. To apply the Yang--Prentice test, we considered chemotherapy as the control group and chemotherapy plus radiation therapy as the treatment group, which is named Yang--Prentice 1. The Yang--Prentice test with reversed group labeling is denoted by Yang--Prentice 2. Table \ref{gastric_pv} shows the $p$-values of testing for equality of the survival curves of time to death, of the two treatment groups, against a two-sided alternative, with each of the tests considered in the simulation study. For our tests and the Yang-Prentice invariant test, $10$ imputations and $10^4$ permutations for each imputation, were used. Evidently, the smallest $p$-values are observed under our proposed KONP tests and the Cauchy-combination test $Cau$. {\subsection{Urothelial Carcinoma} Few options exist for patients with locally advanced or metastatic urothelial carcinoma after progression with platinum-based chemotherapy. Powel et al. \cite{powles2018atezolizumab} aimed to assess the safety and efficacy of atezolizumab versus chemotherapy in this patient population. Their study consists of a multi-center, open-label, phase 3 randomised controlled trial conducted at 217 academic medical centers and community oncology practices mainly in Europe, North America, and the Asia-Pacific region. The primary endpoint was overall survival. Figure S3 of the Supplementary Material of \citet{powles2018atezolizumab} provides the overall survival KM curves of atezolizumab versus chemotherapy based on 316 and 309 patients, respectively. Although the detailed survival data are unavailable, inspired by \cite{Roych2019} we used Guyot et al. \cite{Guyot2012} algorithm that maps from digitised curves back to KM data, by finding numerical solutions to the inverted KM equations, using available information on number of events and numbers at risk. The DigitizeIt software was used for reading the coordinates of the KM curves from the published graph. Figure \ref{Uro_plot} provides the KM curves by treatment arm, and the last column of Table \ref{gastric_pv} shows the $p$-values of testing for equality of the survival curves of time to death against a two-sided alternative. Evidently, the smallest $p$-values are observed under our proposed KONP tests and the second best is the test of Lee. Our Cauchy combined test also performs robustly. 
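For completeness, the Cauchy-combination rule used to obtain the $Cau$ $p$-values reported in Table \ref{gastric_pv} is straightforward to compute. The following minimal Python sketch (an illustration only; the R package KONPsurv provides this combination as part of its implementation) combines three $p$-values with equal weights, exactly as defined above:

\begin{verbatim}
import math

def cauchy_combination(pvals):
    """Cauchy combination of p-values (Liu and Xie, 2019): equal-weight
    average of tan{(0.5 - p) * pi}, mapped back to a p-value."""
    cau = sum(math.tan((0.5 - p) * math.pi) for p in pvals) / len(pvals)
    return 0.5 - math.atan(cau) / math.pi

# combining the two KONP p-values and the logrank p-value of the
# gastrointestinal tumor data reproduces, up to rounding, the reported
# Cau p-value of 0.0164
print(round(cauchy_combination([0.0109, 0.0108, 0.6350]), 4))
\end{verbatim}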
} \section{Discussion and Conclusions} \label{Discussion} The proposed KONP tests are based on partitions of the sample space into two subsets, corresponding to three intervals, as in the HHG test \citep{heller2013consistent}. An extensive simulation study shows that when the hazard curves are non-proportional, the KONP tests are often more powerful than all the other tests. In particular, the proposed tests are even more powerful than the Yang--Prentice test under their model with non-proportional hazards. The simulation results show very little difference in power, if any, between the Pearson chi-squared test statistic and the log-likelihood ratio statistic. Since the chi-squared statistic is slightly more powerful, this test statistic is recommended. Other partitions and summary statistics can easily be adopted. In particular, one may consider the extended Anderson--Darling tests of \citet{thas2004extension} and \citet{heller2016consistent}, with finer sample-space partitions and test statistics that aggregate over all partitions by summation or maximization. Nevertheless, for non-censored data, \citet{heller2016consistent} showed by simulations (see their Table 1) that increasing the number of partitions can improve power over the HHG 2-sample test under settings in which the density functions intersect 4 times or more. Otherwise, the HHG 2-sample test tends to be more powerful. Figure \ref{density_fig} in Appendix \ref{long_table} displays the densities of the 17 non-proportional hazards scenarios studied in this work. Evidently, the scenarios considered by others and by us involve fewer than 4 density intersections. A simulation study of the Anderson--Darling test statistic with a sample-space partition into two intervals yields lower power than the proposed KONP tests. A comprehensive comparison with other sample-space partitions and aggregations could be a topic of future research. This work suggests tests that accommodate right-censored data. Since the tests are based on the KM estimator, it seems that a modification for left-truncated data might be possible. However, additional work is required to modify the imputation-permutation approach for left-truncated data. Implementation of our tests, KONP-P, KONP-LR and $Cau$, is available in the R package KONPsurv \citep{schl2019}, which can be freely downloaded from CRAN. \section*{Supplementary materials} \label{SM} Supplementary materials include: (1) The proof of the theorem. (2) A description of some of the tests included in the simulation study. (3) A description and results of the $K$-sample simulation settings with $K=3,4,5$. (4) Plots of the empirical power results of the robust test $Cau$. (5) The exact empirical power values used for generating all the plots in the main text and in the Supplementary Material file.
\begin{figure} \caption{Sample-Space Partition} \label{interval_illustration} \end{figure} \begin{table} \centering \caption{Computation Time (seconds) } \begin{tabular}{cccccc} Censoring rates & \multicolumn{5}{c}{$n$} \\ of groups 1 and 2 & 100 & 200 & 300 & 400 & 1000 \\ (25\%,25\%) & 1.7 & 7.1 & 16.5 & 30.0 & 204.2 \\ (50\%,50\%) & 0.9 & 3.5 & 8.0 & 14.5 & 97.3 \\ (27\%,55\%) & 0.9 & 3.8 & 8.8 & 16.1 & 114.1 \\ (40\%,55\%) & 0.8 & 3.2 & 7.5 & 13.7 & 93.8 \\ \end{tabular}\label{running_time_table} \end{table} \begin{table} \centering \caption{Unequal Censoring Distributions} \begin{tabular}{cccc} Group & Small difference & Substantial difference \\ 1 & $ \min \{U(a,b) , Exp(\theta_1) \}$ & $ \min \{U(a,b) , Exp(\theta_1) \}$ \\ 2 & $ \min \{U(a,b) , Exp(\theta_2) \}$ & $ U(a,b)$ \\ \end{tabular}\label{unequal_cen} \end{table} \begin{figure} \caption{Empirical Power Under the Null: top - equal censoring rates, bottom - unequal censoring rates} \label{Null_equal_censoring} \end{figure} \begin{figure}\label{ref_scen_fig} \end{figure} \begin{figure}\label{our_scen_fig} \end{figure} \begin{figure}\label{prop_res} \end{figure} \begin{table} \centering \caption{Examples: Gastrointestinal Tumor Study (GST) and Urothelial Carcinoma Data (UCD)} \begin{tabular}{lcc} Test & GST $p$-value & UCD $p$-value\\ KONP Pearson & 0.0109 & 0.0049 \\ KONP LR & 0.0108 & 0.0049 \\ Cauchy combination $Cau$ & 0.0164 & 0.0071 \\ Yang--Prentice 1 & 0.0304 & 0.0186 \\ Yang--Prentice 2 & 0.0800 & 0.0252 \\ Yang--Prentice Invariant & 0.0479 & 0.0219 \\ Logrank & 0.6350 & 0.0673 \\ Pepe--Fleming & 0.9464 & 0.2362 \\ Peto--Peto & 0.0465 &0.3807 \\ Lee & 0.0968 & 0.0054\\ MaxCombo & 0.0908 & 0.0061\\ \end{tabular}\label{gastric_pv} \end{table} \begin{figure} \caption{Gastrointestinal Tumor Study: KM Curves} \label{gastric_plot} \end{figure} \begin{figure} \caption{Urothelial Carcinoma Data: KM Curves} \label{Uro_plot} \end{figure} \begin{thebibliography}{18} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Yang and Prentice(2010)]{yang2010improved} Yang S and Prentice RL. \newblock Improved logrank-type tests for survival data using adaptive weights. \newblock \emph{Biometrics}, 66\penalty0 (1):\penalty0 30--38, 2010. \bibitem[Peto and Peto(1972)]{peto1972asymptotically} Peto R and Peto J. \newblock Asymptotically efficient rank invariant test procedures. \newblock \emph{Journal of the Royal Statistical Society. Series A}, pages 185--207, 1972. \bibitem[Pepe and Fleming(1989)]{pepe1989weighted} Pepe MS and Fleming TR. \newblock Weighted {K}aplan--{M}eier statistics: a class of distance tests for censored survival data. \newblock \emph{Biometrics}, pages 497--507, 1989. \bibitem[Pepe and Fleming(1991)]{pepe1991weighted} Pepe MS and Fleming TR. \newblock Weighted {K}aplan--{M}eier statistics: {L}arge sample and optimality considerations. \newblock \emph{Journal of the Royal Statistical Society. Series B}, pages 341--352, 1991. \bibitem[Yang and Prentice(2005)]{yang2005semiparametric} Yang S and Prentice RL. \newblock Semiparametric analysis of short-term and long-term hazard ratios with two-sample survival data. \newblock \emph{Biometrika}, 92\penalty0 (1):\penalty0 1--17, 2005. \bibitem[Darling(1957)]{darling1957kolmogorov} Darling DA. \newblock The {K}olmogorov--{S}mirnov, {C}ramer--von {M}ises tests. 
\newblock \emph{Annals of Mathematical Statistics}, 28\penalty0 (4):\penalty0 823--838, 1957. \bibitem[Pettitt(1976)]{pettitt1976two} Pettitt AN. \newblock A two-sample {A}nderson--{D}arling rank statistic. \newblock \emph{Biometrika}, 63\penalty0 (1):\penalty0 161--168, 1976. \bibitem[Scholz and Stephens(1987)]{scholz1987k} Scholz FW and Stephens MA. \newblock $k$-sample {A}nderson--{D}arling tests. \newblock \emph{Journal of the American Statistical Association}, 82\penalty0 (399):\penalty0 918--924, 1987. \bibitem[Thas and Ottoy(2004)]{thas2004extension} Thas O and Ottoy JP. \newblock An extension of the {A}nderson--{D}arling $k$-sample test to arbitrary sample space partition sizes. \newblock \emph{Journal of Statistical Computation and Simulation}, 74\penalty0 (9):\penalty0 651--665, 2004. \bibitem[Heller et~al.(2013)Heller, Heller, and Gorfine]{heller2013consistent} Heller R, Heller Y, and Gorfine M. \newblock A consistent multivariate test of association based on ranks of distances. \newblock \emph{Biometrika}, 100\penalty0 (2):\penalty0 503--510, 2013. \bibitem[Heller et~al.(2016)Heller, Heller, Kaufman, Brill, and Gorfine]{heller2016consistent} Heller R, Heller Y, Kaufman S, Brill B, and Gorfine M. \newblock Consistent distribution-free ${K}$-sample and independence tests for univariate random variables. \newblock \emph{Journal of Machine Learning Research}, 17\penalty0 (1):\penalty0 978--1031, 2016. \bibitem[Wang et~al.(2010)Wang, Lagakos, and Gray]{wang2010testing} Wang R, Lagakos SW, and Gray RJ. \newblock Testing and interval estimation for two-sample survival comparisons with small sample sizes and unequal censoring. \newblock \emph{Biostatistics}, 11\penalty0 (4):\penalty0 676--692, 2010. \bibitem[Uno et~al.(2015)Uno, Tian, Claggett, and Wei]{uno2015versatile} Uno H, Tian L, Claggett B, and Wei LJ. \newblock A versatile test for equality of two survival functions based on weighted differences of {K}aplan--{M}eier curves. \newblock \emph{Statistics in medicine}, 34\penalty0 (28):\penalty0 3680--3695, 2015. \bibitem[Yang(2019)]{yang2019interim} Yang S. \newblock Interim monitoring using the adaptively weighted log-rank test in clinical trials for survival outcomes. \newblock \emph{Statistics in medicine}, 38\penalty0 (4):\penalty0 601--612, 2019. \bibitem[Lin et~al.(1993)Lin, Wei, and Ying]{lin1993checking} Lin DY, Wei LJ, and Ying Z. \newblock Checking the cox model with cumulative sums of martingale-based residuals. \newblock \emph{Biometrika}, 80\penalty0 (3):\penalty0 557--572, 1993. \bibitem[Liu and Xie(2019)]{liu2018cauchy} Liu Y and Xie J. \newblock Cauchy combination test: a powerful test with analytic p-value calculation under arbitrary dependency structures. \newblock \emph{Journal of the American Statistical Association} 1-18, 2019. \bibitem[Lee (2007)]{lee2007} Seung-Hwan L. \newblock On the versatility of the combination of the weighted log-rank statistics. \newblock \emph{Computational Statistics \& Data Analysis} , 51(12), 6557--6564, 2007. \bibitem[Lin et~al.(2019)]{lin2019} Lin RS, et al. \newblock Alternative Analysis Methods for Time to Event Endpoints under Non-proportional Hazards: A Comparative Analysis. \newblock \emph{arXiv preprint} arXiv:1909.09467, 2019. \bibitem[Schlesinger and Gorfine(2019)]{schl2019} Schlesinger M and Gorfine M. \newblock Konp tests: Powerful k-sample tests for right-censored data. \newblock 2019. \newblock R package version 1.0.1. 
\bibitem[Schein and Group(1982)]{schein1982comparison} Schein PS and Gastrointestinal Tumor~Study Group. \newblock A comparison of combination chemotherapy and combined modality therapy for locally advanced gastric carcinoma. \newblock \emph{Cancer}, 49\penalty0 (9):\penalty0 1771--1777, 1982. \bibitem[Powel et~al.(2018)]{powles2018atezolizumab} Powles T et al. \newblock Atezolizumab versus chemotherapy in patients with platinum-treated locally advanced or metastatic urothelial carcinoma (IMvigor211): a multicentre, open-label, phase 3 randomised controlled trial. \newblock \emph{The Lancet} 391: 748--757, 2018. \bibitem[Roychoudhury et~al.(2019)]{Roych2019} Roychoudhury S et al. \newblock Robust Design and Analysis of Clinical Trials With Non-proportional Hazards: A Straw Man Guidance from a Cross-pharma Working Group. \newblock \emph{arXiv preprint} arXiv:1908.07112, 2019. \bibitem[Guyot et~al.(2012)]{Guyot2012} Guyot P, Ades AE, Ouwens MJ, and Welton NJ. \newblock Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves. \newblock \emph{BMC medical research methodology}, (12):\penalty1 9, 2012. \end{thebibliography} \setcounter{table}{0} \setcounter{figure}{0} \renewcommand{S\arabic{table}}{A\arabic{table}} \renewcommand{S\arabic{figure}}{A\arabic{figure}} \section{Appendix} \subsection{Detailed Description of the Simulation Settings} \label{long_table} {\footnotesize \begin{longtable}{|l|l|l|l|l|} \caption{Description of the Non-Proportional Hazards Simulation Scenarios\label{scenario_description}} \\ \hline \multicolumn{1}{|c|}{Scenario} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Failure time distribution \\ and reference\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Graphical \\ description\end{tabular}} & \multicolumn{2}{c|}{Censoring distribution} \\ \hline Null & \begin{tabular}[c]{@{}l@{}}$F_1(t) = log$-$Logistic(1,1)$ \\ \\ $F_2(t) = log$-$Logistic(1,1)$\\ \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{Null.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.1,0)$ \\ \\ unequal: $(a,b)=(0,10)$\\ $(\theta_1,\theta_2)=(0.85,0.25)$\end{tabular}} \\ \hline A & \begin{tabular}[c]{@{}l@{}}$ F_1(t) = \left\{ \begin{array}{ll} Weibull(0.849,10) & \quad t \le 3 \\ U(3,50.625) & \quad 3<t \le 33 \\ Weibull(0.849,10) & \quad t>33 \end{array} \right.$ \\ \\ $F_2 = Weibull(0.849,10)$\\ \\ Difference in middle times, \\ Uno et al. (2015)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{A.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Weibull(\alpha,\beta)$\\ $(\alpha,\beta)=(18,16), (1.5,9)$ \\ \\ unequal: $(a,b)=(2,30)$\\ $(\theta_1,\theta_2)=(0.06,0.04)$\end{tabular}} \\ \hline B & \begin{tabular}[c]{@{}l@{}}$ F_1(t) = \left\{ \begin{array}{ll} U(0,50) & \quad t \le 3 \\ U(3,12.347) & \quad 3<t \le 8 \\ Weibull(0.849,10) & \quad t>8 \end{array} \right.$ \\ \\ $F_2(t) = Weibull(0.849,10)$\\ \\ Difference in early times, \\ Uno et al. 
(2015)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{B.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Weibull(\alpha,\beta)$\\ $(\alpha,\beta)=(10,15), (3,7.5)$ \\ \\ unequal: $(a,b)=(2,30)$\\ $(\theta_1,\theta_2)=(0.06,0.04)$\end{tabular}} \\ \hline C & \begin{tabular}[c]{@{}l@{}}$ F_1(t) = \left\{ \begin{array}{ll} Exp(0.5) & \quad t \le 0.57 \\ Exp(1.5) & \quad 0.57<t \le 1.1 \\ Exp(1) & \quad t>1.1 \end{array} \right.$ \\ $ F_2(t) = \left\{ \begin{array}{ll} Exp(1.5) & \quad t \le 0.56 \\ Exp(2/9) & \quad 0.56<t \le 1.1 \\ Exp(1) & \quad t>1.1 \end{array} \right.$\\ Difference in early times, \\ Pepe and Fleming (1989; 1991)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{C.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim U(\alpha,\beta)$\\ $(\alpha,\beta)=(1,2), (0,1.6)$ \\ \\ unequal: $(a,b)=(0.01,3)$\\ $(\theta_1,\theta_2)=(0.5,0.8)$\end{tabular}} \\ \hline D & \begin{tabular}[c]{@{}l@{}}$ F_1(t) = \left\{ \begin{array}{ll} Exp(0.5) & \quad t \le 0.44 \\ Exp(0.1) & \quad 0.44<t \le 1.05 \\ Exp(1.5) & \quad 1.05<t \le 1.47 \\ Exp(1) & \quad t>1.47 \end{array} \right.$ \\ $ F_2(t) = \left\{ \begin{array}{ll} Exp(1.5) & \quad t \le 0.38 \\ Exp(0.1) & \quad 0.38<t \le 1.02 \\ Exp(0.5) & \quad 1.02<t \le 1.47 \\ Exp(1) & \quad t>1.47 \end{array} \right.$\\ Difference in early times, \\ Pepe and Fleming (1989; 1991)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{D.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim U(\alpha,\beta)$\\ $(\alpha,\beta)=(1.1,3), (0.1,2.1)$ \\ \\ unequal: $(a,b)=(0.5,3.5)$\\ $(\theta_1,\theta_2)=(0.5,0.3)$\end{tabular}} \\ \hline E & \begin{tabular}[c]{@{}l@{}}$ F_1(t) = Exp(1)$ \\ \\ $ F_2(t)= \left\{ \begin{array}{ll} Exp(1) & \quad t \le 0.3 \\ Exp(2) & \quad t>0.3 \end{array} \right.$\\ \\Proportional difference in late times, \\ Pepe and Fleming (1989; 1991)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{E.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim U(\alpha,\beta)$\\ $(\alpha,\beta)=(0.9,1.2), (0.1,1.1)$ \\ \\ unequal: $(a,b)=(0.01,2.3)$\\ $(\theta_1,\theta_2)=(0.5,0.8)$\end{tabular}} \\ \hline F & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=1-\{1+\exp(2)t\}^{-\exp(1)}$ \\ \\ $F_2(t)= log$-$Logistic(1,1)$ \\ \\ Yang and Prentice Model (1), \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{F.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.35,0.01)$ \\ \\ unequal: $(a,b)=(0.5,5)$\\ $(\theta_1,\theta_2)=(0.7,0.25)$\end{tabular}} \\ \hline G & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=1-\{1+t/ \exp(2)\}^{-\exp(1)}$ \\ \\ $F_2(t) = log$-$Logistic(1,1)$ \\ \\ Yang and Prentice Model (1), \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{G.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.4,0.4)$ \\ \\ unequal: $(a,b)=(0.5,7)$\\ $(\theta_1,\theta_2)=(0.2,0.4)$\end{tabular}} \\ \hline H & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)= \left\{ \begin{array}{ll} Exp(2) & \quad t \le 0.5 \\ Exp(0.5) & \quad 0.5<t \le 1.5 \\ 
Exp(2) & \quad t>1.5 \end{array} \right.$ \\ \\ U shape hazard ratio, \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{H.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(0.25,-0.7)$ \\ \\ unequal: $(a,b)=(0,2.5)$\\ $(\theta_1,\theta_2)=(0.5,0.25)$\end{tabular}} \\ \hline I-1 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)=1-\{4\exp(t)+3\}^{-0.5}$ \\ \\ Yang and Prentice Model (1), \\ \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=23mm]{I-1.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.32,1)$ \\ \\ unequal: $(a,b)=(0,4)$\\ $(\theta_1,\theta_2)=(0.8,0.4)$\end{tabular}} \\ \hline I-2 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)=1-\big[\{\exp(t)+3\}/4\big]^{-2}$ \\ \\ Yang and Prentice Model (1) \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=23mm]{I-2.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.3,0.85)$ \\ \\ unequal: $(a,b)=(0,4)$\\ $(\theta_1,\theta_2)=(0.8,0.25)$\end{tabular}} \\ \hline I-3 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=\{5/(0.5t+5)\}^5$ \\ \\ $F_2(t)=log$-$Logistic(1,1)$ \\ \\ Yang and Prentice Model (1) \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{I-3.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.2,0.25)$ \\ \\ unequal: $(a,b)=(0,10)$\\ $(\theta_1,\theta_2,)=(0.4,0.25)$\end{tabular}} \\ \hline J-1 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)= \left\{ \begin{array}{ll} Exp(2) & \quad t \le 0.25 \\ Exp(0.6) & \quad t>0.25 \end{array} \right.$ \\ \\ strong monotone hazards ratio\\ assumption does not hold \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{J-1.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.3,1)$ \\ \\ unequal: $(a,b)=(0,4)$\\ $(\theta_1,\theta_2)=(0.9,0.5)$\end{tabular}} \\ \hline J-2 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)= \left\{ \begin{array}{ll} Exp(1) & \quad t \le 0.1 \\ Exp(1.7) & \quad 0.1<t \le 0.45 \\ Exp(0.5) & \quad t>0.45 \end{array} \right.$ \\ \\ strong monotone hazards ratio\\ assumption does not hold \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{J-2.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.3,1)$ \\ \\ unequal: $(a,b)=(0,4)$\\ $(\theta_1,\theta_2)=(0.9,0.5)$\end{tabular}} \\ \hline J-3 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)= \left\{ \begin{array}{ll} \exp\{0.5t^2-t\} & \quad t \le 1 \\ \exp\{-0.5t^2+t-1\} & \quad t>1 \end{array} \right.$ \\ \\ strong monotone hazards ratio\\ assumption does not hold \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{J-3.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.25,0.75)$ \\ \\ unequal: $(a,b)=(0,5)$\\ $(\theta_1,\theta_2)=(0.9,0.3)$\end{tabular}} \\ \hline K-1 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (1)$ \\ \\ $F_2(t)=\{4\exp(1.7t)-3\}^{-1/3.4}$ \\ \\ strong monotone hazards 
ratio \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{K-1.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.3,0.9)$ \\ \\ unequal: $(a,b)=(0,5)$\\ $(\theta_1,\theta_2)=(0.9,0.3)$\end{tabular}} \\ \hline K-2 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=Exp (1)$ \\ \\ $F_2(t)=\{8\exp(2t)-7\}^{-0.25}$ \\ \\ strong monotone hazards ratio \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{K-2.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.3,1)$ \\ \\ unequal: $(a,b)=(0,5)$\\ $(\theta_1,\theta_2)=(0.9,0.45)$\end{tabular}} \\ \hline K-3 & \begin{tabular}[c]{@{}l@{}}$ F_1(t)= Exp (0.4)$ \\ \\ $F_2(t)=\big[\{2\exp(t)+64\}/66\big]^{-2.5}$ \\ \\ strong monotone hazards ratio \end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{K-3.jpeg} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Exp(\lambda)$\\ $\lambda=(0.13,35)$ \\ \\ unequal: $(a,b)=(0,10)$\\ $(\theta_1,\theta_2)=(0.3,0.15)$\end{tabular}} \\ \hline \end{longtable}} {\footnotesize \begin{longtable}[c]{|l|l|l|l|l|} \caption{Additional 2-samples settings: proportional hazards and close to proportional hazards \label{prop_design}}\\ \hline \multicolumn{1}{|c|}{Scenario} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Failure time distribution \\ and reference\end{tabular}} & \multicolumn{1}{c|}{Graphical description} & \multicolumn{2}{c|}{Censroing distribution} \\ \hline L & \begin{tabular}[c]{@{}l@{}}$F_1(t) = Weibull(0.849,20)$ \\ \\ $F_2(t) = Weibull(0.849,10)$\\ \\ Proportional hazards, \\ Uno et al. (2015)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{L} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim Weibull(\alpha,\beta)$\\ $(\alpha,\beta)=(5,24), (1.5,12)$ \\ \\ unequal: $(a,b)=(0,40)$\\ $(\theta_1,\theta_2)=(0.025,0.05)$\end{tabular}} \\ \hline M & \begin{tabular}[c]{@{}l@{}}$ F_1(t) = \left\{ \begin{array}{ll} Weibull(4,1) & \quad t \le 0.5 \\ Weibull(2,1.5) & \quad t>0.5 \end{array} \right.$ \\ \\ $F_2 = \left\{ \begin{array}{ll} Weibull(2.2,1) & \quad t \le 0.5 \\ Weibull(1.5,1.5) & \quad t>0.5 \end{array} \right.$\\ \\ Substantial difference in early times, \\ Pepe and Fleming (1989; 1991)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=24mm]{M} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal: $C \sim Weibull(\alpha,\beta)$\\ $(\alpha,\beta)=(0.9,5.5), (0.35,3.4)$ \\ \\ unequal: $(a,b)=(0,4.5)$\\ $(\theta_1,\theta_2)=(0.25,0.14)$\end{tabular}} \\ \hline N & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=1-(1+t)^{exp(-0.5)}$ \\ \\ $F_2(t) = log$-$Logistic(1,1)$ \\ \\ Yang and Prentice model, \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=24mm]{N} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(0.75,-0.1)$ \\ \\ unequal: $(a,b)=(0,12)$\\ $(\theta_1,\theta_2)=(1.5,0.4)$\end{tabular}} \\ \hline O & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=1-(1+t)^{exp(-1)}$ \\ \\ $F_2(t) = log$-$Logistic(1,1)$ \\ \\ Yang and Prentice model, \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=24mm]{O} \end{minipage} & 
\multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(-0.6,-1.8)$ \\ \\ unequal: $(a,b)=(0,8)$\\ $(\theta_1,\theta_2)=(2,0.5)$\end{tabular}} \\ \hline
P & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=1-\{1+t\exp(1)\}^{-1}$ \\ \\ $F_2(t) = log$-$Logistic(1,1)$ \\ \\ Yang and Prentice model, \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{P} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(0.6,-0.5)$ \\ \\ unequal: $(a,b)=(0,8)$\\ $(\theta_1,\theta_2)=(2,0.5)$\end{tabular}} \\ \hline
Q & \begin{tabular}[c]{@{}l@{}}$ F_1(t)=1-\{1+t/ \exp(1)\}^{-exp(1)}$ \\ \\ $F_2(t) = log$-$Logistic(1,1)$ \\ \\ Yang and Prentice model, \\ Yang and Prentice (2010)\end{tabular} & \begin{minipage}{.20\textwidth} \ \includegraphics[width=\linewidth, height=25mm]{Q} \end{minipage} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(0.7,-0.15)$ \\ \\ unequal: $(a,b)=(0,8)$\\ $(\theta_1,\theta_2)=(0.9,0.3)$\end{tabular}} \\ \hline \end{longtable} }
\begin{figure} \caption{Density plots of the 17 simulation scenarios: red (green) - density function of treatment group 1 (2).} \label{density_fig} \end{figure}
\renewcommand{\thetable}{S\arabic{table}} \renewcommand{\thefigure}{S\arabic{figure}}
\section{Supplementary Materials} This Supplementary Material file includes the proof of the theorem, additional simulation results, and additional details of the simulation results summarized in the paper.
\begin{center} \textbf{Proof of the theorem for a continuous survival time} \end{center} \label{proof_contin}
For simplicity, we show the proof using the Pearson chi-squared test statistic. The proof using the likelihood ratio test statistic is very similar and therefore omitted. The reasoning is based on the proofs of Heller et al. (2013). We show in the following that for an arbitrary fixed $\alpha \in (0,1)$, if $H_0$ is false then $\lim_{n\to\infty} \mbox{pr} \big(Q-q_{1-\alpha} >0 \big)=1$, where $q_{1-\alpha}$ is the $1-\alpha$ quantile of the test statistic under the null distribution. Assume $X\in \mathbb{R}^+$ has a continuous distribution given $Y$, denoted by $f_{X|Y}(\cdot|\cdot)$, and let $f_X^*(x)=\sum_{k=1}^K \pi_k f_{X|Y}(x|k)$, which is not necessarily the true marginal distribution. If $H_0$ is false, there exists at least one pair $(x_0,g)$ such that, without loss of generality, $f_{X|Y}(x_0|g)>f_X^*(x_0)$. Assume (for the moment) that $\mbox{pr}(C>x_0|Y=k)>0$ for all $k=1,\ldots,K$ and let $d(x,x_0)=|x-x_0|$. Since $f_{X|Y}(\cdot |g)$ and $f_X^*(\cdot)$ are continuous, there exist a radius, $R>0$, and a set $$ \mathcal{B}=\{x: d(x,x_0)<R \} $$ with positive probability, such that for $x \in \mathcal{B}$, $f_{X|Y}(x | g)>f_X^*(x)$ and $\mbox{pr}(C>x+3R|Y=k)>0$ for all $k=1,\ldots,K$. The last condition guarantees that, with positive probability, $S_P(i,j)$ with $n_{ij}=n$ (namely, the table consists of all the groups) is observed, where $Y_i=g$ and $X_i, X_j \in \mathcal{B}$. Moreover, $$\min_{\mathcal{B}} \{f_{X|Y}(x | g) - f_X^*(x)\} > 0 \, ; $$ denote this minimum by $c$.
Put \begin{eqnarray*} \mathcal{B}_1 &=& \{ x : d(x, x_0)< R/8 \}\, ,\\ \mathcal{B}_2 &=& \{ x : 3R/8 < d(x, x_0) < R/2 \}\, , \end{eqnarray*} and let $p_1=\mbox{pr} (X \in \mathcal{B}_1,Y=g)$, $p_2 = \mbox{pr}(X \in \mathcal{B}_2)$, $p_3 = \mbox{pr}(C>x_0+R,Y=g)$, and $p_4 = \min_{k=1,\ldots,K}\mbox{pr}(C>x_0+R,Y=k)$. Therefore, we expect to have at least $n^2 p_1 p_2 p_3 p_4$ pairs $(i,j)$ such that $\{X_i\in \mathcal{B}_1,Y_i=g, \Delta_i=1\}$ and $\{X_j\in \mathcal{B}_2, \Delta_j=1\}$. Consider such a pair. Stute and Wang (1993) showed that the KM estimator converges almost surely to the true survival function under random censorship. Therefore, uniformly almost surely,
\begin{eqnarray*} &&\lim_{n\to\infty} \left\{ \frac{A_{11}^*(i,j)}{n-2} - \pi_g \int_{\mathcal{B}_3}f_{X|Y}(x|Y=g)dx \right\} = 0 \, ,\\ &&\lim_{n\to\infty} \left\{ \frac{A_{1\cdot}^*(i,j)}{n-2} - \pi_g \right\} = 0 \, ,\\ &&\lim_{n\to\infty} \left\{ \frac{A_{\cdot1}^*(i,j)}{n-2} - \int_{\mathcal{B}_3}f_X^*(x)dx \right\}= 0 \, , \end{eqnarray*}
where $$\mathcal{B}_3=\{x: d(x,X_i)<d(X_i,X_j)\} \, . $$ Since $$S_P(i,j)=\sum_{m=1,2}\sum_{l=1,2}\frac{\{A_{ml}^*(i,j)-A_{m\cdot}^*(i,j)A_{\cdot l}^*(i,j)/(n-2)\}^2}{A_{m\cdot}^*(i,j)A_{\cdot l}^*(i,j)/(n-2)} \ ,$$ it is enough to look at the term with $l=m=1$ in $S_P(i,j)$, that is, $$S_{P_1}(i,j)=\frac{\{A_{11}^*(i,j)-A_{1\cdot}^*(i,j)A_{\cdot 1}^*(i,j)/(n-2)\}^2}{A_{1\cdot}^*(i,j)A_{\cdot 1}^*(i,j)/(n-2)} \ .$$ It follows that $S_P(i,j)\geq S_{P_1}(i,j)$ and hence that our test statistic satisfies $$ Q\geq\frac{1}{N}\sum_{k=1}^{K}\sum_{T_i\in \mathcal{A}_k}\sum_{j=1,j\neq i}^{n} S_{P_1}(i,j) \Delta_i \Delta_j I(T_j\le \tau_k) I(2T_i-T_j\le \tau_k) \equiv Q_1\, . $$
By Slutsky's theorem and the continuous mapping theorem, in probability \begin{eqnarray*} \lim_{n\to\infty} \frac{S_{P_1}(i,j)}{n-2} &=& \lim_{n\to\infty} \frac{1}{n-2} \frac{\{A_{11}^*(i,j)-A_{1\cdot}^*(i,j)A_{\cdot 1}^*(i,j)/(n-2)\}^2}{A_{1\cdot}^*(i,j)A_{\cdot 1}^*(i,j)/(n-2)} \\ &=& \frac{\pi_g\big[\int_{\mathcal{B}_3}\{f_{X|Y}(x|Y=g) - f_X^*(x)\}dx \big]^2}{\int_{\mathcal{B}_3}f_X^*(x)dx} \, , \end{eqnarray*} and we show that this limit can be bounded from below by a positive constant that does not depend on $(i,j)$. Indeed, it can be shown that $\mathcal{B}_3 \subseteq \mathcal{B}$ and $\mathcal{B}_1 \subseteq \mathcal{B}_3$, by the triangle inequality (see Heller et al. for details). Therefore, $$ \pi_g\Big[\int_{\mathcal{B}_3}\{f_{X|Y}(x|Y=g) - f_X^*(x)\}dx \Big]^2 \geq \pi_g \Big\{c \int_{\mathcal{B}_3}dx\Big\}^2 \geq \pi_g \Big\{c \int_{\mathcal{B}_1}dx\Big\}^2 \equiv c' \, . $$ Since $\int_{\mathcal{B}_3}f_X^*(x)dx \leq 1$, it follows that $S_{P_1}(i,j)/(n-2)$ converges in probability to a positive constant greater than $c'>0$. Therefore, $S_{P_1}(i,j)>(n-2)c'$ with probability going to $1$ as $n\to\infty$. As a result, $Q_1>N^{-1} n^2(n-2)c'p_1p_2p_3p_4$ with probability going to $1$ as $n\to\infty$. Therefore, $$ \lim_{n\to\infty}\mbox{pr} \big\{Q - N^{-1}n^2(n-2)c'p_1p_2p_3p_4 > 0 \big\}=1 \ . $$ Since $N<n^2$, there exists a constant $\lambda>0$ such that $\lim_{n\to\infty}\mbox{pr} \big(Q - \lambda n > 0 \big) =1$. Under the null hypothesis, for a large enough sample size $n$, $S_P(i, j)$ follows the $\chi^2$ distribution with one degree of freedom. Therefore, the null expectation of $Q$, which is an average of $N$ different $S_P$'s, is approximately 1 and the null variance is bounded above by 2.
Consequently, $\lim_{n\to\infty}(\lambda n-q_{1-\alpha})>0$ and $\lim_{n\to\infty} \mbox{pr} \big(Q - q_{1-\alpha} > 0 \big)=1$. Finally, for simplicity of presentation, we required that $\mbox{pr}(C>x_0|Y=k)>0$ for all $k=1,\ldots,K$. However, this is only required for the two different groups, $g$ and $m$. The proof for the discrete failure time variable $X$ is done in much the same way.\\
\begin{center} \textbf{Description of the tests included in the simulation study} \end{center}
Let $$ G^{\rho,\gamma} = \sqrt{\frac{n_1+n_2}{n_1n_2}} \int_0^{\infty} \left\{ \widehat{S}(t-) \right\}^{\rho} \left\{ 1- \widehat{S}(t-) \right\}^{\gamma} \frac{\overline{Y}_1(t)\overline{Y}_2(t)}{\overline{Y}_1(t)+\overline{Y}_2(t)} \left\{ \frac{d \overline{N}_1(t)}{\overline{Y}_1(t)} - \frac{d \overline{N}_2(t)}{\overline{Y}_2(t)}\right\} $$ where $\overline{N}_j(t)$ is the number of failures before or at time $t$ in group $j$, $\overline{Y}_j(t)$ is the number at risk at time $t$ in group $j$, $j=1,2$, and $\widehat{S}$ is the Kaplan-Meier estimator based on the pooled data. Also let \begin{eqnarray} \widehat{\sigma}_{lm} &=& {\frac{n_1+n_2}{n_1n_2}} \int_0^{\infty} \left\{ \widehat{S}(t-) \right\}^{\rho_l} \left\{ 1- \widehat{S}(t-) \right\}^{\gamma_l} \left\{ \widehat{S}(t-) \right\}^{\rho_m} \left\{ 1- \widehat{S}(t-) \right\}^{\gamma_m} \nonumber \\ &&\frac{\overline{Y}_1(t)\overline{Y}_2(t)}{\overline{Y}_1(t)+\overline{Y}_2(t)} \left\{ 1 - \frac{\Delta \overline{N}_1(t) + \Delta \overline{N}_2(t) -1}{\overline{Y}_1(t)+\overline{Y}_2(t)} \right\} \frac{d\{\overline{N}_1(t) + \overline{N}_2(t)\}}{\overline{Y}_1(t)+\overline{Y}_2(t)}\nonumber \end{eqnarray} where $\Delta \overline{N}_j(t) = \overline{N}_j(t) - \overline{N}_j(t-)$, $j=1,2$, $(\rho_1,\gamma_1)=(0,0)$ (the logrank test statistic), $(\rho_2,\gamma_2)=(1,0)$, $(\rho_3,\gamma_3)=(0,1)$, and $(\rho_4,\gamma_4)=(1,1)$. Then, four standardized statistics are defined by $$ Z_k = G^{\rho_k,\gamma_k} / \sqrt{\widehat{\sigma}_{kk}} \;\;\; , k=1,\ldots,4 \,. $$ The test statistic of Lee (2007) is defined by $$\max\{|Z_2|,|Z_3|\}$$ and the MaxCombo test statistic is $$\max\{|Z_1|,|Z_2|,|Z_3|,|Z_4|\}.$$ The $p$-values can be easily calculated based on the asymptotic multivariate normal distribution of $$(G^{\rho_1,\gamma_1},G^{\rho_2,\gamma_2},G^{\rho_3,\gamma_3},G^{\rho_4,\gamma_4})$$ under the null hypothesis, i.e., with mean zero and a covariance matrix that can be consistently estimated by $\widehat{\sigma}_{lm}$. The R package mvtnorm was used. The Pepe-Fleming weighted KM test (Pepe and Fleming, 1989) uses the following weight function $$ \frac{n \widehat{G}_1(t)\widehat{G}_2(t)}{n_1 \widehat{G}_1(t) + n_2 \widehat{G}_2(t)} $$ where $\widehat{G}_j$ is the KM estimator of the time to censoring in group $j$, $j=1,2$. Details of the variance estimator can be found in Pepe and Fleming (1989). The Peto-Peto weighted KM test (Peto and Peto, 1972) uses a weight function that is very close to the pooled KM estimator.
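As an illustration of how the $G^{\rho,\gamma}$ statistics, the covariance estimates $\widehat{\sigma}_{lm}$, the standardized $Z_k$, and the MaxCombo $p$-value can be computed, the following is a minimal sketch in Python (the simulations in this paper were run in R with the mvtnorm package, as noted above). The function names \texttt{weighted\_logrank\_stats} and \texttt{maxcombo\_pvalue} are purely illustrative, and the null rectangle probability of the four-dimensional normal is approximated here by Monte Carlo rather than by the numerical routines of mvtnorm.
\begin{verbatim}
# Minimal sketch (not the authors' code): Fleming-Harrington weighted
# logrank statistics G^{rho,gamma}, their covariance estimate, and a
# Monte-Carlo MaxCombo p-value.  Requires only numpy.
import numpy as np

def weighted_logrank_stats(time1, event1, time2, event2,
                           rho_gamma=((0, 0), (1, 0), (0, 1), (1, 1))):
    """Return (G, Sigma): the G^{rho,gamma} statistics and their
    estimated covariance matrix over the given (rho, gamma) pairs."""
    n1, n2 = len(time1), len(time2)
    time = np.concatenate([time1, time2])
    event = np.concatenate([event1, event2]).astype(bool)
    group = np.concatenate([np.zeros(n1, int), np.ones(n2, int)])

    t_ev = np.unique(time[event])        # distinct pooled event times
    K = len(rho_gamma)
    G = np.zeros(K)
    Sigma = np.zeros((K, K))
    S_minus = 1.0                        # pooled Kaplan-Meier, S-hat(t-)
    scale = (n1 + n2) / (n1 * n2)

    for t in t_ev:
        at_risk = time >= t
        Y1 = np.sum(at_risk & (group == 0))
        Y2 = np.sum(at_risk & (group == 1))
        dN1 = np.sum((time == t) & event & (group == 0))
        dN2 = np.sum((time == t) & event & (group == 1))
        Y, dN = Y1 + Y2, dN1 + dN2
        if Y1 == 0 or Y2 == 0:
            break                        # one group has left the risk set
        w = np.array([S_minus ** r * (1 - S_minus) ** g for r, g in rho_gamma])
        core = Y1 * Y2 / Y
        G += np.sqrt(scale) * w * core * (dN1 / Y1 - dN2 / Y2)
        Sigma += scale * np.outer(w, w) * core * (1 - (dN - 1) / Y) * dN / Y
        S_minus *= 1 - dN / Y            # update pooled KM after time t
    return G, Sigma

def maxcombo_pvalue(time1, event1, time2, event2, n_mc=200_000, seed=1):
    """MaxCombo statistic max_k |Z_k| and its p-value, with the null
    distribution approximated by Monte Carlo from the correlation matrix."""
    G, Sigma = weighted_logrank_stats(time1, event1, time2, event2)
    sd = np.sqrt(np.diag(Sigma))
    Z = G / sd
    R = Sigma / np.outer(sd, sd)
    m = np.max(np.abs(Z))
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(np.zeros(len(Z)), R, size=n_mc)
    return Z, np.mean(np.max(np.abs(draws), axis=1) >= m)
\end{verbatim}
Under this sketch, the statistic of Lee (2007) is obtained from the same output by taking $\max\{|Z_2|,|Z_3|\}$ and restricting the estimated correlation matrix to the corresponding $2\times 2$ block when computing its $p$-value.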
\footnotesize {\footnotesize \begin{longtable}[c]{|l|l|l|l|} \caption{$K$-sample scenarios with $K=3,4,5$: under the null hypothesis and based on scenarios D and J-2 from the main text \label{K_sample_design}}\\ \hline \multicolumn{1}{|c|}{Scenario} &\multicolumn{1}{c|}{Failure time} & \multicolumn{2}{c|}{Censroing distribution} \\ \hline \begin{tabular}[c]{@{}l@{}} Null \\ $K=3$ \end{tabular} & \begin{tabular}[c]{@{}l@{}}$F_1(t) = F_2(t) = F_3(t) =$\\ $log$-$Logistic(1,1)$ \end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.1,0)$ \\ \\ 40\% and 55\%:\\ $C_1,C_2\sim\min \{Exp(0.85),U(0,10)\}$ \\ $C_3 \sim U(0,10)$ \\ \\ 27\% and 55\%: \\ $C_1 \sim \min \{Exp(0.85),U(0,10)\}$\\ $C_2 \sim \min \{Exp(0.25),U(0,10)\}$ \\ $C_3\sim U(0,10)$ \end{tabular}} \\ \hline \begin{tabular}[c]{@{}l@{}} Null \\ $K=4$ \end{tabular} & \begin{tabular}[c]{@{}l@{}}$F_1(t) = F_2(t) = F_3(t) = F_4(t) = $ \\ $log$-$Logistic(1,1)$ \end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.1,0)$ \\ \\ 40\% and 55\%:\\ $C_1,C_2\sim\min \{Exp(0.85),U(0,10)\}$ \\ $C_3,C_4 \sim U(0,10)$ \\ \\ 27\% and 55\%: \\ $C_1 \sim \min \{Exp(0.85),U(0,10)\}$\\ $C_2 \sim \min \{Exp(0.25),U(0,10)\}$ \\ $C_3\sim U(0,10)$\\ $C_4 \sim log-Normal(1.5,0.5)$ \end{tabular}} \\ \hline \begin{tabular}[c]{@{}l@{}} Null \\ $K=5$ \end{tabular} & \begin{tabular}[c]{@{}l@{}}$F_1(t) = F_2(t) = F_3(t) = F_4(t) = F_5(t) =$ \\ $log$-$Logistic(1,1)$ \end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim log$-$Normal(\alpha,0.5)$\\ $\alpha=(1.1,0)$ \\ \\ 40\% and 55\%:\\ $C_1,C_2\sim\min \{Exp(0.85),U(0,10)\}$ \\ $C_3,C_4,C_5 \sim U(0,10)$ \\ \\ 27\% and 55\%: \\ $C_1 \sim \min \{Exp(0.85),U(0,10)\}$\\ $C_2 \sim \min \{Exp(0.25),U(0,10)\}$ \\ $C_3\sim U(0,10)$\\ $C_4 \sim log-Normal(1.5,0.5)$\\ $C_5\sim Exp (1.5)$ \end{tabular}} \\ \hline D & \begin{tabular}[c]{@{}l@{}}$F_1(t) = \left\{ \begin{array}{ll} Exp(0.5) & \quad t \le 0.44 \\ Exp(0.1) & \quad 0.44<t \le 1.05 \\ Exp(1.5) & \quad 1.05<t \le 1.47 \\ Exp(1) & \quad t>1.47 \end{array} \right.$ \\ \\ $ F_2(t)=$\\ $F_3(t) = \left\{ \begin{array}{ll} Exp(1.5) & \quad t \le 0.38 \\ Exp(0.1) & \quad 0.38<t \le 1.02 \\ Exp(0.5) & \quad 1.02<t \le 1.47 \\ Exp(1) & \quad t>1.47 \end{array} \right.$ \end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim U(\alpha,\beta)$\\ $(\alpha,\beta)=(1.1,3), (0.1,2.1)$ \\ \\ 40\% and 55\%:\\ $C_1,C_2\sim\min \{Exp(0.5),U(0.5,3.5)\}$ \\ $C_3\sim U(0.5,3.5)$ \\ \\ 27\% and 55\%: \\ $C_1 \sim \min \{Exp(0.3),U(0.5,3.5)\}$\\ $C_2 \sim \min \{Exp(0.5),U(0.5,3.5)\}$ \\ $C_3\sim U(0.5,3.5)$ \end{tabular}} \\ \hline J-2 & \begin{tabular}[c]{@{}l@{}}$F_1(t)= Exp (1)$ \\ \\ $ F_2(t)=$\\ $F_3(t)= \left\{ \begin{array}{ll} Exp(1) & \quad t \le 0.1 \\ Exp(1.7) & \quad 0.1<t \le 0.45 \\ Exp(0.5) & \quad t>0.45 \end{array} \right.$ \end{tabular} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}equal:\\ $C \sim Exp(\lambda)$\\ $\lambda=(0.3,1)$ \\ \\ 40\% and 55\%:\\ $C_1,C_2\sim\min \{Exp(0.9),U(0,4)\}$ \\ $C_3 \sim U(0,4)$ \\ \\ 27\% and 55\%: \\ $C_1 \sim \min \{Exp(0.9),U(0,4)\}$\\ $C_2 \sim \min \{Exp(0.5),U(0,4)\}$ \\ $C_3\sim U(0,4)$ \end{tabular}} \\ \hline \end{longtable} } \begin{figure} \caption{Empirical power of the tests under the null for $K=3,4,5$} \label{K_null_results} \end{figure} \begin{figure} \caption{Empirical power under the alternative for $K=3$} \label{K_alternative_results} 
\end{figure} \begin{figure} \caption{Empirical power under the null of the 2-sample KONP tests, logrank and the $Cau$ robust test} \label{K_alternative_results} \end{figure} \begin{figure} \caption{Empirical power of the 2-sample KONP tests, logrank and the $Cau$ robust test} \label{K_alternative_results} \end{figure} \begin{figure} \caption{Empirical power of the 2-sample KONP tests, logrank and the $Cau$ robust test} \label{K_alternative_results} \end{figure} \spacingset{1} \footnotesize \begin{longtable}[c]{ccccccccccc} \caption{Empirical power of all the $2$-sample scenarios: KONP-P and KONP-LR are our two proposed tests; YP-1 and YP-2 are Yang-Prentice tests with group 1 as control and 2 as treatment, and vice verse; YP-Inv is Yang--Prentice invariant test; LR - the logrank test; PP - the Peto--Peto test; and PF - the Pepe--Fleming test. In each line of the table, the highest power \textbf{among the invariant} tests is in bold. Evidently, under the non-proportional hazards scenarios (A through K-3) usually KONP-P and KONP-LR are with the highest power. Under proportional hazards (or close to, L through Q) usually the logrank test is with the highest power. } \label{2_sample_table} \\ \hline n & Scenario & Censoring & KONP-P & KONP-LR & YP-1 & YP-2 & YP-Inv & LR & PP & PF \\ \hline 100 & Null & 25\% and 25\%& 0.047 & 0.046 & 0.064 & 0.066 & 0.052 & 0.054 & 0.051 & 0.047 \\ 100 & Null & 50\% and 50\% & 0.049 & 0.049 & 0.072 & 0.074 & 0.057 & 0.056 & 0.056 & 0.051 \\ 100 & Null & 27\% and 55\% & 0.041 & 0.038 & 0.061 & 0.066 & 0.050 & 0.048 & 0.048 & 0.041 \\ 100 & Null & 40\% and 55\% & 0.046 & 0.044 & 0.060 & 0.061 & 0.052 & 0.049 & 0.048 & 0.043 \\ \hline 100 & A & 25\% and 25\%& 0.492 & 0.492 & 0.513 & {0.556} & \textbf{0.496} & 0.479 & 0.320 & 0.337 \\ 100 & A & 50\% and 50\%& \textbf{0.235} & \textbf{0.235} & {0.237} & 0.295 & 0.217 & 0.211 & 0.147 & 0.169 \\ 100 & A & 27\% and 55\% & \textbf{0.342} & 0.337 & 0.299 & 0.338 & 0.271 & 0.280 & 0.178 & 0.259 \\ 100 & A & 40\% and 55\% & \textbf{0.309} & 0.305 & 0.273 & 0.319 & 0.235 & 0.255 & 0.167 & 0.231 \\ \hline 100 & B & 25\% and 25\%& \textbf{0.635} & 0.647 & {0.680} & 0.258 & 0.634 & 0.102 & 0.278 & 0.192 \\ 100 & B & 50\% and 50\%& 0.582 & 0.597 & {0.755} & 0.438 & \textbf{0.688} & 0.274 & 0.459 & 0.589 \\ 100 & B & 27\% and 55\% & 0.482 & 0.489 & {0.705} & 0.311 & \textbf{0.636} & 0.156 & 0.354 & 0.203 \\ 100 & B & 40\% and 55\% & 0.455 & 0.464 & {0.710} & 0.355 & \textbf{0.632} & 0.197 & 0.399 & 0.229 \\ \hline 100 & C & 25\% and 25\%& 0.866 & \textbf{0.868} & 0.615 & 0.444 & 0.564 & 0.187 & 0.525 & 0.457 \\ 100 & C & 50\% and 50\%& \textbf{0.743} & 0.741 & 0.669 & 0.593 & 0.604 & 0.431 & 0.639 & 0.601 \\ 100 & C & 27\% and 55\% &\textbf{0.710} & 0.702 & 0.630 & 0.517 & 0.557 & 0.326 & 0.597 & 0.386 \\ 100 & C & 40\% and 55\% & \textbf{0.682} & 0.673 & 0.670 & 0.601 & 0.609 & 0.437 & 0.655 & 0.504 \\ \hline 100 & D & 25\% and 25\%& 0.783 & \textbf{0.785} & 0.520 & 0.319 & 0.462 & 0.145 & 0.383 & 0.451 \\ 100 & D & 50\% and 50\%& \textbf{0.696} & 0.692 & 0.628 & 0.552 & 0.565 & 0.404 & 0.591 & 0.633 \\ 100 & D & 27\% and 55\% & \textbf{0.646} & 0.629 & 0.581 & 0.448 & 0.521 & 0.295 & 0.506 & 0.475 \\ 100 & D & 40\% and 55\% & \textbf{0.639} & 0.630 & 0.624 & 0.515 & 0.549 & 0.366 & 0.566 & 0.540 \\ \hline 100 & E & 25\% and 25\%& 0.529 & 0.526 & 0.578 & {0.600} & 0.538 & \textbf{0.538} & 0.345 & 0.332 \\ 100 & E & 50\% and 50\%& \textbf{0.273} & 0.269 & 0.271 & {0.314 }& 0.226 & 0.244 & 0.160 & 0.150 \\ 100 & E & 27\% and 55\% 
& \textbf{0.406} & 0.401 & 0.409 & {0.429} & 0.354 & 0.376 & 0.231 & 0.321 \\ 100 & E & 40\% and 55\% & 0.336 & \textbf{0.338 }& 0.331 & {0.368} & 0.267 & 0.296 & 0.189 & 0.245 \\ \hline 100 & F & 25\% and 25\%& 0.247 & 0.250 & 0.156 & {0.320} & \textbf{0.264} & 0.053 & 0.090 & 0.044 \\ 100 & F & 50\% and 50\%& 0.190 & 0.193 & 0.158 & {0.284} & \textbf{0.226} & 0.114 & 0.175 & 0.173 \\ 100 & F & 27\% and 55\% & 0.148 & 0.149 & 0.138 & {0.259} & \textbf{0.199} & 0.084 & 0.152 & 0.079 \\ 100 & F & 40\% and 55\% & 0.160 & 0.160 & 0.149 & {0.282} & \textbf{0.212} & 0.102 & 0.177 & 0.095 \\ \hline 100 & G & 25\% and 25\%& 0.537 & 0.536 & 0.614 & 0.565 & 0.549 & 0.490 & \textbf{0.622} & 0.547 \\ 100 & G & 50\% and 50\%& 0.589 & 0.584 & {0.674} & 0.641 & 0.609 & 0.621 & \textbf{0.662} & 0.655 \\ 100 & G & 27\% and 55\% & 0.485 & 0.478 & {0.604} & 0.575 & 0.548 & 0.520 & \textbf{0.601} & 0.521 \\ 100 & G & 40\% and 55\% & 0.468 & 0.461 & {0.599} & {0.559} & 0.531 & 0.530 & \textbf{0.599} & 0.515 \\ \hline 100 & H & 25\% and 25\%& \textbf{0.548} & 0.548 & 0.452 & 0.403 & 0.394 & 0.274 & 0.480 & 0.404 \\ 100 & H & 50\% and 50\%& 0.583 & 0.581 & {0.649} & 0.621 & 0.587 & {0.625} & \textbf{0.629} & 0.596 \\ 100 & H & 27\% and 55\% & 0.429 & 0.420 & 0.440 & 0.410 & 0.385 & 0.328 & \textbf{0.461 }& 0.323 \\ 100 & H & 40\% and 55\% & 0.439 & 0.436 & 0.461 & 0.421 & 0.389 & 0.348 & \textbf{0.486 }& 0.346 \\ \hline 100 & I-1 & 25\% and 25\%& 0.190 & \textbf{0.194} & 0.193 & 0.132 & 0.144 & 0.061 & 0.103 & 0.056 \\ 100 & I-1 & 50\% and 50\%& 0.110 & 0.109 & 0.178 & 0.108 & 0.131 & 0.079 & \textbf{0.123} & 0.074 \\ 100 & I-1 & 27\% and 55\% & 0.122 & 0.123 & {0.177} & 0.117 & \textbf{0.137} & 0.063 & 0.114 & 0.065 \\ 100 & I-1 & 40\% and 55\% & 0.112 & 0.113 & {0.186} & 0.116 & \textbf{0.138} & 0.072 & 0.131 & 0.069 \\ \hline 100 & I-2 & 25\% and 25\%& 0.256 & 0.253 & 0.249 & {0.287 }& 0.235 & 0.177 & \textbf{0.285} & 0.176 \\ 100 & I-2 & 50\% and 50\%& 0.182 & 0.177 & 0.247 & {0.285} & 0.238 & 0.206 & \textbf{0.272} & 0.203 \\ 100 & I-2 & 27\% and 55\% & 0.180 & 0.173 & 0.265 & {0.315 }& 0.248 & 0.242 &\textbf{ 0.293} & 0.230 \\ 100 & I-2 & 40\% and 55\% & 0.177 & 0.169 & 0.257 & {0.305} & 0.239 & 0.227 & \textbf{0.281 }& 0.222 \\ \hline 100 & I-3 & 25\% and 25\%& 0.196 & 0.197 & {0.232} & 0.179 & 0.197 & 0.122 & \textbf{0.215} & 0.154 \\ 100 & I-3 & 50\% and 50\%& 0.250 & 0.250 & {0.320} & 0.272 & 0.266 & 0.252 & \textbf{0.288} & 0.287 \\ 100 & I-3 & 27\% and 55\% & 0.166 & 0.158 & {0.243} & 0.193 & 0.199 & 0.137 & \textbf{0.221} & 0.114 \\ 100 & I-3 & 40\% and 55\% & 0.153 & 0.150 & {0.236} & 0.183 & 0.187 & 0.145 & \textbf{0.218} & 0.126 \\ \hline 100 & J-1 & 25\% and 25\%& \textbf{0.388} & \textbf{0.388} & 0.271 & 0.160 & 0.211 & 0.055 & 0.113 & 0.048 \\ 100 & J-1 & 50\% and 50\%& \textbf{0.244} & \textbf{0.244} & {0.266} & 0.152 & 0.205 & 0.082 & 0.162 & 0.076 \\ 100 & J-1 & 27\% and 55\% & \textbf{0.264} & 0.261 & {0.278} & 0.152 & 0.220 & 0.073 & 0.155 & 0.063 \\ 100 & J-1 & 40\% and 55\% & \textbf{0.239} & \textbf{0.239} & {0.261} & 0.160 & 0.193 & 0.094 & 0.178 & 0.083 \\ \hline 100 & J-2 & 25\% and 25\%& 0.322 & \textbf{0.325 }& 0.173 & 0.153 & 0.139 & 0.069 & 0.076 & 0.067 \\ 100 & J-2 & 50\% and 50\%& \textbf{0.172} & 0.172 & 0.123 & 0.099 & 0.086 & 0.068 & 0.109 & 0.059 \\ 100 & J-2 & 27\% and 55\% & \textbf{0.192} & 0.189 & 0.113 & 0.088 & 0.083 & 0.049 & 0.093 & 0.041 \\ 100 & J-2 & 40\% and 55\% & 0.164 & \textbf{0.165} & 0.107 & 0.084 & 0.077 & 0.063 & 0.088 & 0.056 \\ \hline 100 & J-3 & 
25\% and 25\%& 0.758 & \textbf{0.759 }& 0.549 & 0.571 & 0.518 & 0.555 & 0.470 & 0.577 \\ 100 & J-3 & 50\% and 50\%& \textbf{0.535} & 0.526 & 0.499 & 0.472 & 0.417 & 0.441 & 0.326 & 0.421 \\ 100 & J-3 & 27\% and 55\% &\textbf{0.453} & 0.444 & {0.527} & 0.482 & 0.439 & 0.424 & 0.300 & 0.372 \\ 100 & J-3 & 40\% and 55\% & \textbf{0.460} & 0.455 & {0.510 }& 0.465 & 0.417 & 0.415 & 0.283 & 0.369 \\ \hline 100 & K-1 & 25\% and 25\%& {0.286} & \textbf{0.289} & 0.271 & 0.208 & 0.222 & 0.115 & 0.047 & 0.120 \\ 100 & K-1 & 50\% and 50\%& 0.123 & \textbf{0.126} & {0.177} & 0.087 & 0.114 & 0.056 & 0.054 & 0.050 \\ 100 & K-1 & 27\% and 55\% & 0.158 & \textbf{0.163} & {0.180} & 0.090 & 0.129 & 0.054 & 0.046 & 0.046 \\ 100 & K-1 & 40\% and 55\% & \textbf{0.140} & \textbf{0.140} & {0.183} & 0.087 & 0.123 & 0.053 & 0.064 & 0.056 \\ \hline 100 & K-2 & 25\% and 25\%& 0.449 & \textbf{0.459} & 0.401 & 0.189 & 0.325 & 0.058 & 0.136 & 0.053 \\ 100 & K-2 & 50\% and 50\%& 0.226 & 0.225 & {0.377} & 0.186 & \textbf{0.300} & 0.098 & 0.212 & 0.088 \\ 100 & K-2 & 27\% and 55\% & {0.294} & 0.290 & {0.394} & 0.180 & \textbf{0.308} & 0.087 & 0.199 & 0.068 \\ 100 & K-2 & 40\% and 55\% & 0.262 & 0.262 & {0.399} & 0.204 & \textbf{0.310} & 0.106 & 0.218 & 0.092 \\ \hline 100 & K-3 & 25\% and 25\%& 0.869 & \textbf{0.873} & 0.422 & 0.699 & 0.608 & 0.057 & 0.383 & 0.102 \\ 100 & K-3 & 50\% and 50\%& \textbf{0.572} & \textbf{0.572} & 0.427 & {0.631} & 0.544 & 0.175 & 0.497 & 0.281 \\ 100 & K-3 & 27\% and 55\% & 0.570 & \textbf{0.571} & 0.427 & 0.654 & 0.556 & 0.175 & 0.485 & 0.324 \\ 100 & K-3 & 40\% and 55\% & 0.542 & 0.542 & 0.414 & {0.635} & \textbf{0.548} & 0.195 & 0.474 & 0.311 \\ \hline 100 & L & 25\% and 25\%& 0.605 & 0.602 & {0.733} & 0.724 & \textbf{0.681} & 0.717 & 0.657 & 0.666 \\ 100 & L & 50\% and 50\%& 0.427 & 0.418 & {0.545} & 0.532 & 0.477 & \textbf{0.529} & 0.502 & 0.502 \\ 100 & L & 27\% and 55\% & 0.483 & 0.476 & {0.616} & 0.610 & 0.545 & \textbf{0.603} & 0.556 & 0.580 \\ 100 & L & 40\% and 55\% & 0.422 & 0.414 & {0.563} & 0.549 & 0.476 & \textbf{0.545} & 0.492 & 0.502 \\ \hline 100 & M & 25\% and 25\%& 0.280 & 0.283 & {0.491} & 0.464 & 0.404 & \textbf{0.458} & 0.390 & 0.413 \\ 100 & M & 50\% and 50\%& 0.172 & 0.174 & {0.373 }& 0.343 & 0.268 & \textbf{0.333} & 0.280 & 0.272 \\ 100 & M & 27\% and 55\% & 0.209 & 0.210 & {0.387} & 0.372 & 0.312 & \textbf{0.358} & 0.333 & 0.339 \\ 100 & M & 40\% and 55\% & 0.227 & 0.224 & {0.382} & 0.363 & 0.316 & \textbf{0.349} & 0.333 & 0.333 \\ \hline 100 & N & 25\% and 25\%& 0.473 & 0.475 & 0.589 & {0.593 }& 0.538 & \textbf{0.577} & 0.524 & 0.550 \\ 100 & N & 50\% and 50\%& 0.396 & 0.393 & 0.498 & {0.511} & 0.444 & \textbf{0.481} & 0.460 & 0.437 \\ 100 & N & 27\% and 55\% & 0.269 & 0.264 & 0.430 & {0.463} & 0.377 & \textbf{0.428} & 0.405 & 0.316 \\ 100 & N & 40\% and 55\% & 0.281 & 0.276 & 0.387 & {0.414} & 0.324 & \textbf{0.381} & 0.346 & 0.326 \\ \hline 100 & O & 25\% and 25\%& 0.968 & 0.968 & 0.989 & {0.990} & 0.984 & \textbf{0.989} & 0.976 & 0.985 \\ 100 & O & 50\% and 50\%& 0.904 & 0.900 & 0.940 & 0.941 & 0.922 & \textbf{0.944} & 0.929 & 0.909 \\ 100 & O & 27\% and 55\% & 0.781 & 0.772 & 0.921 & {0.931} & 0.894 & \textbf{0.925} & 0.908 & 0.866 \\ 100 & O & 40\% and 55\% & 0.790 & 0.782 & 0.909 & {0.917} & 0.870 & \textbf{0.908} & 0.892 & 0.877 \\ \hline 100 & P & 25\% and 25\%& 0.707 & 0.707 & 0.777 & 0.793 & 0.758 & 0.753 & \textbf{0.797} & 0.762 \\ 100 & P & 50\% and 50\%& 0.654 & 0.648 & 0.734 & {0.758} & 0.697 & 0.731 & \textbf{0.742} & 0.721 \\ 100 & P & 27\% and 55\% 
& 0.488 & 0.479 & 0.676 & {0.710} & 0.638 & 0.674 & \textbf{0.686} & 0.592 \\ 100 & P & 40\% and 55\% & 0.578 & 0.571 & 0.711 & {0.742} & 0.670 & 0.710 & \textbf{0.732} & 0.668 \\ \hline 100 & Q & 25\% and 25\%& 0.148 & 0.146 & 0.211 & {0.212} & 0.178 & \textbf{0.186} & 0.125 & 0.135 \\ 100 & Q & 50\% and 50\%& 0.078 & 0.077 & {0.119} & 0.110 & 0.088 & \textbf{0.099 }& 0.086 & 0.069 \\ 100 & Q & 27\% and 55\% & 0.116 & 0.119 & {0.158} & 0.144 & \textbf{0.123} & 0.121 & 0.097 & 0.105 \\ 100 & Q & 40\% and 55\% & 0.105 & 0.103 & {0.138} & 0.127 & 0.099 & \textbf{0.108} & 0.079 & 0.101 \\ \hline 200 & Null & 25\% and 25\%& 0.059 & 0.058 & 0.067 & 0.065 & 0.054 & 0.058 & 0.054 & 0.055 \\ 200 & Null & 50\% and 50\%& 0.050 & 0.050 & 0.061 & 0.059 & 0.056 & 0.053 & 0.052 & 0.052 \\ 200 & Null & 27\% and 55\% & 0.044 & 0.042 & 0.050 & 0.058 & 0.042 & 0.045 & 0.043 & 0.043 \\ 200 & Null & 40\% and 55\% & 0.043 & 0.041 & 0.052 & 0.052 & 0.048 & 0.045 & 0.048 & 0.046 \\ \hline 200 & A & 25\% and 25\%& 0.850 & \textbf{0.851} & 0.824 & 0.849 & 0.826 & 0.773 & 0.546 & 0.587 \\ 200 & A & 50\% and 50\%& \textbf{0.497} & 0.494 & 0.445 & {0.502} & 0.427 & 0.388 & 0.262 & 0.319 \\ 200 & A & 27\% and 55\% & \textbf{0.675} & 0.670 & 0.565 & 0.620 & 0.561 & 0.522 & 0.331 & 0.487 \\ 200 & A & 40\% and 55\% & \textbf{0.610} & 0.602 & 0.505 & 0.561 & 0.481 & 0.456 & 0.294 & 0.423 \\ \hline 200 & B & 25\% and 25\%& 0.939 & \textbf{0.942} & 0.896 & 0.428 & 0.886 & 0.150 & 0.471 & 0.348 \\ 200 & B & 50\% and 50\%& 0.919 & 0.925 & {0.945} & 0.702 & \textbf{0.933} & 0.444 & 0.737 & 0.891 \\ 200 & B & 27\% and 55\% & 0.858 & 0.861 & {0.929} & 0.592 & \textbf{0.918} & 0.286 & 0.659 & 0.391 \\ 200 & B & 40\% and 55\% & 0.820 & 0.821 & {0.916} & 0.604 & \textbf{0.901} & 0.314 & 0.658 & 0.405 \\ \hline 200 & C & 25\% and 25\%& \textbf{0.998} & \textbf{0.998 }& 0.885 & 0.750 & 0.867 & 0.345 & 0.818 & 0.768 \\ 200 & C & 50\% and 50\%& 0.977 & \textbf{0.978} & 0.920 & 0.885 & 0.902 & 0.711 & 0.917 & 0.899 \\ 200 & C & 27\% and 55\% & \textbf{0.979} & 0.975 & 0.901 & 0.833 & 0.877 & 0.586 & 0.889 & 0.690 \\ 200 & C & 40\% and 55\% & \textbf{0.950} & 0.946 & 0.906 & 0.866 & 0.888 & 0.694 & 0.908 & 0.768 \\ \hline 200 & D & 25\% and 25\%& \textbf{0.984} & \textbf{0.984} & 0.780 & 0.549 & 0.753 & 0.241 & 0.636 & 0.738 \\ 200 & D & 50\% and 50\%& \textbf{0.958} & \textbf{0.958} & 0.885 & 0.828 & 0.863 & 0.671 & 0.865 & 0.905 \\ 200 & D & 27\% and 55\% & \textbf{0.928} & 0.922 & 0.868 & 0.758 & 0.833 & 0.531 & 0.819 & 0.799 \\ 200 & D & 40\% and 55\% & \textbf{0.929} & 0.923 & 0.853 & 0.775 & 0.829 & 0.576 & 0.822 & 0.804 \\ \hline 200 & E & 25\% and 25\%& 0.859 & 0.859 & 0.875 & {0.882} & \textbf{0.866} & 0.831 & 0.616 & 0.628 \\ 200 & E & 50\% and 50\%& \textbf{0.552} & \textbf{0.552} & 0.522 & {0.567} & 0.495 & 0.470 & 0.313 & 0.310 \\ 200 & E & 27\% and 55\% & \textbf{0.736} & 0.734 & 0.724 & 0.744 & 0.689 & 0.668 & 0.421 & 0.600 \\ 200 & E & 40\% and 55\% & \textbf{0.600} & \textbf{0.600 }& 0.583 & {0.609} & 0.521 & 0.508 & 0.319 & 0.446 \\ \hline 200 & F & 25\% and 25\%& {0.523} & \textbf{0.525} & 0.268 & 0.465 & 0.428 & 0.065 & 0.128 & 0.057 \\ 200 & F & 50\% and 50\%& 0.350 & {0.351} & 0.255 & 0.385 & \textbf{0.356} & 0.175 & 0.288 & 0.300 \\ 200 & F & 27\% and 55\% & 0.289 & 0.288 & 0.233 & {0.399} & \textbf{0.364} & 0.124 & 0.276 & 0.117 \\ 200 & F & 40\% and 55\% & 0.306 & 0.307 & 0.250 & {0.409} & \textbf{0.373} & 0.132 & 0.281 & 0.123 \\ \hline 200 & G & 25\% and 25\%& 0.841 & 0.840 & {0.866} & 0.852 & {0.848} & 0.773 & 
\textbf{0.881} & 0.839 \\ 200 & G & 50\% and 50\%& 0.861 & 0.860 & {0.900} & 0.892 & 0.883 & 0.879 & 0.904 & \textbf{0.907 }\\ 200 & G & 27\% and 55\% & 0.801 & 0.796 & 0.876 & 0.864 & 0.860 & 0.800 & \textbf{0.893} & 0.819 \\ 200 & G & 40\% and 55\% & 0.750 & 0.748 & 0.857 & 0.838 & 0.834 & 0.807 & \textbf{0.866} & 0.805 \\ \hline 200 & H & 25\% and 25\%& 0.871 & \textbf{0.872} & 0.686 & 0.663 & 0.661 & 0.467 & 0.740 & 0.667 \\ 200 & H & 50\% and 50\%& 0.875 & 0.874 & {0.904} & 0.899 & 0.883 & 0.897 & \textbf{0.902} & 0.889 \\ 200 & H & 27\% and 55\% & 0.765 & 0.761 & 0.725 & 0.709 & 0.695 & 0.595 & \textbf{0.771} & 0.607 \\ 200 & H & 40\% and 55\% & 0.755 & 0.745 & 0.731 & 0.710 & 0.702 & 0.614 & \textbf{0.770} & 0.615 \\ \hline 200 & I-1 & 25\% and 25\%& 0.383 & \textbf{0.388} & 0.240 & 0.202 & 0.208 & 0.049 & 0.142 & 0.051 \\ 200 & I-1 & 50\% and 50\%& 0.203 & 0.203 & {0.242} & 0.170 & \textbf{0.209} & 0.107 & 0.206 & 0.094 \\ 200 & I-1 & 27\% and 55\% & \textbf{0.241} & 0.232 & 0.224 & 0.168 & 0.196 & 0.083 & 0.179 & 0.082 \\ 200 & I-1 & 40\% and 55\% & 0.211 & 0.210 & {0.238} & 0.175 & \textbf{0.212} & 0.093 & 0.209 & 0.091 \\ \hline 200 & I-2 & 25\% and 25\%& 0.489 & 0.487 & 0.405 & 0.414 & 0.383 & 0.250 & \textbf{0.498} & 0.292 \\ 200 & I-2 & 50\% and 50\%& 0.338 & 0.332 & 0.431 & 0.460 & 0.428 & 0.352 & \textbf{0.505} & 0.364 \\ 200 & I-2 & 27\% and 55\% & 0.323 & 0.313 & 0.471 & 0.493 & 0.455 & 0.387 & \textbf{0.526} & 0.430 \\ 200 & I-2 & 40\% and 55\% & 0.309 & 0.307 & 0.438 & 0.467 & 0.422 & 0.365 & \textbf{0.493} & 0.396 \\ \hline 200 & I-3 & 25\% and 25\%& \textbf{0.371} & 0.370 & 0.337 & 0.303 & 0.308 & 0.166 & 0.359 & 0.262 \\ 200 & I-3 & 50\% and 50\%& 0.440 & 0.436 & {0.496} & 0.459 & 0.456 & 0.421 & \textbf{0.481} & 0.508 \\ 200 & I-3 & 27\% and 55\% & 0.304 & 0.296 & 0.350 & 0.304 & 0.328 & 0.201 & \textbf{0.372} & 0.176 \\ 200 & I-3 & 40\% and 55\% & 0.285 & 0.282 & 0.357 & 0.309 & 0.330 & 0.216 & \textbf{0.367} & 0.198 \\ \hline 200 & J-1 & 25\% and 25\%& 0.753 & \textbf{0.754} & 0.429 & 0.268 & 0.371 & 0.049 & 0.172 & 0.050 \\ 200 & J-1 & 50\% and 50\%& \textbf{0.513} & 0.506 & 0.400 & 0.257 & 0.361 & 0.120 & 0.307 & 0.102 \\ 200 & J-1 & 27\% and 55\% & \textbf{0.506} & 0.491 & 0.399 & 0.233 & 0.351 & 0.091 & 0.273 & 0.085 \\ 200 & J-1 & 40\% and 55\% & \textbf{0.516} & 0.506 & 0.391 & 0.240 & 0.338 & 0.117 & 0.281 & 0.105 \\ \hline 200 & J-2 & 25\% and 25\%& 0.660 & \textbf{0.665} & 0.261 & 0.293 & 0.259 & 0.075 & 0.082 & 0.095 \\ 200 & J-2 & 50\% and 50\%& 0.334 & \textbf{0.335} & 0.141 & 0.127 & 0.117 & 0.075 & 0.142 & 0.065 \\ 200 & J-2 & 27\% and 55\% & \textbf{0.382} & 0.371 & 0.158 & 0.142 & 0.131 & 0.061 & 0.145 & 0.064 \\ 200 & J-2 & 40\% and 55\% & \textbf{0.328} & \textbf{0.328} & 0.134 & 0.118 & 0.113 & 0.070 & 0.134 & 0.069 \\ \hline 200 & J-3 & 25\% and 25\%& \textbf{0.982} & \textbf{0.982} & 0.782 & 0.825 & 0.795 & 0.821 & 0.768 & 0.859 \\ 200 & J-3 & 50\% and 50\%& \textbf{0.853} & 0.848 & 0.745 & 0.732 & 0.698 & 0.711 & 0.544 & 0.721 \\ 200 & J-3 & 27\% and 55\% & \textbf{0.793} & 0.767 & 0.784 & 0.780 & 0.735 & 0.725 & 0.542 & 0.687 \\ 200 & J-3 & 40\% and 55\% & \textbf{0.753} & 0.739 & 0.748 & 0.723 & 0.677 & 0.664 & 0.485 & 0.646 \\ \hline 200 & K-1 & 25\% and 25\%& 0.541 & \textbf{0.546} & 0.434 & 0.383 & 0.382 & 0.176 & 0.053 & 0.203 \\ 200 & K-1 & 50\% and 50\%& \textbf{0.251} & \textbf{0.251} & 0.224 & 0.127 & 0.174 & 0.058 & 0.054 & 0.062 \\ 200 & K-1 & 27\% and 55\% & \textbf{0.299} & \textbf{0.294} & 0.253 & 0.157 & 0.198 & 0.067 & 0.056 
& 0.064 \\ 200 & K-1 & 40\% and 55\% & 0.273 & \textbf{0.274} & 0.235 & 0.133 & 0.187 & 0.054 & 0.061 & 0.055 \\ \hline 200 & K-2 & 25\% and 25\%& 0.789 & \textbf{0.792} & 0.593 & 0.356 & 0.532 & 0.046 & 0.217 & 0.055 \\ 200 & K-2 & 50\% and 50\%& 0.539 & 0.536 & {0.606} & 0.336 & \textbf{0.550} & 0.154 & 0.385 & 0.123 \\ 200 & K-2 & 27\% and 55\% & \textbf{0.546} & 0.529 & 0.590 & 0.305 & 0.523 & 0.108 & 0.329 & 0.082 \\ 200 & K-2 & 40\% and 55\% & 0.533 & 0.526 & {0.606} & 0.337 & \textbf{0.550} & 0.132 & 0.385 & 0.103 \\ \hline 200 & K-3 & 25\% and 25\%& \textbf{0.996} & 0.995 & 0.895 & 0.947 & 0.924 & 0.059 & 0.654 & 0.158 \\ 200 & K-3 & 50\% and 50\%& \textbf{0.920} & 0.918 & 0.720 & 0.903 & 0.856 & 0.274 & 0.779 & 0.423 \\ 200 & K-3 & 27\% and 55\% & \textbf{0.927} & 0.924 & 0.748 & 0.905 & 0.860 & 0.240 & 0.777 & 0.544 \\ 200 & K-3 & 40\% and 55\% & \textbf{0.911} & 0.909 & 0.714 & 0.905 & 0.853 & 0.292 & 0.784 & 0.520 \\ \hline 200 & L & 25\% and 25\%& 0.904 & 0.904 & 0.954 & {0.952} & 0.942 & \textbf{0.950} & 0.921 & 0.933 \\ 200 & L & 50\% and 50\%& 0.747 & 0.740 & {0.844} & 0.843 & 0.815 & \textbf{0.836} & 0.809 & 0.820 \\ 200 & L & 27\% and 55\% & 0.790 & 0.791 & {0.890} & 0.888 & 0.862 & \textbf{0.884} & 0.848 & 0.877 \\ 200 & L & 40\% and 55\% & 0.742 & 0.737 & {0.856} & 0.851 & 0.815 & \textbf{0.847} & 0.807 & 0.834 \\ \hline 200 & M & 25\% and 25\%& 0.542 & 0.546 & {0.770} & 0.759 & 0.707 & \textbf{0.748} & 0.664 & 0.736 \\ 200 & M & 50\% and 50\%& 0.356 & 0.362 & {0.625} & 0.603 & 0.518 & \textbf{0.586} & 0.523 & 0.559 \\ 200 & M & 27\% and 55\% & 0.425 & 0.420 & {0.671} & 0.662 & 0.611 & \textbf{0.648} & 0.593 & 0.638 \\ 200 & M & 40\% and 55\% & 0.407 & 0.410 & {0.635} & 0.625 & 0.576 & \textbf{0.616} & 0.569 & 0.603 \\ \hline 200 & N & 25\% and 25\%& 0.776 & 0.777 & 0.884 & {0.888} & 0.864 & \textbf{0.880} & 0.834 & 0.855 \\ 200 & N & 50\% and 50\%& 0.663 & 0.663 & 0.755 & {0.758} & 0.731 & \textbf{0.755} & 0.721 & 0.705 \\ 200 & N & 27\% and 55\% & 0.493 & 0.482 & 0.716 & {0.724} & 0.662 & \textbf{0.705} & 0.667 & 0.652 \\ 200 & N & 40\% and 55\% & 0.494 & 0.478 & 0.664 & {0.678} & 0.610 & \textbf{0.659} & 0.619 & 0.635 \\ \hline 200 & O & 25\% and 25\%& 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 200 & O & 50\% and 50\%& 0.997 & 0.997 & 1.000 & 1.000 & 1.000 & 1.000 & 0.998 & 0.998 \\ 200 & O & 27\% and 55\% & 0.956 & 0.947 & 0.998 & 0.998 & 0.996 & \textbf{0.998} & 0.997 & 0.997 \\ 200 & O & 40\% and 55\% & 0.971 & 0.965 & 0.996 & 0.997 & 0.995 & \textbf{0.997} & 0.995 & 0.996 \\ \hline 200 & P & 25\% and 25\%& 0.950 & 0.949 & 0.975 & 0.974 & 0.971 & 0.966 & \textbf{0.982 }& 0.975 \\ 200 & P & 50\% and 50\%& 0.940 & 0.937 & 0.962 & 0.964 & 0.955 & 0.958 & \textbf{0.966} & 0.964 \\ 200 & P & 27\% and 55\% & 0.741 & 0.732 & 0.939 & {0.945} & 0.924 & 0.932 & \textbf{0.940} & 0.915 \\ 200 & P & 40\% and 55\% & 0.859 & 0.854 & 0.939 & 0.942 & 0.932 & 0.933 & \textbf{0.943} & 0.919 \\ \hline 200 & Q & 25\% and 25\%& 0.278 & 0.278 & 0.372 & {0.376} & \textbf{0.344} & 0.321 & 0.204 & 0.238 \\ 200 & Q & 50\% and 50\%& 0.111 & 0.111 & {0.157} & 0.150 & \textbf{0.138} & 0.134 & 0.113 & 0.095 \\ 200 & Q & 27\% and 55\% & 0.183 & 0.182 & 0.218 & {0.227} & \textbf{0.186 }& \textbf{0.186} & 0.131 & 0.178 \\ 200 & Q & 40\% and 55\% & 0.171 & 0.171 &{0.217} & 0.212 & 0.185 & \textbf{0.180} & 0.124 & 0.179 \\ \hline 300 & Null & 25\% and 25\%& 0.055 & 0.055 & 0.064 & 0.062 & 0.056 & 0.056 & 0.052 & 0.053 \\ 300 & Null & 50\% and 50\%& 0.057 & 0.056 & 0.067 & 0.066 
& 0.058 & 0.062 & 0.062 & 0.060 \\ 300 & Null & 27\% and 55\% & 0.042 & 0.043 & 0.052 & 0.058 & 0.051 & 0.047 & 0.049 & 0.041 \\ 300 & Null & 40\% and 55\% & 0.037 & 0.039 & 0.049 & 0.050 & 0.040 & 0.043 & 0.041 & 0.038 \\ \hline 300 & A & 25\% and 25\%& 0.966 & 0.966 & 0.956 & 0.962 & 0.957 & 0.923 & 0.728 & 0.770 \\ 300 & A & 50\% and 50\%& \textbf{0.686} & 0.683 & 0.592 & 0.661 & 0.581 & 0.519 & 0.320 & 0.426 \\ 300 & A & 27\% and 55\% & \textbf{0.866} & 0.862 & 0.736 & 0.770 & 0.723 & 0.691 & 0.461 & 0.658 \\ 300 & A & 40\% and 55\% & \textbf{0.802} & 0.798 & 0.675 & 0.729 & 0.665 & 0.616 & 0.392 & 0.590 \\ \hline 300 & B & 25\% and 25\%& \textbf{0.996} & \textbf{0.996} & 0.964 & 0.590 & 0.962 & 0.195 & 0.638 & 0.501 \\ 300 & B & 50\% and 50\%& 0.992 & \textbf{0.993} & 0.975 & 0.869 & 0.981 & 0.613 & 0.887 & 0.974 \\ 300 & B & 27\% and 55\% & \textbf{0.973} & \textbf{0.973} & 0.974 & 0.728 & 0.971 & 0.377 & 0.797 & 0.514 \\ 300 & B & 40\% and 55\% & 0.975 & 0.976 & {0.982} & 0.786 & \textbf{0.980} & 0.431 & 0.834 & 0.563 \\ \hline 300 & C & 25\% and 25\%& \textbf{1.000} & 1\textbf{.000} & 0.971 & 0.919 & 0.971 & 0.476 & 0.942 & 0.919 \\ 300 & C & 50\% and 50\%& \textbf{0.999} & \textbf{0.999} & 0.981 & 0.968 & 0.976 & 0.856 & 0.983 & 0.976 \\ 300 & C & 27\% and 55\% & \textbf{1.000 }& \textbf{1.000} & 0.972 & 0.945 & 0.969 & 0.751 & 0.967 & 0.838 \\ 300 & C & 40\% and 55\% & \textbf{0.993} & \textbf{0.992} & 0.975 & 0.962 & 0.969 & 0.849 & 0.978 & 0.907 \\ \hline 300 & D & 25\% and 25\%& \textbf{1.000} & \textbf{1.000} & 0.904 & 0.715 & 0.896 & 0.330 & 0.807 & 0.895 \\ 300 & D & 50\% and 50\%& \textbf{0.995} & \textbf{0.995} & 0.968 & 0.949 & 0.959 & 0.822 & 0.962 & 0.978 \\ 300 & D & 27\% and 55\% & \textbf{0.991} & 0.990 & 0.950 & 0.899 & 0.938 & 0.672 & 0.933 & 0.918 \\ 300 & D & 40\% and 55\% & \textbf{0.992} & 0.991 & 0.964 & 0.928 & 0.947 & 0.760 & 0.947 & 0.938 \\ \hline 300 & E & 25\% and 25\%& \textbf{0.960} & 0.959 & 0.961 & {0.966} & 0.962 & 0.941 & 0.786 & 0.795 \\ 300 & E & 50\% and 50\%& \textbf{0.729} & 0.729 & 0.670 & 0.707 & 0.649 & 0.603 & 0.412 & 0.414 \\ 300 & E & 27\% and 55\% & \textbf{0.894} & 0.893 & 0.882 & 0.893 & 0.847 & 0.844 & 0.571 & 0.785 \\ 300 & E & 40\% and 55\% & \textbf{0.789} & 0.787 & 0.780 & {0.793} & 0.716 & 0.704 & 0.444 & 0.628 \\ \hline 300 & F & 25\% and 25\%& 0.745 &\textbf{ 0.746} & 0.362 & 0.595 & 0.565 & 0.059 & 0.153 & 0.050 \\ 300 & F & 50\% and 50\%& \textbf{0.504} & 0.503 & 0.354 & 0.498 & 0.481 & 0.236 & 0.398 & 0.417 \\ 300 & F & 27\% and 55\% & 0.468 & 0.462 & 0.329 & {0.499} & \textbf{0.472} & 0.159 & 0.378 & 0.158 \\ 300 & F & 40\% and 55\% & \textbf{0.469} & 0.465 & 0.319 & {0.492} & 0.448 & 0.166 & 0.366 & 0.160 \\ \hline 300 & G & 25\% and 25\%& 0.954 & 0.953 & 0.962 & 0.958 & 0.957 & 0.905 & \textbf{0.976} & 0.951 \\ 300 & G & 50\% and 50\%& 0.964 & 0.964 & 0.977 & 0.976 & 0.974 & 0.968 & 0.980 & \textbf{0.982} \\ 300 & G & 27\% and 55\% & 0.934 & 0.933 & 0.964 & 0.960 & 0.958 & 0.924 & \textbf{0.973} & 0.938 \\ 300 & G & 40\% and 55\% & 0.909 & 0.905 &{0.964} & 0.958 & 0.956 & 0.939 & \textbf{0.970} & 0.937 \\ \hline 300 & H & 25\% and 25\%& \textbf{0.969} & \textbf{0.969} & 0.845 & 0.844 & 0.837 & 0.620 & 0.898 & 0.837 \\ 300 & H & 50\% and 50\%& 0.968 & 0.966 & 0.977 & 0.976 & 0.975 & 0.976 & \textbf{0.978} & 0.976 \\ 300 & H & 27\% and 55\% & \textbf{0.929} & 0.924 & 0.870 & 0.864 & 0.856 & 0.770 & 0.910 & 0.786 \\ 300 & H & 40\% and 55\% & \textbf{0.916} & 0.912 & 0.865 & 0.857 & 0.843 & 0.767 & 0.904 & 0.776 \\ \hline 
300 & I-1 & 25\% and 25\%& {0.602} & \textbf{0.606} & 0.325 & 0.300 & 0.305 & 0.050 & 0.193 & 0.065 \\ 300 & I-1 & 50\% and 50\%& \textbf{0.311} & 0.307 & 0.280 & 0.223 & 0.249 & 0.108 & 0.265 & 0.099 \\ 300 & I-1 & 27\% and 55\% & \textbf{0.343} & 0.336 & 0.270 & 0.209 & 0.250 & 0.090 & 0.254 & 0.091 \\ 300 & I-1 & 40\% and 55\% & 0.331 & \textbf{0.332} & 0.285 & 0.217 & 0.265 & 0.113 & 0.267 & 0.103 \\ \hline 300 & I-2 & 25\% and 25\%& 0.672 & 0.670 & 0.542 & 0.540 & 0.531 & 0.331 & \textbf{0.677} & 0.394 \\ 300 & I-2 & 50\% and 50\%& 0.490 & 0.486 & 0.588 & 0.605 & 0.589 & 0.489 & \textbf{0.670} & 0.514 \\ 300 & I-2 & 27\% and 55\% & 0.484 & 0.470 & 0.616 & 0.634 & 0.609 & 0.514 & \textbf{0.681} & 0.571 \\ 300 & I-2 & 40\% and 55\% & 0.481 & 0.469 & 0.603 & 0.621 & 0.596 & 0.528 & \textbf{0.665} & 0.558 \\ \hline 300 & I-3 & 25\% and 25\%& \textbf{0.555 }& \textbf{0.555} & 0.446 & 0.423 & 0.428 & 0.226 & 0.507 & 0.372 \\ 300 & I-3 & 50\% and 50\%& 0.584 & 0.581 & 0.642 & 0.624 & 0.611 & 0.577 & 0.648 & \textbf{0.673 }\\ 300 & I-3 & 27\% and 55\% & 0.479 & 0.469 & 0.497 & 0.471 & 0.488 & 0.321 & \textbf{0.548} & 0.277 \\ 300 & I-3 & 40\% and 55\% & 0.453 & 0.448 & 0.517 & 0.493 & 0.496 & 0.347 & \textbf{0.562} & 0.309 \\ \hline 300 & J-1 & 25\% and 25\%& \textbf{0.928} & 0.927 & 0.568 & 0.360 & 0.509 & 0.051 & 0.238 & 0.060 \\ 300 & J-1 & 50\% and 50\%& \textbf{0.738} & 0.735 & 0.507 & 0.345 & 0.458 & 0.137 & 0.411 & 0.112 \\ 300 & J-1 & 27\% and 55\% & \textbf{0.712} & 0.689 & 0.506 & 0.327 & 0.453 & 0.103 & 0.359 & 0.100 \\ 300 & J-1 & 40\% and 55\% & \textbf{0.715} & 0.708 & 0.520 & 0.342 & 0.464 & 0.146 & 0.394 & 0.131 \\ \hline 300 & J-2 & 25\% and 25\%& 0.843 & \textbf{0.846} & 0.360 & 0.432 & 0.386 & 0.084 & 0.106 & 0.141 \\ 300 & J-2 & 50\% and 50\%& \textbf{0.528} & 0.524 & 0.147 & 0.150 & 0.135 & 0.077 & 0.175 & 0.060 \\ 300 & J-2 & 27\% and 55\% & \textbf{0.555} & 0.541 & 0.165 & 0.168 & 0.149 & 0.067 & 0.184 & 0.066 \\ 300 & J-2 & 40\% and 55\% & \textbf{0.493} & 0.488 & 0.147 & 0.143 & 0.140 & 0.077 & 0.179 & 0.078 \\ \hline 300 & J-3 & 25\% and 25\%& \textbf{1.000} & \textbf{1.000 }& 0.931 & 0.941 & 0.924 & 0.940 & 0.908 & 0.964 \\ 300 & J-3 & 50\% and 50\%& \textbf{0.968} & 0.967 & 0.902 & 0.891 & 0.862 & 0.883 & 0.716 & 0.891 \\ 300 & J-3 & 27\% and 55\% & \textbf{0.917} & 0.902 & 0.898 & 0.887 & 0.861 & 0.861 & 0.700 & 0.853 \\ 300 & J-3 & 40\% and 55\% & \textbf{0.914} & 0.902 & 0.882 & 0.861 & 0.837 & 0.827 & 0.671 & 0.819 \\ \hline 300 & K-1 & 25\% and 25\%& \textbf{0.770} & 0.769 & 0.616 & 0.556 & 0.578 & 0.286 & 0.052 & 0.337 \\ 300 & K-1 & 50\% and 50\%& 0.406 & \textbf{0.408} & 0.290 & 0.172 & 0.227 & 0.068 & 0.065 & 0.073 \\ 300 & K-1 & 27\% and 55\% & \textbf{0.407} & 0.400 & 0.302 & 0.195 & 0.243 & 0.066 & 0.066 & 0.071 \\ 300 & K-1 & 40\% and 55\% & \textbf{0.395} & 0.390 & 0.272 & 0.169 & 0.230 & 0.050 & 0.057 & 0.055 \\ \hline 300 & K-2 & 25\% and 25\%& 0.946 & \textbf{0.947} & 0.783 & 0.527 & 0.735 & 0.048 & 0.318 & 0.074 \\ 300 & K-2 & 50\% and 50\%& \textbf{0.742} & 0.740 & 0.735 & 0.455 & 0.699 & 0.181 & 0.519 & 0.146 \\ 300 & K-2 & 27\% and 55\% & \textbf{0.740 }& 0.724 & 0.734 & 0.431 & 0.679 & 0.127 & 0.481 & 0.107 \\ 300 & K-2 & 40\% and 55\% & \textbf{0.758} & 0.750 & 0.745 & 0.483 & 0.702 & 0.167 & 0.542 & 0.137 \\ \hline 300 & K-3 & 25\% and 25\%& \textbf{1.000 }& \textbf{1.000} & 0.991 & 0.995 & 0.992 & 0.054 & 0.823 & 0.178 \\ 300 & K-3 & 50\% and 50\%& \textbf{0.992} & \textbf{0.992} & 0.881 & 0.976 & 0.950 & 0.367 & 0.916 & 0.580 \\ 300 & K-3 
& 27\% and 55\% & \textbf{0.988 }& 0.987 & 0.931 & 0.983 & 0.962 & 0.298 & 0.897 & 0.682 \\ 300 & K-3 & 40\% and 55\% & \textbf{0.984} & 0.983 & 0.886 & 0.982 & 0.958 & 0.363 & 0.912 & 0.678 \\ \hline 300 & L & 25\% and 25\%& 0.980 & 0.980 & 0.993 & 0.993 & 0.991 & \textbf{0.992} & 0.980 & 0.988 \\ 300 & L & 50\% and 50\%& 0.887 & 0.886 & 0.950 & 0.948 & 0.936 & \textbf{0.943 }& 0.933 & 0.941 \\ 300 & L & 27\% and 55\% & 0.934 & 0.934 & 0.976 & 0.974 & 0.969 & \textbf{0.973} & 0.962 & 0.970 \\ 300 & L & 40\% and 55\% & 0.893 & 0.891 & 0.954 & 0.954 & 0.935 & \textbf{0.950} & 0.932 & 0.945 \\ \hline 300 & M & 25\% and 25\%& 0.750 & 0.755 & 0.913 & 0.912 & 0.886 & \textbf{0.907} & 0.854 & 0.902 \\ 300 & M & 50\% and 50\%& 0.510 & 0.519 & 0.766 & 0.762 & 0.682 & \textbf{0.751} & 0.680 & 0.735 \\ 300 & M & 27\% and 55\% & 0.590 & 0.591 & 0.810 & 0.804 & 0.769 & \textbf{0.796} & 0.756 & 0.796 \\ 300 & M & 40\% and 55\% & 0.569 & 0.570 & 0.800 & 0.794 & 0.751 & \textbf{0.782} & 0.748 & 0.779 \\ \hline 300 & N & 25\% and 25\%& 0.922 & 0.921 & 0.966 & 0.966 & 0.961 & \textbf{0.964} & 0.939 & 0.958 \\ 300 & N & 50\% and 50\%& 0.847 & 0.846 & 0.901 & 0.903 & 0.893 & \textbf{0.896} & 0.880 & 0.872 \\ 300 & N & 27\% and 55\% & 0.607 & 0.590 & 0.836 & 0.843 & 0.807 & \textbf{0.832} & 0.807 & 0.808 \\ 300 & N & 40\% and 55\% & 0.664 & 0.645 & 0.836 & 0.837 & 0.804 & \textbf{0.826} & 0.803 & 0.814 \\ \hline 300 & O & 25\% and 25\%& 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 300 & O & 50\% and 50\%& 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 300 & O & 27\% and 55\% & 0.991 & 0.983 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 300 & O & 40\% and 55\% & 0.995 & 0.994 & 1.000 & 1.000 & 1.000 & 1.000 & 0.999 & 0.999 \\ \hline 300 & P & 25\% and 25\%& 0.994 & 0.994 & 0.999 & 0.999 & 0.998 & 0.997 & 0.998 & 0.998 \\ 300 & P & 50\% and 50\%& 0.987 & 0.987 & 0.995 & 0.996 & 0.995 & 0.996 & 0.996 & 0.995 \\ 300 & P & 27\% and 55\% & 0.850 & 0.827 & 0.990 & 0.990 & 0.988 & 0.987 & 0.992 & 0.986 \\ 300 & P & 40\% and 55\% & 0.966 & 0.963 & 0.991 & 0.993 & 0.989 & 0.990 & 0.994 & 0.983 \\ \hline 300 & Q & 25\% and 25\%& 0.436 & 0.435 & 0.531 & 0.540 & \textbf{0.518} & 0.478 & 0.305 & 0.359 \\ 300 & Q & 50\% and 50\%& 0.155 & 0.155 & 0.200 & 0.191 & 0.176 & \textbf{0.180} & 0.149 & 0.126 \\ 300 & Q & 27\% and 55\% & 0.254 & 0.249 & 0.300 & 0.303 & \textbf{0.260} & 0.247 & 0.158 & 0.261 \\ 300 & Q & 40\% and 55\% & 0.231 & 0.228 & 0.285 & 0.286 & \textbf{0.248 }& 0.233 & 0.155 & 0.240 \\ \hline 400 & Null & 25\% and 25\%& 0.052 & 0.052 & 0.055 & 0.054 & 0.052 & 0.051 & 0.052 & 0.048 \\ 400 & Null & 50\% and 50\%& 0.045 & 0.044 & 0.052 & 0.049 & 0.046 & 0.045 & 0.048 & 0.047 \\ 400 & Null & 27\% and 55\% & 0.043 & 0.043 & 0.053 & 0.054 & 0.050 & 0.051 & 0.050 & 0.046 \\ 400 & Null & 40\% and 55\% & 0.053 & 0.051 & 0.058 & 0.059 & 0.049 & 0.050 & 0.049 & 0.049 \\ \hline 400 & A & 25\% and 25\%& \textbf{0.991} & \textbf{0.991 }& 0.989 & 0.991 & 0.989 & 0.970 & 0.830 & 0.875 \\ 400 & A & 50\% and 50\%& \textbf{0.845} & \textbf{0.845} & 0.735 & 0.788 & 0.731 & 0.646 & 0.429 & 0.558 \\ 400 & A & 27\% and 55\% & \textbf{0.940 }& 0.939 & 0.845 & 0.875 & 0.840 & 0.802 & 0.568 & 0.782 \\ 400 & A & 40\% and 55\% & \textbf{0.911} & 0.908 & 0.799 & 0.831 & 0.784 & 0.734 & 0.495 & 0.708 \\ \hline 400 & B & 25\% and 25\%& 1\textbf{.000} & \textbf{1.000} & 0.981 & 0.716 & 0.984 & 0.252 & 0.762 & 0.637 \\ 400 & B & 50\% and 50\%& \textbf{1.000 }& \textbf{1.000} & 0.990 & 0.947 & 0.994 & 
0.743 & 0.959 & 0.995 \\ 400 & B & 27\% and 55\% & \textbf{0.999} & 0.998 & 0.989 & 0.849 & 0.992 & 0.504 & 0.902 & 0.663 \\ 400 & B & 40\% and 55\% & 0.996 & \textbf{0.997} & 0.993 & 0.876 & 0.993 & 0.556 & 0.921 & 0.674 \\ \hline 400 & C & 25\% and 25\%& \textbf{1.000} & \textbf{1.000} & 0.993 & 0.968 & 0.993 & 0.585 & 0.978 & 0.963 \\ 400 & C & 50\% and 50\%& \textbf{1.000} & \textbf{1.000} & 0.994 & 0.990 & 0.994 & 0.943 & 0.993 & 0.992 \\ 400 & C & 27\% and 55\% & \textbf{1.000} & \textbf{1.000 }& 0.995 & 0.988 & 0.992 & 0.855 & 0.994 & 0.928 \\ 400 & C & 40\% and 55\% & \textbf{1.000} & \textbf{1.000} & 0.996 & 0.988 & 0.993 & 0.932 & 0.994 & 0.967 \\ \hline 400 & D & 25\% and 25\%& \textbf{1.000} & \textbf{1.000} & 0.968 & 0.843 & 0.964 & 0.411 & 0.916 & 0.965 \\ 400 & D & 50\% and 50\%& \textbf{0.999 }& \textbf{0.999 }& 0.991 & 0.988 & 0.989 & 0.908 & 0.992 & 0.996 \\ 400 & D & 27\% and 55\% & \textbf{0.999} & \textbf{0.999 }& 0.985 & 0.963 & 0.978 & 0.806 & 0.981 & 0.977 \\ 400 & D & 40\% and 55\% & \textbf{0.998} & \textbf{0.998} & 0.987 & 0.976 & 0.980 & 0.876 & 0.986 & 0.983 \\ \hline 400 & E & 25\% and 25\%& \textbf{0.993} & \textbf{0.993 }& 0.994 & 0.995 & \textbf{0.993} & 0.987 & 0.889 & 0.898 \\ 400 & E & 50\% and 50\%& \textbf{0.862} & 0.861 & 0.823 & 0.845 & 0.797 & 0.747 & 0.522 & 0.531 \\ 400 & E & 27\% and 55\% & 0.955 & \textbf{0.956} & 0.955 & 0.959 & 0.929 & 0.922 & 0.677 & 0.884 \\ 400 & E & 40\% and 55\% & \textbf{0.909 }& 0.905 & 0.881 & 0.891 & 0.849 & 0.822 & 0.572 & 0.765 \\ \hline 400 & F & 25\% and 25\%& 0.896 & \textbf{0.897} & 0.488 & 0.721 & 0.703 & 0.072 & 0.188 & 0.060 \\ 400 & F & 50\% and 50\%& \textbf{0.679} & 0.678 & 0.481 & 0.623 & 0.610 & 0.298 & 0.527 & 0.554 \\ 400 & F & 27\% and 55\% & \textbf{0.616} & 0.612 & 0.403 & 0.600 & 0.573 & 0.188 & 0.457 & 0.190 \\ 400 & F & 40\% and 55\% & \textbf{0.623 }& 0.620 & 0.407 & 0.598 & 0.571 & 0.186 & 0.458 & 0.176 \\ \hline 400 & G & 25\% and 25\%& 0.990 & 0.990 & 0.993 & 0.994 & 0.994 & 0.976 & \textbf{0.998} & 0.993 \\ 400 & G & 50\% and 50\%& 0.991 & 0.991 & 0.998 & 0.998 & 0.997 & 0.996 & \textbf{0.999} & 0.998 \\ 400 & G & 27\% and 55\% & 0.981 & 0.981 & 0.991 & 0.991 & 0.990 & 0.977 & \textbf{0.995 }& 0.984 \\ 400 & G & 40\% and 55\% & 0.962 & 0.961 & 0.986 & 0.985 & 0.984 & 0.972 & \textbf{0.992} & 0.976 \\ \hline 400 & H & 25\% and 25\%& \textbf{0.995} & \textbf{0.995 }& 0.935 & 0.942 & 0.941 & 0.772 & 0.962 & 0.942 \\ 400 & H & 50\% and 50\%& 0.991 & 0.991 & 0.995 & 0.995 & 0.995 & 0.995 & \textbf{0.997} & 0.995 \\ 400 & H & 27\% and 55\% & \textbf{0.984} & 0.982 & 0.947 & 0.945 & 0.941 & 0.883 & 0.973 & 0.895 \\ 400 & H & 40\% and 55\% & \textbf{0.977} & 0.976 & 0.945 & 0.941 & 0.940 & 0.882 & 0.967 & 0.888 \\ \hline 400 & I-1 & 25\% and 25\%& \textbf{0.747} & 0.750 & 0.395 & 0.345 & 0.379 & 0.046 & 0.254 & 0.052 \\ 400 & I-1 & 50\% and 50\%& \textbf{0.443} & 0.439 & 0.367 & 0.305 & 0.351 & 0.138 & 0.358 & 0.120 \\ 400 & I-1 & 27\% and 55\% & \textbf{0.485} & 0.471 & 0.347 & 0.290 & 0.327 & 0.112 & 0.333 & 0.110 \\ 400 & I-1 & 40\% and 55\% & \textbf{0.466} & 0.463 & 0.375 & 0.292 & 0.343 & 0.136 & 0.363 & 0.126 \\ \hline 400 & I-2 & 25\% and 25\%& \textbf{0.806} & \textbf{0.806} & 0.674 & 0.665 & 0.668 & 0.419 & 0.797 & 0.489 \\ 400 & I-2 & 50\% and 50\%& 0.615 & 0.610 & 0.693 & 0.706 & 0.694 & 0.577 & \textbf{0.774} & 0.614 \\ 400 & I-2 & 27\% and 55\% & 0.600 & 0.587 & 0.726 & 0.733 & 0.720 & 0.601 & \textbf{0.793} & 0.688 \\ 400 & I-2 & 40\% and 55\% & 0.601 & 0.595 & 0.728 & 0.737 & 0.710 & 
0.630 & \textbf{0.790} & 0.689 \\ \hline 400 & I-3 & 25\% and 25\%& \textbf{0.708} & 0.706 & 0.564 & 0.542 & 0.555 & 0.309 & 0.630 & 0.497 \\ 400 & I-3 & 50\% and 50\%& 0.720 & 0.715 & 0.774 & 0.759 & 0.755 & 0.711 & \textbf{0.789 }& 0.815 \\ 400 & I-3 & 27\% and 55\% & 0.608 & 0.592 & 0.587 & 0.573 & 0.576 & 0.369 & \textbf{0.656} & 0.320 \\ 400 & I-3 & 40\% and 55\% & 0.591 & 0.586 & 0.618 & 0.599 & 0.601 & 0.437 & \textbf{0.679 }& 0.383 \\ \hline 400 & J-1 & 25\% and 25\%& \textbf{0.981} & \textbf{0.981} & 0.686 & 0.457 & 0.623 & 0.056 & 0.293 & 0.079 \\ 400 & J-1 & 50\% and 50\%& \textbf{0.872} & 0.870 & 0.634 & 0.452 & 0.573 & 0.168 & 0.523 & 0.128 \\ 400 & J-1 & 27\% and 55\% & \textbf{0.847} & 0.834 & 0.623 & 0.419 & 0.572 & 0.137 & 0.462 & 0.129 \\ 400 & J-1 & 40\% and 55\% & \textbf{0.861} & 0.853 & 0.637 & 0.463 & 0.578 & 0.175 & 0.529 & 0.161 \\ \hline 400 & J-2 & 25\% and 25\%& 0.946 & \textbf{0.947} & 0.455 & 0.541 & 0.485 & 0.108 & 0.127 & 0.177 \\ 400 & J-2 & 50\% and 50\%& \textbf{0.697} & 0.690 & 0.189 & 0.204 & 0.177 & 0.093 & 0.240 & 0.070 \\ 400 & J-2 & 27\% and 55\% & \textbf{0.689} & 0.677 & 0.184 & 0.189 & 0.170 & 0.071 & 0.208 & 0.067 \\ 400 & J-2 & 40\% and 55\% & \textbf{0.678} & 0.672 & 0.187 & 0.203 & 0.180 & 0.092 & 0.245 & 0.086 \\ \hline 400 & J-3 & 25\% and 25\%& \textbf{1.000 }& \textbf{1.000} & 0.980 & 0.985 & 0.984 & 0.985 & 0.971 & 0.992 \\ 400 & J-3 & 50\% and 50\%& \textbf{0.995} & \textbf{0.995} & 0.945 & 0.941 & 0.928 & 0.933 & 0.824 & 0.942 \\ 400 & J-3 & 27\% and 55\% & \textbf{0.980} & 0.972 & 0.956 & 0.949 & 0.937 & 0.936 & 0.838 & 0.937 \\ 400 & J-3 & 40\% and 55\% & \textbf{0.970} & 0.965 & 0.951 & 0.942 & 0.921 & 0.926 & 0.797 & 0.927 \\ \hline 400 & K-1 & 25\% and 25\%& 0.889 & \textbf{0.892} & 0.738 & 0.687 & 0.708 & 0.355 & 0.049 & 0.421 \\ 400 & K-1 & 50\% and 50\%& \textbf{0.537} & 0.532 & 0.332 & 0.193 & 0.263 & 0.068 & 0.071 & 0.079 \\ 400 & K-1 & 27\% and 55\% & \textbf{0.552} & 0.542 & 0.377 & 0.245 & 0.304 & 0.078 & 0.062 & 0.083 \\ 400 & K-1 & 40\% and 55\% & \textbf{0.513} & 0.503 & 0.329 & 0.219 & 0.261 & 0.072 & 0.063 & 0.075 \\ \hline 400 & K-2 & 25\% and 25\%& \textbf{0.989} & \textbf{0.989} & 0.878 & 0.639 & 0.849 & 0.047 & 0.408 & 0.084 \\ 400 & K-2 & 50\% and 50\%& \textbf{0.887} & 0.886 & 0.839 & 0.594 & 0.804 & 0.246 & 0.657 & 0.200 \\ 400 & K-2 & 27\% and 55\% & \textbf{0.862} & 0.844 & 0.820 & 0.536 & 0.776 & 0.155 & 0.571 & 0.132 \\ 400 & K-2 & 40\% and 55\% & \textbf{0.885} & 0.878 & 0.825 & 0.557 & 0.783 & 0.201 & 0.612 & 0.160 \\ \hline 400 & K-3 & 25\% and 25\%& 1.000 & 1.000 & 0.999 & 1.000 & 0.999 & 0.064 & 0.912 & 0.221 \\ 400 & K-3 & 50\% and 50\%& \textbf{0.999} & \textbf{0.999} & 0.968 & 0.997 & 0.988 & 0.480 & 0.970 & 0.694 \\ 400 & K-3 & 27\% and 55\% & \textbf{1.000} & \textbf{1.000} & 0.984 & 0.997 & 0.992 & 0.334 & 0.967 & 0.803 \\ 400 & K-3 & 40\% and 55\% & \textbf{0.998} & \textbf{0.998} & 0.972 & 0.997 & 0.988 & 0.455 & 0.973 & 0.791 \\ \hline 400 & L & 25\% and 25\%& 0.998 & 0.998 & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 \\ 400 & L & 50\% and 50\%& 0.953 & 0.951 & 0.986 & 0.984 & 0.977 & \textbf{0.984} & 0.973 & 0.981 \\ 400 & L & 27\% and 55\% & 0.976 & 0.976 & 0.992 & 0.991 & 0.990 & \textbf{0.992 }& 0.983 & 0.992 \\ 400 & L & 40\% and 55\% & 0.958 & 0.957 & 0.993 & 0.993 & 0.988 & \textbf{0.994} & 0.985 & 0.991 \\ \hline 400 & M & 25\% and 25\%& 0.876 & 0.878 & 0.960 & 0.960 & 0.953 & \textbf{0.958} & 0.930 & 0.954 \\ 400 & M & 50\% and 50\%& 0.633 & 0.639 & 0.873 & 0.870 & 0.814 & 
\textbf{0.866} & 0.800 & 0.850 \\ 400 & M & 27\% and 55\% & 0.730 & 0.733 & 0.912 & 0.910 & 0.889 & \textbf{0.904} & 0.879 & 0.900 \\ 400 & M & 40\% and 55\% & 0.715 & 0.714 & 0.905 & 0.903 & 0.876 & \textbf{0.896} & 0.870 & 0.895 \\ \hline 400 & N & 25\% and 25\%& 0.983 & 0.982 & 0.992 & 0.992 & 0.992 & \textbf{0.993} & 0.985 & 0.991 \\ 400 & N & 50\% and 50\%& 0.934 & 0.932 & 0.964 & 0.965 & 0.960 & \textbf{0.962} & 0.956 & 0.950 \\ 400 & N & 27\% and 55\% & 0.714 & 0.696 & 0.928 & 0.931 & 0.914 & \textbf{0.925} & 0.909 & 0.918 \\ 400 & N & 40\% and 55\% & 0.769 & 0.751 & 0.919 & 0.923 & 0.902 & \textbf{0.918} & 0.902 & 0.912 \\ \hline 400 & O & 25\% and 25\%& 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & O & 50\% and 50\%& 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & O & 27\% and 55\% & 0.995 & 0.991 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & O & 40\% and 55\% & 0.999 & 0.998 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ \hline 400 & P & 25\% and 25\%& 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & P & 50\% and 50\%& 0.999 & 0.999 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & P & 27\% and 55\% & 0.900 & 0.872 & 1.000 & 1.000 & 0.999 & 1.000 & 0.999 & 0.998 \\ 400 & P & 40\% and 55\% & 0.989 & 0.988 & 0.997 & 0.998 & 0.997 & 0.997 & 0.999 & 0.997 \\ \hline 400 & Q & 25\% and 25\%& 0.534 & 0.533 & 0.634 & 0.644 & \textbf{0.624} & 0.577 & 0.378 & 0.451 \\ 400 & Q & 50\% and 50\%& 0.210 & 0.208 & 0.257 & 0.251 & 0.236 & \textbf{0.238} & 0.191 & 0.159 \\ 400 & Q & 27\% and 55\% & 0.333 & 0.329 & 0.372 & 0.386 & \textbf{0.339} & 0.312 & 0.200 & 0.327 \\ 400 & Q & 40\% and 55\% & 0.319 & 0.315 & 0.353 & 0.356 & 0.319 & 0.312 & 0.211 & \textbf{0.327 }\\ \hline \end{longtable} \begin{longtable}[c]{cccccccc} \caption{Empirical power of all scenarios with $K=3,4,5$ groups: KONP-P and KONP-LR are our two proposed KONP-Pearson and KONP-likelihood ratio test, respectively; LR and PP are the logrank the Peto--Peto tests} \label{K_sample_table} \\ \hline K & Scenario & n & Censoring & KONP-P & KONP-LR & LR & PP \\ \hline 3 & Null & 102 & equal - 25\% & 0.049 & 0.047 & 0.045 & 0.050 \\ 3 & Null & 102 & equal - 50\% & 0.051 & 0.049 & 0.055 & 0.057 \\ 3 & Null & 102 & unequal - mild & 0.051 & 0.046 & 0.057 & 0.059 \\ 3 & Null & 102 & unequal - severe & 0.049 & 0.048 & 0.058 & 0.052 \\ \hline 3 & D & 102 & equal - 25\% & \textbf{0.587} & 0.581 & 0.122 & 0.286 \\ 3 & D & 102 & equal - 50\% & 0.399 & 0.382 & 0.304 & \textbf{0.440} \\ 3 & D & 102 & unequal - mild & 0.351 & 0.338 & 0.274 & \textbf{0.403} \\ 3 & D & 102 & unequal - severe & \textbf{0.425} & 0.407 & 0.272 & 0.413 \\ \hline 3 & J-2 & 102 & equal - 25\% & \textbf{0.207} & 0.206 & 0.051 & 0.068 \\ 3 & J-2 & 102 & equal - 50\% & \textbf{0.127 }& 0.124 & 0.062 & 0.079 \\ 3 & J-2 & 102 & unequal - mild & \textbf{0.124} & 0.121 & 0.078 & 0.094 \\ 3 & J-2 & 102 & unequal - severe & 0.113 & \textbf{0.117} & 0.054 & 0.071 \\ \hline 3 & Null & 201 & equal - 25\% & 0.056 & 0.055 & 0.054 & 0.052 \\ 3 & Null & 201 & equal - 50\% & 0.046 & 0.048 & 0.057 & 0.054 \\ 3 & Null & 201 & unequal - mild & 0.050 & 0.048 & 0.060 & 0.055 \\ 3 & Null & 201 & unequal - severe & 0.055 & 0.055 & 0.058 & 0.060 \\ \hline 3 & D & 201 & equal - 25\% & \textbf{0.922} & \textbf{0.922} & 0.178 & 0.493 \\ 3 & D & 201 & equal - 50\% & \textbf{0.796} & 0.794 & 0.483 & 0.690 \\ 3 & D & 201 & unequal - mild & \textbf{0.740} & 0.731 & 0.478 & 0.700 \\ 3 & D & 201 & unequal - severe & 
\textbf{0.797} & 0.790 & 0.430 & 0.670 \\ \hline 3 & J-2 & 201 & equal - 25\% & 0.483 & \textbf{0.490 }& 0.076 & 0.090 \\ 3 & J-2 & 201 & equal - 50\% & \textbf{0.223} & 0.217 & 0.063 & 0.117 \\ 3 & J-2 & 201 & unequal - mild & \textbf{0.220} & 0.216 & 0.070 & 0.116 \\ 3 & J-2 & 201 & unequal - severe & 0.208 & \textbf{0.211} & 0.070 & 0.110 \\ \hline 3 & Null & 300 & equal - 25\% & 0.052 & 0.052 & 0.046 & 0.046 \\ 3 & Null & 300 & equal - 50\% & 0.048 & 0.047 & 0.049 & 0.048 \\ 3 & Null & 300 & unequal - mild & 0.039 & 0.037 & 0.057 & 0.062 \\ 3 & Null & 300 & unequal - severe & 0.049 & 0.047 & 0.055 & 0.062 \\ \hline 3 & D & 300 & equal - 25\% & \textbf{0.993} & \textbf{0.993} & 0.240 & 0.643 \\ 3 & D & 300 & equal - 50\% & \textbf{0.962} & \textbf{0.962} & 0.642 & 0.867 \\ 3 & D & 300 & unequal - mild & \textbf{0.922} & 0.916 & 0.630 & 0.853 \\ 3 & D & 300 & unequal - severe & \textbf{0.956} & 0.951 & 0.580 & 0.839 \\ \hline 3 & J-2 & 300 & equal - 25\% & 0.668 & \textbf{0.670} & 0.067 & 0.095 \\ 3 & J-2 & 300 & equal - 50\% & \textbf{0.372} & 0.368 & 0.076 & 0.137 \\ 3 & J-2 & 300 & unequal - mild & \textbf{0.317} & 0.309 & 0.080 & 0.147 \\ 3 & J-2 & 300 & unequal - severe & \textbf{0.343} & 0.336 & 0.068 & 0.149 \\ \hline 3 & Null & 402 & equal - 25\% & 0.052 & 0.052 & 0.047 & 0.049 \\ 3 & Null & 402 & equal - 50\% & 0.053 & 0.055 & 0.054 & 0.055 \\ 3 & Null & 402 & unequal - mild & 0.042 & 0.041 & 0.048 & 0.051 \\ 3 & Null & 402 & unequal - severe & 0.051 & 0.050 & 0.054 & 0.053 \\ \hline 3 & D & 402 & equal - 25\% & \textbf{1.000} & \textbf{1.000} & 0.326 & 0.779 \\ 3 & D & 402 & equal - 50\% & 0.989 & \textbf{0.990} & 0.776 & 0.933 \\ 3 & D & 402 & unequal - mild & \textbf{0.982} & 0.980 & 0.742 & 0.924 \\ 3 & D & 402 & unequal - severe & \textbf{0.991} & 0.990 & 0.704 & 0.930 \\ \hline 3 & J-2 & 402 & equal - 25\% & 0.836 & \textbf{0.839} & 0.089 & 0.115 \\ 3 & J-2 & 402 & equal - 50\% & \textbf{0.511} & 0.509 & 0.070 & 0.157 \\ 3 & J-2 & 402 & unequal - mild & \textbf{0.445} & 0.434 & 0.093 & 0.179 \\ 3 & J-2 & 402 & unequal - severe & \textbf{0.435} & 0.429 & 0.070 & 0.154 \\ \hline 4 & Null & 100 & equal - 25\% & 0.057 & 0.057 & 0.062 & 0.059 \\ 4 & Null & 100 & equal - 50\% & 0.038 & 0.038 & 0.045 & 0.044 \\ 4 & Null & 100 & unequal - mild & 0.051 & 0.049 & 0.056 & 0.047 \\ 4 & Null & 100 & unequal - severe & 0.041 & 0.039 & 0.050 & 0.045 \\ \hline 4 & Null & 200 & equal - 25\% & 0.047 & 0.048 & 0.060 & 0.056 \\ 4 & Null & 200 & equal - 50\% & 0.055 & 0.052 & 0.047 & 0.045 \\ 4 & Null & 200 & unequal - mild & 0.052 & 0.047 & 0.057 & 0.055 \\ 4 & Null & 200 & unequal - severe & 0.057 & 0.055 & 0.047 & 0.050 \\ \hline 4 & Null & 300 & equal - 25\% & 0.058 & 0.057 & 0.043 & 0.044 \\ 4 & Null & 300 & equal - 50\% & 0.044 & 0.042 & 0.040 & 0.042 \\ 4 & Null & 300 & unequal - mild & 0.040 & 0.038 & 0.050 & 0.052 \\ 4 & Null & 300 & unequal - severe & 0.050 & 0.051 & 0.056 & 0.058 \\ \hline 4 & Null & 400 & equal - 25\% & 0.051 & 0.050 & 0.052 & 0.052 \\ 4 & Null & 400 & equal - 50\% & 0.042 & 0.043 & 0.049 & 0.047 \\ 4 & Null & 400 & unequal - mild & 0.044 & 0.043 & 0.041 & 0.045 \\ 4 & Null & 400 & unequal - severe & 0.054 & 0.054 & 0.057 & 0.057 \\ \hline 5 & Null & 100 & equal - 25\% & 0.052 & 0.050 & 0.063 & 0.053 \\ 5 & Null & 100 & equal - 50\% & 0.049 & 0.048 & 0.058 & 0.057 \\ 5 & Null & 100 & unequal - mild & 0.050 & 0.051 & 0.053 & 0.050 \\ 5 & Null & 100 & unequal - severe & 0.045 & 0.049 & 0.051 & 0.048 \\ \hline 5 & Null & 200 & equal - 25\% & 0.050 & 0.049 & 0.053 & 
0.049 \\ 5 & Null & 200 & equal - 50\% & 0.042 & 0.042 & 0.056 & 0.050 \\ 5 & Null & 200 & unequal - mild & 0.044 & 0.042 & 0.053 & 0.061 \\ 5 & Null & 200 & unequal - severe & 0.051 & 0.049 & 0.056 & 0.056 \\ \hline 5 & Null & 300 & equal - 25\% & 0.060 & 0.060 & 0.057 & 0.045 \\ 5 & Null & 300 & equal - 50\% & 0.052 & 0.052 & 0.052 & 0.053 \\ 5 & Null & 300 & unequal - mild & 0.049 & 0.050 & 0.042 & 0.046 \\ 5 & Null & 300 & unequal - severe & 0.062 & 0.054 & 0.060 & 0.060 \\ \hline 5 & Null & 400 & equal - 25\% & 0.057 & 0.056 & 0.052 & 0.054 \\ 5 & Null & 400 & equal - 50\% & 0.047 & 0.046 & 0.047 & 0.050 \\ 5 & Null & 400 & unequal - mild & 0.059 & 0.055 & 0.046 & 0.049 \\ 5 & Null & 400 & unequal - severe & 0.054 & 0.051 & 0.051 & 0.048 \\ \hline \end{longtable} \begin{longtable}[c]{ccccccccc} \caption{Empirical power of KONP-P and KONP-LR, our two proposed KONP-Pearson and KONP-LR tests, respectively; LR - the logrank test, and the robust test based on the Cauchy-combination test, $Cau$, of Section 3.4 of the main text; {the test of Lee (2007) and the MaxCombo test}. Scenarios L through Q are of proportional hazards or close to it.}\\ \hline $n$ & Scenario & Censoring & KONP-P & KONP-LR & logrank & $Cau$ & {Lee (2007)} & {MaxCombo}\\ \hline 100 & A & 25\% and 25\% & 0.542 & 0.539 & 0.515 & 0.547 & 0.582 & 0.584 \\ 100 & A & 50\% and 50\% & 0.238 & 0.234 & 0.235 & 0.252 & 0.283 & 0.291 \\ 100 & A & 40\% and 55\% & 0.323 & 0.322 & 0.262 & 0.320 & 0.333 & 0.329 \\ 100 & A & 27\% and 55\% & 0.355 & 0.351 & 0.300 & 0.354 & 0.344 & 0.350 \\ \hline 200 & A & 25\% and 25\% & 0.858 & 0.860 & 0.782 & 0.856 & 0.867 & 0.878 \\ 200 & A & 50\% and 50\% & 0.484 & 0.482 & 0.375 & 0.476 & 0.537 & 0.521 \\ 200 & A & 40\% and 55\% & 0.610 & 0.608 & 0.465 & 0.592 & 0.623 & 0.604 \\ 200 & A & 27\% and 55\% & 0.665 & 0.658 & 0.508 & 0.652 & 0.573 & 0.660 \\ \hline 300 & A & 25\% and 25\% & 0.973 & 0.974 & 0.919 & 0.970 & 0.971 & 0.972 \\ 300 & A & 50\% and 50\% & 0.700 & 0.691 & 0.512 & 0.678 & 0.744 & 0.746 \\ 300 & A & 40\% and 55\% & 0.797 & 0.798 & 0.609 & 0.781 & 0.806 & 0.788 \\ 300 & A & 27\% and 55\% & 0.862 & 0.854 & 0.687 & 0.845 & 0.760 & 0.811 \\ \hline 400 & A & 25\% and 25\% & 0.995 & 0.995 & 0.976 & 0.993 & 0.995 & 0.995 \\ 400 & A & 50\% and 50\% & 0.852 & 0.848 & 0.641 & 0.836 & 0.847 & 0.830 \\ 400 & A & 40\% and 55\% & 0.920 & 0.918 & 0.755 & 0.912 & 0.904 & 0.890 \\ 400 & A & 27\% and 55\% & 0.955 & 0.953 & 0.818 & 0.948 & 0.849 & 0.922 \\ \hline 100 & B & 25\% and 25\% & 0.621 & 0.635 & 0.113 & 0.555 & 0.256 & 0.241 \\ 100 & B & 50\% and 50\% & 0.600 & 0.616 & 0.294 & 0.561 & 0.378 & 0.375 \\ 100 & B & 40\% and 55\% & 0.476 & 0.489 & 0.206 & 0.439 & 0.326 & 0.332 \\ 100 & B & 27\% and 55\% & 0.478 & 0.476 & 0.190 & 0.435 & 0.340 & 0.319 \\ \hline 200 & B & 25\% and 25\% & 0.948 & 0.951 & 0.143 & 0.919 & 0.499 & 0.450 \\ 200 & B & 50\% and 50\% & 0.927 & 0.926 & 0.482 & 0.899 & 0.735 & 0.690 \\ 200 & B & 40\% and 55\% & 0.840 & 0.849 & 0.327 & 0.812 & 0.602 & 0.604 \\ 200 & B & 27\% and 55\% & 0.862 & 0.865 & 0.269 & 0.827 & 0.613 & 0.579 \\ \hline 300 & B & 25\% and 25\% & 0.998 & 0.998 & 0.208 & 0.996 & 0.672 & 0.672 \\ 300 & B & 50\% and 50\% & 0.994 & 0.995 & 0.637 & 0.992 & 0.895 & 0.870 \\ 300 & B & 40\% and 55\% & 0.964 & 0.964 & 0.454 & 0.968 & 0.787 & 0.788 \\ 300 & B & 27\% and 55\% & 0.968 & 0.967 & 0.393 & 0.960 & 0.799 & 0.768 \\ \hline 400 & B & 25\% and 25\% & 1.000 & 1.000 & 0.265 & 1.000 & 0.813 & 0.792 \\ 400 & B & 50\% and 50\% & 1.000 & 1.000 & 0.773 & 1.000 & 
0.969 & 0.959 \\ 400 & B & 40\% and 55\% & 0.997 & 0.997 & 0.577 & 0.996 & 0.904 & 0.918 \\ 400 & B & 27\% and 55\% & 0.996 & 0.995 & 0.494 & 0.992 & 0.913 & 0.903 \\ \hline 100 & C & 25\% and 25\% & 0.895 & 0.898 & 0.224 & 0.848 & 0.481 & 0.443 \\ 100 & C & 50\% and 50\% & 0.746 & 0.750 & 0.456 & 0.713 & 0.558 & 0.544 \\ 100 & C & 40\% and 55\% & 0.679 & 0.674 & 0.459 & 0.658 & 0.531 & 0.539 \\ 100 & C & 27\% and 55\% & 0.739 & 0.727 & 0.368 & 0.692 & 0.567 & 0.493 \\ \hline 200 & C & 25\% and 25\% & 0.998 & 0.998 & 0.337 & 0.996 & 0.807 & 0.787 \\ 200 & C & 50\% and 50\% & 0.975 & 0.975 & 0.725 & 0.960 & 0.871 & 0.879 \\ 200 & C & 40\% and 55\% & 0.934 & 0.934 & 0.681 & 0.934 & 0.854 & 0.848 \\ 200 & C & 27\% and 55\% & 0.974 & 0.971 & 0.584 & 0.958 & 0.863 & 0.824 \\ \hline 300 & C & 25\% and 25\% & 1.000 & 1.000 & 0.482 & 1.000 & 0.948 & 0.948 \\ 300 & C & 50\% and 50\% & 0.999 & 0.999 & 0.873 & 0.999 & 0.970 & 0.966 \\ 300 & C & 40\% and 55\% & 0.992 & 0.993 & 0.855 & 0.993 & 0.962 & 0.961 \\ 300 & C & 27\% and 55\% & 1.000 & 1.000 & 0.758 & 0.998 & 0.963 & 0.952 \\ \hline 400 & C & 25\% and 25\% & 1.000 & 1.000 & 0.597 & 0.999 & 0.989 & 0.991 \\ 400 & C & 50\% and 50\% & 1.000 & 1.000 & 0.950 & 1.000 & 0.992 & 0.994 \\ 400 & C & 40\% and 55\% & 1.000 & 1.000 & 0.926 & 1.000 & 0.992 & 0.994 \\ 400 & C & 27\% and 55\% & 1.000 & 1.000 & 0.873 & 1.000 & 0.992 & 0.991 \\ \hline 100 & D & 25\% and 25\% & 0.789 & 0.785 & 0.166 & 0.740 & 0.339 & 0.342 \\ 100 & D & 50\% and 50\% & 0.681 & 0.681 & 0.419 & 0.645 & 0.493 & 0.489 \\ 100 & D & 40\% and 55\% & 0.654 & 0.643 & 0.365 & 0.615 & 0.453 & 0.427 \\ 100 & D & 27\% and 55\% & 0.641 & 0.630 & 0.314 & 0.608 & 0.470 & 0.405 \\ \hline 200 & D & 25\% and 25\% & 0.978 & 0.978 & 0.236 & 0.968 & 0.624 & 0.597 \\ 200 & D & 50\% and 50\% & 0.948 & 0.948 & 0.663 & 0.933 & 0.814 & 0.809 \\ 200 & D & 40\% and 55\% & 0.925 & 0.923 & 0.574 & 0.908 & 0.767 & 0.757 \\ 200 & D & 27\% and 55\% & 0.924 & 0.919 & 0.531 & 0.910 & 0.776 & 0.733 \\ \hline 300 & D & 25\% and 25\% & 0.999 & 0.999 & 0.347 & 0.999 & 0.948 & 0.757 \\ 300 & D & 50\% and 50\% & 0.993 & 0.993 & 0.842 & 0.993 & 0.970 & 0.940 \\ 300 & D & 40\% and 55\% & 0.990 & 0.989 & 0.770 & 0.989 & 0.913 & 0.909 \\ 300 & D & 27\% and 55\% & 0.993 & 0.993 & 0.698 & 0.988 & 0.914 & 0.897 \\ \hline 400 & D & 25\% and 25\% & 1.000 & 1.000 & 0.443 & 1.000 & 0.919 & 0.896 \\ 400 & D & 50\% and 50\% & 0.999 & 0.999 & 0.918 & 0.999 & 0.986 & 0.983 \\ 400 & D & 40\% and 55\% & 0.999 & 0.999 & 0.872 & 0.999 & 0.979 & 0.974 \\ 400 & D & 27\% and 55\% & 1.000 & 0.999 & 0.825 & 0.999 & 0.977 & 0.974 \\ \hline 100 & E & 25\% and 25\% & 0.570 & 0.569 & 0.557 & 0.589 & 0.602 & 0.604 \\ 100 & E & 50\% and 50\% & 0.317 & 0.314 & 0.277 & 0.325 & 0.333 & 0.343 \\ 100 & E & 40\% and 55\% & 0.331 & 0.332 & 0.296 & 0.338 & 0.451 & 0.392 \\ 100 & E & 27\% and 55\% & 0.435 & 0.431 & 0.389 & 0.443 & 0.384 & 0.448 \\ \hline 200 & E & 25\% and 25\% & 0.861 & 0.861 & 0.827 & 0.877 & 0.885 & 0.892 \\ 200 & E & 50\% and 50\% & 0.543 & 0.537 & 0.438 & 0.546 & 0.567 & 0.593 \\ 200 & E & 40\% and 55\% & 0.595 & 0.594 & 0.510 & 0.606 & 0.745 & 0.666 \\ 200 & E & 27\% and 55\% & 0.725 & 0.723 & 0.658 & 0.744 & 0.635 & 0.772 \\ \hline 300 & E & 25\% and 25\% & 0.969 & 0.968 & 0.943 & 0.971 & 0.972 & 0.980 \\ 300 & E & 50\% and 50\% & 0.735 & 0.734 & 0.627 & 0.730 & 0.772 & 0.779 \\ 300 & E & 40\% and 55\% & 0.793 & 0.788 & 0.715 & 0.818 & 0.901 & 0.839 \\ 300 & E & 27\% and 55\% & 0.895 & 0.894 & 0.846 & 0.905 & 0.814 & 0.922 \\ \hline 400 
& E & 25\% and 25\% & 0.990 & 0.990 & 0.982 & 0.993 & 0.996 & 0.997 \\ 400 & E & 50\% and 50\% & 0.880 & 0.879 & 0.772 & 0.879 & 0.884 & 0.878 \\ 400 & E & 40\% and 55\% & 0.911 & 0.909 & 0.819 & 0.913 & 0.969 & 0.923 \\ 400 & E & 27\% and 55\% & 0.965 & 0.963 & 0.928 & 0.971 & 0.902 & 0.967 \\ \hline 100 & F & 25\% and 25\% & 0.236 & 0.240 & 0.064 & 0.196 & 0.220 & 0.213 \\ 100 & F & 50\% and 50\% & 0.206 & 0.212 & 0.131 & 0.192 & 0.146 & 0.129 \\ 100 & F & 40\% and 55\% & 0.178 & 0.175 & 0.113 & 0.167 & 0.138 & 0.141 \\ 100 & F & 27\% and 55\% & 0.159 & 0.158 & 0.102 & 0.156 & 0.146 & 0.131 \\ \hline 200 & F & 25\% and 25\% & 0.512 & 0.513 & 0.069 & 0.427 & 0.427 & 0.409 \\ 200 & F & 50\% and 50\% & 0.361 & 0.359 & 0.189 & 0.319 & 0.250 & 0.247 \\ 200 & F & 40\% and 55\% & 0.315 & 0.314 & 0.140 & 0.276 & 0.264 & 0.227 \\ 200 & F & 27\% and 55\% & 0.293 & 0.287 & 0.133 & 0.254 & 0.254 & 0.212 \\ \hline 300 & F & 25\% and 25\% & 0.764 & 0.763 & 0.062 & 0.679 & 0.619 & 0.579 \\ 300 & F & 50\% and 50\% & 0.524 & 0.527 & 0.256 & 0.466 & 0.358 & 0.336 \\ 300 & F & 40\% and 55\% & 0.487 & 0.486 & 0.185 & 0.422 & 0.373 & 0.349 \\ 300 & F & 27\% and 55\% & 0.449 & 0.447 & 0.173 & 0.393 & 0.358 & 0.320 \\ \hline 400 & F & 25\% and 25\% & 0.904 & 0.905 & 0.059 & 0.836 & 0.741 & 0.733 \\ 400 & F & 50\% and 50\% & 0.666 & 0.662 & 0.313 & 0.611 & 0.466 & 0.461 \\ 400 & F & 40\% and 55\% & 0.626 & 0.623 & 0.206 & 0.572 & 0.453 & 0.449 \\ 400 & F & 27\% and 55\% & 0.607 & 0.602 & 0.182 & 0.548 & 0.457 & 0.471 \\ \hline 100 & G & 25\% and 25\% & 0.505 & 0.504 & 0.471 & 0.520 & 0.538 & 0.527 \\ 100 & G & 50\% and 50\% & 0.558 & 0.553 & 0.594 & 0.587 & 0.579 & 0.599 \\ 100 & G & 40\% and 55\% & 0.442 & 0.433 & 0.518 & 0.490 & 0.518 & 0.508 \\ 100 & G & 27\% and 55\% & 0.463 & 0.453 & 0.486 & 0.479 & 0.519 & 0.513 \\ \hline 200 & G & 25\% and 25\% & 0.823 & 0.822 & 0.791 & 0.832 & 0.843 & 0.834 \\ 200 & G & 50\% and 50\% & 0.849 & 0.849 & 0.886 & 0.870 & 0.866 & 0.873 \\ 200 & G & 40\% and 55\% & 0.756 & 0.750 & 0.804 & 0.800 & 0.827 & 0.829 \\ 200 & G & 27\% and 55\% & 0.784 & 0.781 & 0.783 & 0.807 & 0.835 & 0.832 \\ \hline 300 & G & 25\% and 25\% & 0.964 & 0.964 & 0.908 & 0.969 & 0.964 & 0.962 \\ 300 & G & 50\% and 50\% & 0.965 & 0.965 & 0.973 & 0.972 & 0.972 & 0.969 \\ 300 & G & 40\% and 55\% & 0.900 & 0.896 & 0.937 & 0.930 & 0.958 & 0.952 \\ 300 & G & 27\% and 55\% & 0.932 & 0.930 & 0.920 & 0.943 & 0.951 & 0.949 \\ \hline 400 & G & 25\% and 25\% & 0.991 & 0.991 & 0.972 & 0.991 & 0.993 & 0.993 \\ 400 & G & 50\% and 50\% & 0.993 & 0.993 & 0.994 & 0.996 & 0.995 & 0.995 \\ 400 & G & 40\% and 55\% & 0.972 & 0.972 & 0.977 & 0.982 & 0.990 & 0.983 \\ 400 & G & 27\% and 55\% & 0.984 & 0.985 & 0.982 & 0.987 & 0.990 & 0.989 \\ \hline 100 & H & 25\% and 25\% & 0.518 & 0.515 & 0.268 & 0.465 & 0.385 & 0.355 \\ 100 & H & 50\% and 50\% & 0.556 & 0.550 & 0.610 & 0.588 & 0.573 & 0.606 \\ 100 & H & 40\% and 55\% & 0.405 & 0.403 & 0.342 & 0.410 & 0.379 & 0.383 \\ 100 & H & 27\% and 55\% & 0.419 & 0.417 & 0.341 & 0.416 & 0.381 & 0.405 \\ \hline 200 & H & 25\% and 25\% & 0.861 & 0.860 & 0.456 & 0.820 & 0.677 & 0.655 \\ 200 & H & 50\% and 50\% & 0.867 & 0.862 & 0.888 & 0.886 & 0.866 & 0.881 \\ 200 & H & 40\% and 55\% & 0.763 & 0.752 & 0.577 & 0.737 & 0.678 & 0.666 \\ 200 & H & 27\% and 55\% & 0.767 & 0.761 & 0.569 & 0.743 & 0.711 & 0.646 \\ \hline 300 & H & 25\% and 25\% & 0.974 & 0.974 & 0.627 & 0.965 & 0.862 & 0.847 \\ 300 & H & 50\% and 50\% & 0.968 & 0.968 & 0.982 & 0.979 & 0.969 & 0.967 \\ 300 & H & 40\% and 55\% & 0.917 
& 0.917 & 0.774 & 0.905 & 0.865 & 0.865 \\ 300 & H & 27\% and 55\% & 0.918 & 0.914 & 0.756 & 0.906 & 0.869 & 0.840 \\ \hline 400 & H & 25\% and 25\% & 0.998 & 0.998 & 0.735 & 0.996 & 0.939 & 0.939 \\ 400 & H & 50\% and 50\% & 0.995 & 0.993 & 0.994 & 0.995 & 0.996 & 0.992 \\ 400 & H & 40\% and 55\% & 0.980 & 0.979 & 0.874 & 0.974 & 0.950 & 0.939 \\ 400 & H & 27\% and 55\% & 0.979 & 0.978 & 0.871 & 0.977 & 0.951 & 0.951 \\ \hline 100 & I-1 & 25\% and 25\% & 0.186 & 0.190 & 0.058 & 0.151 & 0.151 & 0.144 \\ 100 & I-1 & 50\% and 50\% & 0.097 & 0.100 & 0.081 & 0.095 & 0.122 & 0.111 \\ 100 & I-1 & 40\% and 55\% & 0.130 & 0.127 & 0.076 & 0.122 & 0.128 & 0.107 \\ 100 & I-1 & 27\% and 55\% & 0.134 & 0.131 & 0.067 & 0.129 & 0.118 & 0.105 \\ \hline 200 & I-1 & 25\% and 25\% & 0.401 & 0.408 & 0.043 & 0.331 & 0.289 & 0.263 \\ 200 & I-1 & 50\% and 50\% & 0.211 & 0.211 & 0.102 & 0.186 & 0.197 & 0.185 \\ 200 & I-1 & 40\% and 55\% & 0.210 & 0.203 & 0.104 & 0.183 & 0.191 & 0.172 \\ 200 & I-1 & 27\% and 55\% & 0.238 & 0.233 & 0.087 & 0.199 & 0.190 & 0.182 \\ \hline 300 & I-1 & 25\% and 25\% & 0.613 & 0.616 & 0.038 & 0.524 & 0.391 & 0.352 \\ 300 & I-1 & 50\% and 50\% & 0.339 & 0.334 & 0.140 & 0.307 & 0.261 & 0.273 \\ 300 & I-1 & 40\% and 55\% & 0.322 & 0.317 & 0.111 & 0.293 & 0.268 & 0.244 \\ 300 & I-1 & 27\% and 55\% & 0.364 & 0.358 & 0.086 & 0.312 & 0.265 & 0.235 \\ \hline 400 & I-1 & 25\% and 25\% & 0.752 & 0.758 & 0.048 & 0.670 & 0.545 & 0.468 \\ 400 & I-1 & 50\% and 50\% & 0.439 & 0.440 & 0.141 & 0.388 & 0.353 & 0.326 \\ 400 & I-1 & 40\% and 55\% & 0.464 & 0.458 & 0.129 & 0.403 & 0.339 & 0.322 \\ 400 & I-1 & 27\% and 55\% & 0.463 & 0.452 & 0.092 & 0.393 & 0.324 & 0.306 \\ \hline 100 & I-2 & 25\% and 25\% & 0.235 & 0.233 & 0.153 & 0.222 & 0.223 & 0.224 \\ 100 & I-2 & 50\% and 50\% & 0.188 & 0.182 & 0.197 & 0.192 & 0.221 & 0.218 \\ 100 & I-2 & 40\% and 55\% & 0.184 & 0.180 & 0.233 & 0.209 & 0.245 & 0.227 \\ 100 & I-2 & 27\% and 55\% & 0.180 & 0.178 & 0.250 & 0.224 & 0.237 & 0.228 \\ \hline 200 & I-2 & 25\% and 25\% & 0.462 & 0.460 & 0.240 & 0.434 & 0.428 & 0.396 \\ 200 & I-2 & 50\% and 50\% & 0.353 & 0.342 & 0.356 & 0.374 & 0.399 & 0.393 \\ 200 & I-2 & 40\% and 55\% & 0.309 & 0.300 & 0.365 & 0.365 & 0.435 & 0.401 \\ 200 & I-2 & 27\% and 55\% & 0.309 & 0.301 & 0.357 & 0.362 & 0.435 & 0.405 \\ \hline 300 & I-2 & 25\% and 25\% & 0.641 & 0.642 & 0.314 & 0.595 & 0.603 & 0.555 \\ 300 & I-2 & 50\% and 50\% & 0.485 & 0.478 & 0.467 & 0.507 & 0.573 & 0.543 \\ 300 & I-2 & 40\% and 55\% & 0.457 & 0.441 & 0.503 & 0.515 & 0.598 & 0.569 \\ 300 & I-2 & 27\% and 55\% & 0.446 & 0.438 & 0.500 & 0.524 & 0.583 & 0.576 \\ \hline 400 & I-2 & 25\% and 25\% & 0.814 & 0.814 & 0.458 & 0.778 & 0.726 & 0.688 \\ 400 & I-2 & 50\% and 50\% & 0.642 & 0.638 & 0.620 & 0.685 & 0.704 & 0.673 \\ 400 & I-2 & 40\% and 55\% & 0.627 & 0.615 & 0.636 & 0.666 & 0.739 & 0.703 \\ 400 & I-2 & 27\% and 55\% & 0.615 & 0.603 & 0.635 & 0.672 & 0.738 & 0.703 \\ \hline 100 & I-3 & 25\% and 25\% & 0.182 & 0.183 & 0.118 & 0.173 & 0.178 & 0.150 \\ 100 & I-3 & 50\% and 50\% & 0.229 & 0.226 & 0.229 & 0.242 & 0.243 & 0.223 \\ 100 & I-3 & 40\% and 55\% & 0.170 & 0.167 & 0.151 & 0.172 & 0.174 & 0.154 \\ 100 & I-3 & 27\% and 55\% & 0.172 & 0.169 & 0.138 & 0.168 & 0.171 & 0.158 \\ \hline 200 & I-3 & 25\% and 25\% & 0.371 & 0.371 & 0.150 & 0.325 & 0.304 & 0.284 \\ 200 & I-3 & 50\% and 50\% & 0.412 & 0.407 & 0.411 & 0.420 & 0.410 & 0.406 \\ 200 & I-3 & 40\% and 55\% & 0.301 & 0.297 & 0.233 & 0.300 & 0.308 & 0.319 \\ 200 & I-3 & 27\% and 55\% & 0.309 & 0.301 & 0.200 & 
0.284 & 0.335 & 0.302 \\ \hline 300 & I-3 & 25\% and 25\% & 0.546 & 0.546 & 0.195 & 0.486 & 0.446 & 0.420 \\ 300 & I-3 & 50\% and 50\% & 0.583 & 0.582 & 0.565 & 0.602 & 0.591 & 0.584 \\ 300 & I-3 & 40\% and 55\% & 0.443 & 0.436 & 0.334 & 0.430 & 0.458 & 0.454 \\ 300 & I-3 & 27\% and 55\% & 0.449 & 0.435 & 0.297 & 0.437 & 0.461 & 0.439 \\ \hline 400 & I-3 & 25\% and 25\% & 0.706 & 0.706 & 0.261 & 0.657 & 0.560 & 0.537 \\ 400 & I-3 & 50\% and 50\% & 0.707 & 0.705 & 0.684 & 0.710 & 0.713 & 0.721 \\ 400 & I-3 & 40\% and 55\% & 0.583 & 0.577 & 0.435 & 0.582 & 0.579 & 0.570 \\ 400 & I-3 & 27\% and 55\% & 0.591 & 0.569 & 0.373 & 0.572 & 0.613 & 0.558 \\ \hline 100 & J-1 & 25\% and 25\% & 0.400 & 0.396 & 0.058 & 0.336 & 0.229 & 0.240 \\ 100 & J-1 & 50\% and 50\% & 0.261 & 0.257 & 0.106 & 0.223 & 0.172 & 0.158 \\ 100 & J-1 & 40\% and 55\% & 0.229 & 0.229 & 0.095 & 0.205 & 0.175 & 0.152 \\ 100 & J-1 & 27\% and 55\% & 0.255 & 0.245 & 0.085 & 0.220 & 0.168 & 0.158 \\ \hline 200 & J-1 & 25\% and 25\% & 0.752 & 0.751 & 0.050 & 0.680 & 0.429 & 0.391 \\ 200 & J-1 & 50\% and 50\% & 0.530 & 0.528 & 0.122 & 0.456 & 0.308 & 0.280 \\ 200 & J-1 & 40\% and 55\% & 0.529 & 0.521 & 0.132 & 0.453 & 0.302 & 0.287 \\ 200 & J-1 & 27\% and 55\% & 0.523 & 0.507 & 0.111 & 0.451 & 0.295 & 0.287 \\ \hline 300 & J-1 & 25\% and 25\% & 0.912 & 0.913 & 0.057 & 0.871 & 0.601 & 0.581 \\ 300 & J-1 & 50\% and 50\% & 0.740 & 0.736 & 0.154 & 0.685 & 0.430 & 0.387 \\ 300 & J-1 & 40\% and 55\% & 0.707 & 0.698 & 0.148 & 0.647 & 0.434 & 0.395 \\ 300 & J-1 & 27\% and 55\% & 0.713 & 0.700 & 0.108 & 0.620 & 0.444 & 0.427 \\ \hline 400 & J-1 & 25\% and 25\% & 0.981 & 0.981 & 0.062 & 0.964 & 0.774 & 0.712 \\ 400 & J-1 & 50\% and 50\% & 0.874 & 0.873 & 0.167 & 0.823 & 0.558 & 0.525 \\ 400 & J-1 & 40\% and 55\% & 0.869 & 0.865 & 0.176 & 0.813 & 0.549 & 0.533 \\ 400 & J-1 & 27\% and 55\% & 0.868 & 0.848 & 0.134 & 0.812 & 0.564 & 0.551 \\ \hline 100 & J-2 & 25\% and 25\% & 0.314 & 0.320 & 0.056 & 0.252 & 0.205 & 0.193 \\ 100 & J-2 & 50\% and 50\% & 0.169 & 0.166 & 0.075 & 0.146 & 0.095 & 0.098 \\ 100 & J-2 & 40\% and 55\% & 0.178 & 0.177 & 0.066 & 0.145 & 0.101 & 0.101 \\ 100 & J-2 & 27\% and 55\% & 0.186 & 0.184 & 0.053 & 0.144 & 0.103 & 0.100 \\ \hline 200 & J-2 & 25\% and 25\% & 0.684 & 0.688 & 0.072 & 0.602 & 0.369 & 0.349 \\ 200 & J-2 & 50\% and 50\% & 0.334 & 0.328 & 0.066 & 0.272 & 0.143 & 0.135 \\ 200 & J-2 & 40\% and 55\% & 0.342 & 0.339 & 0.079 & 0.284 & 0.153 & 0.136 \\ 200 & J-2 & 27\% and 55\% & 0.355 & 0.346 & 0.061 & 0.280 & 0.150 & 0.138 \\ \hline 300 & J-2 & 25\% and 25\% & 0.864 & 0.868 & 0.090 & 0.823 & 0.581 & 0.526 \\ 300 & J-2 & 50\% and 50\% & 0.535 & 0.527 & 0.086 & 0.457 & 0.199 & 0.181 \\ 300 & J-2 & 40\% and 55\% & 0.523 & 0.514 & 0.072 & 0.443 & 0.230 & 0.193 \\ 300 & J-2 & 27\% and 55\% & 0.541 & 0.528 & 0.057 & 0.455 & 0.204 & 0.193 \\ \hline 400 & J-2 & 25\% and 25\% & 0.942 & 0.942 & 0.091 & 0.920 & 0.715 & 0.677 \\ 400 & J-2 & 50\% and 50\% & 0.704 & 0.696 & 0.085 & 0.618 & 0.282 & 0.243 \\ 400 & J-2 & 40\% and 55\% & 0.673 & 0.660 & 0.076 & 0.597 & 0.264 & 0.237 \\ 400 & J-2 & 27\% and 55\% & 0.688 & 0.675 & 0.062 & 0.611 & 0.252 & 0.246 \\ \hline 100 & J-3 & 25\% and 25\% & 0.765 & 0.763 & 0.545 & 0.746 & 0.543 & 0.656 \\ 100 & J-3 & 50\% and 50\% & 0.525 & 0.517 & 0.434 & 0.527 & 0.508 & 0.540 \\ 100 & J-3 & 40\% and 55\% & 0.477 & 0.472 & 0.434 & 0.494 & 0.582 & 0.571 \\ 100 & J-3 & 27\% and 55\% & 0.495 & 0.491 & 0.470 & 0.531 & 0.551 & 0.598 \\ \hline 200 & J-3 & 25\% and 25\% & 0.980 & 0.979 & 0.808 & 0.971 
& 0.788 & 0.928 \\ 200 & J-3 & 50\% and 50\% & 0.863 & 0.857 & 0.709 & 0.850 & 0.741 & 0.833 \\ 200 & J-3 & 40\% and 55\% & 0.775 & 0.755 & 0.668 & 0.779 & 0.811 & 0.826 \\ 200 & J-3 & 27\% and 55\% & 0.776 & 0.761 & 0.705 & 0.802 & 0.791 & 0.856 \\ \hline 300 & J-3 & 25\% and 25\% & 0.997 & 0.997 & 0.932 & 0.996 & 0.920 & 0.986 \\ 300 & J-3 & 50\% and 50\% & 0.957 & 0.956 & 0.869 & 0.961 & 0.881 & 0.954 \\ 300 & J-3 & 40\% and 55\% & 0.926 & 0.908 & 0.831 & 0.924 & 0.911 & 0.940 \\ 300 & J-3 & 27\% and 55\% & 0.921 & 0.907 & 0.854 & 0.930 & 0.897 & 0.955 \\ \hline 400 & J-3 & 25\% and 25\% & 1.000 & 1.000 & 0.983 & 1.000 & 0.975 & 1.000 \\ 400 & J-3 & 50\% and 50\% & 0.998 & 0.998 & 0.946 & 0.998 & 0.959 & 0.985 \\ 400 & J-3 & 40\% and 55\% & 0.981 & 0.976 & 0.929 & 0.981 & 0.959 & 0.981 \\ 400 & J-3 & 27\% and 55\% & 0.984 & 0.979 & 0.950 & 0.988 & 0.961 & 0.991 \\ \hline 100 & K-1 & 25\% and 25\% & 0.269 & 0.275 & 0.115 & 0.238 & 0.277 & 0.256 \\ 100 & K-1 & 50\% and 50\% & 0.111 & 0.109 & 0.046 & 0.094 & 0.139 & 0.123 \\ 100 & K-1 & 40\% and 55\% & 0.165 & 0.164 & 0.061 & 0.147 & 0.165 & 0.132 \\ 100 & K-1 & 27\% and 55\% & 0.167 & 0.167 & 0.058 & 0.145 & 0.137 & 0.148 \\ \hline 200 & K-1 & 25\% and 25\% & 0.529 & 0.536 & 0.177 & 0.476 & 0.510 & 0.491 \\ 200 & K-1 & 50\% and 50\% & 0.255 & 0.254 & 0.065 & 0.217 & 0.228 & 0.197 \\ 200 & K-1 & 40\% and 55\% & 0.257 & 0.257 & 0.053 & 0.207 & 0.265 & 0.223 \\ 200 & K-1 & 27\% and 55\% & 0.286 & 0.281 & 0.062 & 0.235 & 0.236 & 0.257 \\ \hline 300 & K-1 & 25\% and 25\% & 0.757 & 0.763 & 0.246 & 0.709 & 0.710 & 0.707 \\ 300 & K-1 & 50\% and 50\% & 0.409 & 0.409 & 0.045 & 0.344 & 0.325 & 0.295 \\ 300 & K-1 & 40\% and 55\% & 0.401 & 0.398 & 0.041 & 0.318 & 0.372 & 0.320 \\ 300 & K-1 & 27\% and 55\% & 0.427 & 0.418 & 0.043 & 0.345 & 0.335 & 0.356 \\ \hline 400 & K-1 & 25\% and 25\% & 0.899 & 0.905 & 0.372 & 0.882 & 0.852 & 0.832 \\ 400 & K-1 & 50\% and 50\% & 0.523 & 0.520 & 0.065 & 0.447 & 0.429 & 0.419 \\ 400 & K-1 & 40\% and 55\% & 0.534 & 0.526 & 0.057 & 0.464 & 0.479 & 0.401 \\ 400 & K-1 & 27\% and 55\% & 0.576 & 0.561 & 0.066 & 0.474 & 0.432 & 0.462 \\ \hline 100 & K-2 & 25\% and 25\% & 0.428 & 0.427 & 0.053 & 0.364 & 0.283 & 0.264 \\ 100 & K-2 & 50\% and 50\% & 0.215 & 0.214 & 0.105 & 0.188 & 0.223 & 0.182 \\ 100 & K-2 & 40\% and 55\% & 0.251 & 0.255 & 0.093 & 0.220 & 0.213 & 0.181 \\ 100 & K-2 & 27\% and 55\% & 0.297 & 0.285 & 0.080 & 0.246 & 0.217 & 0.204 \\ \hline 200 & K-2 & 25\% and 25\% & 0.796 & 0.798 & 0.047 & 0.737 & 0.545 & 0.497 \\ 200 & K-2 & 50\% and 50\% & 0.518 & 0.516 & 0.159 & 0.464 & 0.388 & 0.379 \\ 200 & K-2 & 40\% and 55\% & 0.526 & 0.525 & 0.151 & 0.471 & 0.397 & 0.358 \\ 200 & K-2 & 27\% and 55\% & 0.536 & 0.521 & 0.119 & 0.460 & 0.398 & 0.381 \\ \hline 300 & K-2 & 25\% and 25\% & 0.948 & 0.947 & 0.044 & 0.917 & 0.749 & 0.695 \\ 300 & K-2 & 50\% and 50\% & 0.736 & 0.730 & 0.207 & 0.675 & 0.571 & 0.518 \\ 300 & K-2 & 40\% and 55\% & 0.755 & 0.748 & 0.171 & 0.683 & 0.576 & 0.524 \\ 300 & K-2 & 27\% and 55\% & 0.759 & 0.737 & 0.131 & 0.679 & 0.561 & 0.562 \\ \hline 400 & K-2 & 25\% and 25\% & 0.986 & 0.986 & 0.054 & 0.975 & 0.894 & 0.860 \\ 400 & K-2 & 50\% and 50\% & 0.857 & 0.858 & 0.221 & 0.821 & 0.712 & 0.689 \\ 400 & K-2 & 40\% and 55\% & 0.857 & 0.850 & 0.194 & 0.819 & 0.721 & 0.685 \\ 400 & K-2 & 27\% and 55\% & 0.857 & 0.835 & 0.143 & 0.797 & 0.702 & 0.671 \\ \hline 100 & K-3 & 25\% and 25\% & 0.897 & 0.899 & 0.055 & 0.844 & 0.603 & 0.550 \\ 100 & K-3 & 50\% and 50\% & 0.574 & 0.580 & 0.180 & 0.525 & 0.454 & 
0.427 \\ 100 & K-3 & 40\% and 55\% & 0.546 & 0.536 & 0.221 & 0.508 & 0.512 & 0.432 \\ 100 & K-3 & 27\% and 55\% & 0.589 & 0.586 & 0.207 & 0.552 & 0.477 & 0.481 \\ \hline 200 & K-3 & 25\% and 25\% & 0.997 & 0.997 & 0.051 & 0.992 & 0.953 & 0.940 \\ 200 & K-3 & 50\% and 50\% & 0.920 & 0.921 & 0.300 & 0.901 & 0.809 & 0.779 \\ 200 & K-3 & 40\% and 55\% & 0.899 & 0.898 & 0.298 & 0.876 & 0.852 & 0.796 \\ 200 & K-3 & 27\% and 55\% & 0.922 & 0.921 & 0.234 & 0.896 & 0.823 & 0.837 \\ \hline 300 & K-3 & 25\% and 25\% & 1.000 & 1.000 & 0.046 & 1.000 & 0.998 & 0.996 \\ 300 & K-3 & 50\% and 50\% & 0.990 & 0.988 & 0.378 & 0.983 & 0.965 & 0.936 \\ 300 & K-3 & 40\% and 55\% & 0.982 & 0.982 & 0.391 & 0.975 & 0.973 & 0.946 \\ 300 & K-3 & 27\% and 55\% & 0.990 & 0.990 & 0.280 & 0.985 & 0.963 & 0.966 \\ \hline 400 & K-3 & 25\% and 25\% & 1.000 & 1.000 & 0.050 & 1.000 & 1.000 & 1.000 \\ 400 & K-3 & 50\% and 50\% & 0.997 & 0.996 & 0.500 & 0.996 & 0.992 & 0.995 \\ 400 & K-3 & 40\% and 55\% & 1.000 & 1.000 & 0.478 & 0.999 & 0.996 & 0.993 \\ 400 & K-3 & 27\% and 55\% & 1.000 & 1.000 & 0.335 & 0.997 & 0.995 & 0.996 \\ \hline 100 & L & 25\% and 25\% & 0.565 & 0.562 & 0.675 & 0.630 & 0.701 & 0.674 \\ 100 & L & 50\% and 50\% & 0.410 & 0.406 & 0.514 & 0.472 & 0.509 & 0.512 \\ 100 & L & 40\% and 55\% & 0.386 & 0.382 & 0.510 & 0.463 & 0.577 & 0.530 \\ 100 & L & 27\% and 55\% & 0.448 & 0.440 & 0.574 & 0.532 & 0.517 & 0.565 \\ \hline 200 & L & 25\% and 25\% & 0.887 & 0.885 & 0.948 & 0.929 & 0.940 & 0.941 \\ 200 & L & 50\% and 50\% & 0.723 & 0.718 & 0.821 & 0.781 & 0.807 & 0.799 \\ 200 & L & 40\% and 55\% & 0.727 & 0.720 & 0.837 & 0.805 & 0.857 & 0.807 \\ 200 & L & 27\% and 55\% & 0.810 & 0.804 & 0.891 & 0.860 & 0.817 & 0.863 \\ \hline 300 & L & 25\% and 25\% & 0.982 & 0.983 & 0.992 & 0.988 & 0.991 & 0.992 \\ 300 & L & 50\% and 50\% & 0.891 & 0.888 & 0.952 & 0.934 & 0.922 & 0.928 \\ 300 & L & 40\% and 55\% & 0.890 & 0.885 & 0.949 & 0.937 & 0.967 & 0.937 \\ 300 & L & 27\% and 55\% & 0.936 & 0.937 & 0.973 & 0.963 & 0.933 & 0.969 \\ \hline 400 & L & 25\% and 25\% & 0.999 & 0.999 & 1.000 & 0.999 & 0.997 & 0.997 \\ 400 & L & 50\% and 50\% & 0.960 & 0.960 & 0.984 & 0.983 & 0.974 & 0.983 \\ 400 & L & 40\% and 55\% & 0.961 & 0.959 & 0.990 & 0.985 & 0.992 & 0.979 \\ 400 & L & 27\% and 55\% & 0.982 & 0.982 & 0.994 & 0.993 & 0.988 & 0.994 \\ \hline 100 & M & 25\% and 25\% & 0.316 & 0.317 & 0.474 & 0.417 & 0.403 & 0.417 \\ 100 & M & 50\% and 50\% & 0.221 & 0.218 & 0.367 & 0.302 & 0.297 & 0.303 \\ 100 & M & 40\% and 55\% & 0.217 & 0.220 & 0.372 & 0.306 & 0.341 & 0.309 \\ 100 & M & 27\% and 55\% & 0.231 & 0.223 & 0.384 & 0.321 & 0.344 & 0.339 \\ \hline 200 & M & 25\% and 25\% & 0.529 & 0.537 & 0.752 & 0.686 & 0.700 & 0.705 \\ 200 & M & 50\% and 50\% & 0.364 & 0.364 & 0.624 & 0.528 & 0.529 & 0.537 \\ 200 & M & 40\% and 55\% & 0.397 & 0.399 & 0.614 & 0.543 & 0.593 & 0.571 \\ 200 & M & 27\% and 55\% & 0.416 & 0.418 & 0.651 & 0.564 & 0.548 & 0.609 \\ \hline 300 & M & 25\% and 25\% & 0.742 & 0.749 & 0.907 & 0.864 & 0.870 & 0.874 \\ 300 & M & 50\% and 50\% & 0.523 & 0.529 & 0.775 & 0.704 & 0.721 & 0.697 \\ 300 & M & 40\% and 55\% & 0.568 & 0.566 & 0.793 & 0.726 & 0.767 & 0.751 \\ 300 & M & 27\% and 55\% & 0.618 & 0.614 & 0.812 & 0.763 & 0.732 & 0.775 \\ \hline 400 & M & 25\% and 25\% & 0.880 & 0.882 & 0.965 & 0.946 & 0.948 & 0.958 \\ 400 & M & 50\% and 50\% & 0.669 & 0.676 & 0.875 & 0.815 & 0.840 & 0.828 \\ 400 & M & 40\% and 55\% & 0.727 & 0.724 & 0.900 & 0.859 & 0.875 & 0.854 \\ 400 & M & 27\% and 55\% & 0.757 & 0.757 & 0.916 & 0.875 & 0.841 & 
0.874 \\ \hline 100 & N & 25\% and 25\% & 0.510 & 0.505 & 0.608 & 0.569 & 0.567 & 0.566 \\ 100 & N & 50\% and 50\% & 0.413 & 0.409 & 0.484 & 0.451 & 0.432 & 0.455 \\ 100 & N & 40\% and 55\% & 0.318 & 0.319 & 0.418 & 0.378 & 0.401 & 0.367 \\ 100 & N & 27\% and 55\% & 0.291 & 0.287 & 0.431 & 0.385 & 0.391 & 0.402 \\ \hline 200 & N & 25\% and 25\% & 0.783 & 0.782 & 0.871 & 0.838 & 0.846 & 0.846 \\ 200 & N & 50\% and 50\% & 0.679 & 0.673 & 0.757 & 0.729 & 0.717 & 0.730 \\ 200 & N & 40\% and 55\% & 0.489 & 0.481 & 0.654 & 0.613 & 0.659 & 0.618 \\ 200 & N & 27\% and 55\% & 0.438 & 0.432 & 0.678 & 0.616 & 0.612 & 0.653 \\ \hline 300 & N & 25\% and 25\% & 0.925 & 0.923 & 0.971 & 0.951 & 0.948 & 0.955 \\ 300 & N & 50\% and 50\% & 0.846 & 0.843 & 0.902 & 0.869 & 0.879 & 0.877 \\ 300 & N & 40\% and 55\% & 0.664 & 0.653 & 0.831 & 0.790 & 0.807 & 0.827 \\ 300 & N & 27\% and 55\% & 0.612 & 0.590 & 0.848 & 0.802 & 0.790 & 0.826 \\ \hline 400 & N & 25\% and 25\% & 0.977 & 0.977 & 0.994 & 0.991 & 0.985 & 0.992 \\ 400 & N & 50\% and 50\% & 0.932 & 0.932 & 0.969 & 0.954 & 0.957 & 0.950 \\ 400 & N & 40\% and 55\% & 0.792 & 0.771 & 0.924 & 0.887 & 0.909 & 0.897 \\ 400 & N & 27\% and 55\% & 0.731 & 0.700 & 0.926 & 0.903 & 0.897 & 0.913 \\ \hline 100 & O & 25\% and 25\% & 0.975 & 0.975 & 0.990 & 0.987 & 0.986 & 0.991 \\ 100 & O & 50\% and 50\% & 0.903 & 0.899 & 0.943 & 0.928 & 0.937 & 0.931 \\ 100 & O & 40\% and 55\% & 0.793 & 0.781 & 0.904 & 0.884 & 0.905 & 0.901 \\ 100 & O & 27\% and 55\% & 0.769 & 0.759 & 0.914 & 0.893 & 0.902 & 0.918 \\ \hline 200 & O & 25\% and 25\% & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 200 & O & 50\% and 50\% & 0.993 & 0.992 & 0.998 & 0.998 & 1.000 & 1.000 \\ 200 & O & 40\% and 55\% & 0.975 & 0.966 & 0.999 & 0.998 & 0.996 & 0.996 \\ 200 & O & 27\% and 55\% & 0.966 & 0.946 & 1.000 & 0.997 & 0.994 & 0.998 \\ \hline 300 & O & 25\% and 25\% & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 300 & O & 50\% and 50\% & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 300 & O & 40\% and 55\% & 0.996 & 0.995 & 1.000 & 0.999 & 1.000 & 0.999 \\ 300 & O & 27\% and 55\% & 0.989 & 0.981 & 1.000 & 1.000 & 1.000 & 1.000 \\ \hline 400 & O & 25\% and 25\% & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & O & 50\% and 50\% & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & O & 40\% and 55\% & 1.000 & 0.998 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & O & 27\% and 55\% & 0.997 & 0.994 & 1.000 & 1.000 & 1.000 & 1.000 \\ \hline 100 & P & 25\% and 25\% & 0.743 & 0.740 & 0.784 & 0.779 & 0.752 & 0.743 \\ 100 & P & 50\% and 50\% & 0.699 & 0.696 & 0.755 & 0.735 & 0.690 & 0.705 \\ 100 & P & 40\% and 55\% & 0.572 & 0.560 & 0.695 & 0.663 & 0.662 & 0.661 \\ 100 & P & 27\% and 55\% & 0.493 & 0.485 & 0.689 & 0.648 & 0.679 & 0.643 \\ \hline 200 & P & 25\% and 25\% & 0.949 & 0.949 & 0.964 & 0.962 & 0.970 & 0.961 \\ 200 & P & 50\% and 50\% & 0.924 & 0.923 & 0.949 & 0.940 & 0.952 & 0.945 \\ 200 & P & 40\% and 55\% & 0.860 & 0.857 & 0.928 & 0.909 & 0.917 & 0.916 \\ 200 & P & 27\% and 55\% & 0.736 & 0.724 & 0.920 & 0.893 & 0.926 & 0.913 \\ \hline 300 & P & 25\% and 25\% & 0.992 & 0.992 & 0.997 & 0.997 & 0.997 & 0.996 \\ 300 & P & 50\% and 50\% & 0.990 & 0.989 & 0.995 & 0.992 & 0.996 & 0.994 \\ 300 & P & 40\% and 55\% & 0.966 & 0.964 & 0.989 & 0.983 & 0.982 & 0.988 \\ 300 & P & 27\% and 55\% & 0.854 & 0.842 & 0.986 & 0.980 & 0.988 & 0.982 \\ \hline 400 & P & 25\% and 25\% & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 400 & P & 50\% and 50\% & 1.000 & 1.000 & 1.000 & 1.000 & 0.998 & 1.000 \\ 400 & P & 
40\% and 55\% & 0.993 & 0.991 & 0.999 & 0.998 & 1.000 & 0.999 \\ 400 & P & 27\% and 55\% & 0.900 & 0.878 & 0.999 & 0.996 & 0.998 & 0.998 \\ \hline 100 & Q & 25\% and 25\% & 0.167 & 0.164 & 0.213 & 0.187 & 0.216 & 0.215 \\ 100 & Q & 50\% and 50\% & 0.086 & 0.086 & 0.103 & 0.088 & 0.104 & 0.102 \\ 100 & Q & 40\% and 55\% & 0.116 & 0.119 & 0.135 & 0.138 & 0.157 & 0.139 \\ 100 & Q & 27\% and 55\% & 0.136 & 0.133 & 0.142 & 0.149 & 0.145 & 0.153 \\ \hline 200 & Q & 25\% and 25\% & 0.281 & 0.281 & 0.330 & 0.303 & 0.380 & 0.379 \\ 200 & Q & 50\% and 50\% & 0.110 & 0.107 & 0.146 & 0.132 & 0.151 & 0.153 \\ 200 & Q & 40\% and 55\% & 0.162 & 0.160 & 0.186 & 0.190 & 0.249 & 0.220 \\ 200 & Q & 27\% and 55\% & 0.184 & 0.178 & 0.199 & 0.212 & 0.220 & 0.229 \\ \hline 300 & Q & 25\% and 25\% & 0.441 & 0.439 & 0.468 & 0.464 & 0.529 & 0.537 \\ 300 & Q & 50\% and 50\% & 0.164 & 0.162 & 0.201 & 0.182 & 0.200 & 0.209 \\ 300 & Q & 40\% and 55\% & 0.239 & 0.235 & 0.254 & 0.255 & 0.318 & 0.307 \\ 300 & Q & 27\% and 55\% & 0.261 & 0.251 & 0.271 & 0.290 & 0.295 & 0.334 \\ \hline 400 & Q & 25\% and 25\% & 0.552 & 0.552 & 0.585 & 0.589 & 0.679 & 0.685 \\ 400 & Q & 50\% and 50\% & 0.221 & 0.220 & 0.239 & 0.228 & 0.269 & 0.249 \\ 400 & Q & 40\% and 55\% & 0.307 & 0.305 & 0.305 & 0.336 & 0.408 & 0.384 \\ 400 & Q & 27\% and 55\% & 0.346 & 0.339 & 0.344 & 0.373 & 0.358 & 0.417 \\ \hline \end{longtable} \end{document}
\begin{document}
\title{The Wild Bootstrap for Multivariate Nelson-Aalen Estimators}
\author{\hspace{-1.2cm}\footnote{Authors with equal contribution, e-mail: [email protected], [email protected], [email protected], [email protected]} \ Tobias Bluhmki, ${}^*$ Dennis Dobler, Jan Beyersmann, Markus Pauly \\ Ulm University, Institute of Statistics, \\ Helmholtzstrasse 20, 89081 Ulm, \\ Germany}
\maketitle
\begin{abstract} We rigorously extend the widely used wild bootstrap resampling technique to the multivariate Nelson-Aalen estimator under Aalen's multiplicative intensity model. Aalen's model covers general Markovian multistate models including competing risks subject to independent left-truncation and right-censoring. {\color{black} This leads to various statistical applications such as asymptotically valid confidence bands or tests for equivalence and proportional hazards. This is exemplified in a data analysis examining the impact of ventilation on the duration of intensive care unit stay. The finite sample properties of the new procedures are investigated in a simulation study.} \end{abstract}
Keywords: conditional central limit theorem, counting process, equivalence test, proportional hazards, Kolmogorov-Smirnov test, survival analysis, weak convergence.
\section{Introduction}
One of the most crucial quantities within the analysis of time-to-event data with independently right-censored and left-truncated survival times is the cumulative hazard function, also known as transition intensity. Most commonly, it is nonparametrically estimated by the well-known \emph{Nelson-Aalen estimator} \citep[Chapter IV]{abgk93}. In this context, time-simultaneous confidence bands are perhaps the best interpretative tool to account for related estimation uncertainties. The construction of confidence bands is typically based on the asymptotic behavior of the underlying stochastic processes; more precisely, the (properly standardized) Nelson-Aalen estimator asymptotically behaves like a Wiener process. Early approaches utilized this property to derive confidence bands for the cumulative hazard function; see e.g., \citet{bie87} or Section~IV.1.3 in \citet{abgk93}. However, \citet{dudek08} found that this approach, when applied to small samples, can result in considerable deviations from the intended nominal level. To improve small sample properties, \citet{efron79,efron81} suggested a computationally convenient and flexible resampling technique, called the \textit{bootstrap}, where the unknown non-Gaussian quantile is approximated via repeated generation of point estimates based on random samples of the original data. For a detailed discussion within the standard right-censored survival setup, see also \citet{akritas86}, \citet{lo86}, and \citet{horvath87}. The simulation study of \citet{dudek08} in particular reports improvements of bootstrap-based confidence bands for the hazard function as compared to those using asymptotic quantiles. An alternative is the so-called \textit{wild bootstrap}, first proposed in the context of regression analyses \citep{wu86}.
As done in \citet{lin93}, the basic idea is to replace the (standardized) residuals with independent standardized variates -- so-called multipliers -- while keeping the data fixed. One advantage compared to Efron's bootstrap is its robustness against variance heteroscedasticity \citep{wu86}. Using standard normal {\color{black}multipliers}, this resampling procedure has been applied to construct time-simultaneous confidence bands for survival curves under the Cox proportional hazards model \citep{lin1994} and adapted to cumulative incidence functions in the more general competing risks setting \citep{lin97}. The latter approach has recently been extended to general wild bootstrap multipliers with mean zero and variance one \citep{beyersmann12b}, which may yield improved small sample performance. This result was confirmed in \citet{dobler14} as well as \citet{dobler15a}, where more general resampling schemes are discussed.

In contrast to probability estimation, the present article focuses on the nonparametric estimation of cumulative hazard functions and proposes a general and flexible wild bootstrap resampling technique, which is valid for a large class of time-to-event models. In particular, the procedure is not limited to the standard survival or competing risks framework. The key assumption is that the involved counting processes satisfy the so-called \textit{multiplicative intensity model} \citep{abgk93}. Consequently, arbitrary Markovian multistate models with finite state space are covered, as well as various other intensity models \citep[e.g., excess or relative mortality models, cf.][]{andersen1989} and specific semi-Markov situations \citep[][Example X.1.7]{abgk93}. Independent right-censoring and left-truncation can straightforwardly be incorporated. The main aim of this article is to mathematically justify the wild bootstrap technique for the multivariate Nelson-Aalen estimator in this general framework. This is accomplished by generalizing core arguments in \citet{beyersmann12b} and \citet{dobler14} and verifying conditional tightness via a modified version of Theorem~15.6 in \citet{billingsley68}; see p. 356 in \citet{jacod03}. Compared to the standard survival or competing risks setting, with at most one transition per individual, the major difficulty is to account for counting processes having an arbitrarily large random number of jumps. As \citet{beyersmann12b} suggested in the competing risks setting, we also allow for more general multipliers with expectation 0 and variance 1 and extend the resulting weak convergence theorems to resample the multivariate Nelson-Aalen estimator in our general setting. For practical applications, this result allows, for instance, within- or two-sample comparisons and the formulation of statistical tests.

The wild bootstrap is exemplified by statistically assessing the impact of mechanical ventilation in the intensive care unit (ICU) on the length of stay. A related problem is to investigate ventilation-free days, which were established as an efficacy measure in patients subject to acute respiratory failure \citep{schoenfeld2002}. However, statistical evaluation of this often-used methodology (see e.g., \citealp{sauaia2009} and \citealp{stewart2009}) relies on the constant hazards assumption. Other publications like \citet{dewit2008}, \citet{trof2012}, or \citet{curley2015} used a Kaplan-Meier-type procedure that does not account for the more complex multistate structure.
In contrast, we propose an illness-death model with recovery that methodologically works under the more general time-inhomogeneous Markov assumption and captures both the time-dependent structure of mechanical ventilation and the competing endpoint `death in ICU'. The remainder of this article is organized as follows: Section~\ref{sec:model} introduces cumulative hazard functions and their Nelson-Aalen-type estimators using counting process formulations. After summarizing its asymptotic properties, Section~\ref{sec:main} offers our main theorem on conditional weak convergence for the wild bootstrap. This allows for various statistical applications in Section~\ref{sec:stat_app}: Two-sided hypothesis tests and various sorts of time-simultaneous confidence bands are deduced, as well as simultaneous confidence intervals for a finite set of time points. {\color{black} Furthermore, tests for equivalence, inferiority and superiority as well as for proportionality of two hazard functions constitute useful criteria in practical data analyses. } A simulation study assessing small and large sample performances of both the derived confidence bands in comparison to the algebraic approach based on the time-transformed Brownian motion {\color{black}and the tests for proportional hazards} is reported in Section~\ref{sec:simus}. The SIR-3 data on patients in ICU (\citealp{beyersmann2006} and \citealp{wolkewitz2008}) serves as its template and is practically revisited in Section~\ref{sec:dataex}. Concluding remarks and a discussion are given in Section~\ref{sec:discussion}. All proofs are deferred to the Appendix. \section{Non-Parametric Estimation under the Multiplicative Intensity Structure} \label{sec:model} Throughout, we adopt the notation of \cite{abgk93}. For ${k\in \mathbb{N}}$, let $\textbf{N}=\left(N_1,\ldots,N_k\right)'$ be a multivariate counting process which is adapted to a filtration $(\mathcal F_t)_{t \geq 0}$. Each entry $N_j, j=1,\dots,k,$ is supposed to be a c\`adl\`ag function, zero at time zero, and to have piecewise constant paths with jumps of size one. In addition, assume that no two components jump at the same time and that each $N_{j}(t)$ satisfies the \textit{multiplicative intensity model} of \cite{aalen1978} with intensity process given by $\lambda_j(t) = \alpha_j(t) Y_j(t)$. Here, $Y_j(t)$ defines a predictable process not depending on unknown parameters and $\alpha_j$ describes a non-negative (hazard) function. For well-definiteness, the observation of $\textbf{N}$ is restricted to the interval $[0,\tau]$, where $\tau< \tau_j = {\sup}\big\{ u \geq 0 :\int_{(0,u]}\alpha_j(s)ds<\infty\big\} \ \text{for all } j = 1, \dots, k.$ The multiplicative intensity structure covers several customary frameworks in the context of time-to-event analysis. The following overview specifies frequently used models. \begin{example}\label{ex:cov_models} (a) Markovian multistate models with finite state space $\mathcal S$ are very popular in biostatistics. In this setting, $Y_\ell(t)$ represents the total number of individuals in state $\ell$ just prior to $t$ (`number at risk'), whereas $\alpha_{\ell m}(t)$ is the instantaneous risk (`transition intensity') to switch from state $\ell$ to $m$, where $\ell, m \in \mathcal S$, $\ell\ne m$. Here $N_\ell = \sum_{i=1}^n N_{\ell;i}$ is the aggregation over individual-specific counting processes with $n \in \mathbb N$ individuals under study. 
For specific examples (such as competing risks or the illness-death model) and details including the incorporation of independent left-truncation and right-censoring, see \cite{abgk93} and \cite{aalen08}. \\ (b) Other examples are the relative or excess mortality model, where not all individuals necessarily share the same hazard rate $\alpha$. In this case $Y$ cannot be interpreted as the total number of individuals at risk as in part (a); see Example IV.1.11 in \cite{abgk93} for details. \\ (c) The time-inhomogeneous Markov assumption required in part (a) can even be relaxed in specific situations: Following Example X.1.7 in \cite{abgk93}, consider an illness-death model without recovery. {\color{black} Assuming that the transition intensity $\alpha_{12}$ depends on the duration $d$ in the intermediate state, but not on time $t$, leads to semi-Markov process not satisfying the multiplicative intensity structure. This is because the intensity process of $N_{12}(t)$ is given by $\alpha_{12}(t-T)Y_1(t)$, where the first factor of the product is not deterministic anymore. Here, $T$ is the random transition time into state 1. However, when $d=t-T$ is used as the basic timescale, the counting process $K(d)=N_{12}(d+T)$ has intensity $\alpha_{12}(d)Y_1(d)$ with respect to the filtration \begin{align*} \mathcal F_d=\left(\sigma\lbrace (N_{01}(t), N_{02}(t)): 0<t<\tau\rbrace \lor \sigma\lbrace K(d): 0<d<\infty\rbrace \right). \end{align*} Thus, the multiplicative intensity structure is fulfilled. } \end{example} Under the above assumptions, the Doob-Meyer decomposition applied to $N_j$ leads to \begin{eqnarray} dN_{j}(s)= \lambda_{j}(s)ds+dM_{j}(s),\label{eq:doobmeyer} \end{eqnarray} where the $M_{j}$ are zero-mean martingales with respect to $(\mathcal F_t)_{t \in [0,\tau]}$. The canonical nonparametric estimator of the cumulative hazard function $A_j(t)=\int_{(0,t]}\alpha_j(s)ds$ is given by the so-called \textit{Nelson-Aalen estimator} \begin{eqnarray*} \hat A_{jn}(t)=\int\limits_{(0,t]}\frac{J_j(s)}{Y_j(s)}dN_j(s). \label{eq:NA} \end{eqnarray*} Here, $J_j(t)=\mathbf 1\{Y_j(t)>0\}$, $\frac00 := 0$, and $n \in \mathbb N$ is a sample size-related number (that goes to infinity in asymptotic considerations). Its multivariate counterpart is introduced by $\hat{\textbf{{A}}}_n:=(\hat A_{1n},\ldots,\hat A_{kn})'$. As in \cite{abgk93}, suppose that there exist deterministic functions $y_j$ with $\inf_{u \in [0,\tau]} y_j(u)>0$ such that \begin{eqnarray} \underset{s\in[0,\tau]}{\sup}\left|\frac{Y_j(s)}{n}-y_j(s)\right|\xrightarrow[]{\text{ }P\text{ }}0 \quad \text{for all } j=1,\dots, k , \label{Pre.ass11} \end{eqnarray} where `$\stackrel{P}{\rightarrow}$' denotes convergence in probability for $n\rightarrow\infty$. For each $j$, define the normalized Nelson-Aalen process $W_{jn}:=\sqrt{n} ( \hat A_{jn}-A_j )$ possessing the asymptotic martingale representation \begin{eqnarray} W_{jn}(t)& \doteqdot &\sqrt{n}\int\limits_{(0,t]}\frac{J_j(s)}{Y_j(s)}dM_j(s) \label{NA_uni_martingale} \end{eqnarray} with $M_{j}$ given by (\ref{eq:doobmeyer}). Here, `$\doteqdot$' means that the difference of both sides converges to zero in probability. Define the vectorial aggregation of all $W_{jn}$ as $\textbf{\textit{W}}_n = (W_{1n}, \dots, W_{kn})'$ and let `$\stackrel{d}{\rightarrow}$' denote convergence in distribution for $n\rightarrow\infty$. 
Then, Theorem IV.1.2 in \cite{abgk93} in combination with (\ref{Pre.ass11}) provides a weak convergence result on the $k$-dimensional space $\mathfrak D[0,\tau]^k$ of c\`{a}dl\`{a}g functions endowed with the product Skorohod topology. \begin{theorem}\label{Th.Na.uni} If assumption (\ref{Pre.ass11}) holds, we have convergence in distribution \begin{eqnarray} \textbf{W}_{n} \stackrel{d}{\longrightarrow} \textbf{U} = (U_1, \dots, U_k)', \end{eqnarray} on $\mathfrak D[0,\tau]^k$, where $U_1, \dots, U_k$ are independent zero-mean Gaussian martingales with covariance functions $\psi_j(s_1,s_2):=Cov(U_j(s_1),U_j(s_2))={\int}_{(0,s_1]}\frac{\alpha_j(s)}{y_j(s)}ds$ for $j = 1, \dots, k$ and $0\le s_1\le s_2\le \tau$. \end{theorem} The covariance function $\psi_j$ is commonly approximated by the \textit{Aalen-type} \begin{eqnarray} \hat\sigma_j^2(s_1) = n \int\limits_{(0,s_1]} \frac{J_j(s)}{Y_j^2(s)} d N_j(s) \label{eq:varaalen}\end{eqnarray} or the \textit{Greenwood-type} estimator \begin{eqnarray} \hat\sigma_j^2(s_1) = n \int\limits_{(0,s_1]} \frac{J_j(s) (Y_j(s)-\Delta N_j(s))}{Y_j^3(s)} d N_j(s), \label{eq:vargreenwood}\end{eqnarray} both of which are consistent for $\psi_j(s_1,s_2)$ under the assumption of Theorem~\ref{Th.Na.uni}; cf. (4.1.6) and (4.1.7) in \cite{abgk93}. Here, $\Delta N_j(s)$ denotes the jump size of $N_j$ at time $s$.
\section{Inference via Brownian Bridges and the Wild Bootstrap} \label{sec:main} As discussed in \citet{abgk93}, the limit process $\textbf{\textit{U}}$ can analytically be approximated via Brownian bridges. However, improved coverage probabilities in the simulation study in Section \ref{sec:simus} suggest that the proposed wild bootstrap approach may be preferable. First, we sum up the classic result.
\subsection{Inference via Transformed Brownian Bridges} \label{sec:brownian_bridge} The asymptotic mutual independence stated in Theorem~\ref{Th.Na.uni} allows us to focus on a single component of $\textbf{\textit{W}}_n$, say $W_{1n} = \sqrt{n} ( \hat A_{1n} - A_1 )$. For notational convenience, we suppress the subscript $1$. Let $g$ be a positive (weight) function on an interval $[t_1,t_2]\subset [0,\tau]$ of interest and $B^0$ a standard Brownian bridge process. Then, as $n \rightarrow \infty$, it is established in Section~IV.1 in \cite{abgk93} that \begin{eqnarray} \sup_{s \in [t_1,t_2]} \Big| \frac{\sqrt n ( \hat A_n(s) - A(s))}{1 + \hat\sigma^2(s)} g \Big(\frac{\hat\sigma^2(s)}{1 + \hat\sigma^2(s)} \Big) \Big| \stackrel{d}{\longrightarrow} \sup_{s \in [\phi(t_1),\phi(t_2)]} | g(s) B^0(s) |. \label{eq:asymptdist} \end{eqnarray} Here $\phi(t) = \frac{\sigma^2(t)}{1 + \sigma^2(t)}$, $\sigma^2(t) = \psi(t,t)$ and $\hat\sigma^2(t)$ is a consistent estimator for $\sigma^2(t)$, such as~\eqref{eq:varaalen} or~\eqref{eq:vargreenwood}. Quantiles of the right-hand side of \eqref{eq:asymptdist} for $g\equiv 1$ are recorded in tables \citep[e.g.,][]{koziol1975,hall1980,schumacher1984}. For general $g$, they can be approximated via standard statistical software. Even though relation \eqref{eq:asymptdist} enables statistical inference based on the asymptotics of a central limit theorem, appropriate resampling procedures have usually shown improved properties; see e.g., \cite{hall91}, \cite{good05} and \cite{pauly15}.
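To make the quantities entering \eqref{eq:asymptdist} concrete, the following minimal numerical sketch (not part of the original development) computes the Nelson-Aalen estimate and the Aalen-type variance estimate in the simplest special case $k=1$ of right-censored survival data; the synthetic data, the use of Python/NumPy, and all variable names are purely illustrative choices.
\begin{verbatim}
# Minimal illustrative sketch (k = 1 survival special case): N(t) counts
# observed events and Y(t) is the number at risk just before t.
import numpy as np

rng = np.random.default_rng(1)
n = 200
event_time = rng.exponential(scale=10.0, size=n)    # latent event times
censor_time = rng.exponential(scale=15.0, size=n)   # independent censoring
time = np.minimum(event_time, censor_time)          # observed times
status = (event_time <= censor_time).astype(int)    # 1 = event observed

# distinct observed event times, in increasing order
t_jump = np.unique(time[status == 1])

# dN(s): number of events at s;  Y(s): number at risk just before s
dN = np.array([np.sum((time == s) & (status == 1)) for s in t_jump])
Y = np.array([np.sum(time >= s) for s in t_jump])

# Nelson-Aalen estimate  A_hat(t) = sum_{s <= t} dN(s) / Y(s)
A_hat = np.cumsum(dN / Y)

# Aalen-type variance estimate of Var(A_hat(t)), i.e. sigma^2(t) / n:
# sum_{s <= t} dN(s) / Y(s)^2
var_hat = np.cumsum(dN / Y**2)

for t, a, v in list(zip(t_jump, A_hat, var_hat))[:5]:
    print(f"t = {t:6.2f}   A_hat = {a:.4f}   se = {np.sqrt(v):.4f}")
\end{verbatim}
Quantiles of the limit $\sup |g B^0|$ in \eqref{eq:asymptdist} can then be approximated by simulating the Brownian bridge $B^0$ on a fine grid; a corresponding sketch of the wild bootstrap alternative is given in Section~\ref{sec:constrCB}.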
\subsection{Wild Bootstrap Resampling}\label{sec:wbresampling} In contrast to, for instance, a competing risks model, where each counting process $N_{j}$ is bounded by $n$, the number $N_{j}(\tau)$ is not necessarily bounded in our setup, which only assumes Aalen's multiplicative intensity model. Hence, a modification of the multiplier resampling scheme under competing risks suggested by \cite{lin97} and elaborated by \cite{beyersmann12b} is required. For this purpose, introduce counting process-specific stochastic processes indexed by $s \in [0,\tau]$ that are independent of $N_j, Y_j$ for all $j=1,\dots,k$. Let $(G_j(s))_{s \in [0,\tau]}, 1 \leq j \leq k,$ be independently and identically distributed (i.i.d.) white noise processes such that each $G_j(s)$ satisfies $\E(G_j(s)) = 0$ and $var(G_j(s))=1$, $j=1,\dots,k$, $s \in [0,\tau]$. {\color{black}That is, all $\ell$-dimensional marginals of $G_1$, $\ell \in \N$, shall be the same $\ell$-fold product-measure.} Then, a \emph{wild bootstrap version} of the normalized multivariate Nelson-Aalen estimator $\textbf{\textit{W}}_n$ is defined as \begin{eqnarray} \hat{\textbf{\textit{W}}}_{n}(t) & = & (\hat{W}_{1n}(t), \dots, \hat{W}_{kn}(t) )' \label{NA.uni.Wnh} \\ & := & \sqrt{n} \bigg( \underset{(0,t]}{\int}\frac{J_1(s)}{Y_1(s)} G_{1}(s) dN_{1}(s), \dots, \underset{(0,t]}{\int}\frac{J_k(s)}{Y_k(s)} G_{k}(s) dN_{k}(s) \bigg)'. \nonumber \end{eqnarray} In words, $\hat{\textbf{\textit{W}}}_{n}$ is obtained from representation \eqref{NA_uni_martingale} of $\textbf{\textit{W}}_n$ by substituting the unknown individual martingale processes $M_{j}$ with the \textit{observable} quantities $G_{j} N_{j}$. Even though only the values of each $G_j$ at the jump times of $N_j$ are relevant, this construction in terms of white noise processes makes it possible to consider the wild bootstrap process on a product probability space; see the Appendix for details. {\color{black} Consider for a moment the special case of a multistate model with $n$ i.i.d. individuals (Example~\ref{ex:cov_models}(a)). For instance, the competing risks model in \cite{lin97} involves at most one transition (and thus one multiplier) per individual, whereas \cite{glidden02} allows for arbitrarily many transitions but also introduces only one multiplier per individual. In contrast, our resampling approach is new in the sense that it involves independent weightings of all jumps, even within the same individual. Being able to resample the Nelson-Aalen estimator in this way, even for a random number of events per individual, is a real novelty, and this problem has not been treated theoretically before with any technique. Hence, utilizing white noise processes as done in~\eqref{NA.uni.Wnh} is a new aspect in this area. } The limit distribution of $\hat{\textbf{\textit{W}}}_n$ may be approximated by simulating a large number of replicates of the $G$'s, while the data is kept fixed. For a competing risks setting with standard normally distributed multipliers, our general scheme reduces to the one discussed in \cite{lin97}. For the remainder of the paper, we summarize the available data in the $\sigma$-algebra ${\color{black}\mathcal{C}_0} = \sigma \{N_{j}(u),$ $Y_{j}(u):j = 1,\dots,k,\ u\in[0,\tau]\}.$ {\color{black}A natural way to introduce a filtration based on $\mathcal C_0$ that progressively collects information on the white noise processes is by setting \begin{align*} \mathcal{C}_t = \mathcal C_0 \ \vee \ \sigma\{ G_j(s): j=1,\dots,k, \ s \in [0,t] \}.
\end{align*} The following lemma is a key argument in an innovative, martingale-based consistency proof of the proposed wild bootstrap technique. \begin{lemma} \label{lem:mart} For each $n \in \N$, the wild bootstrap version of the multivariate Nelson-Aalen estimator $(\hat{\textbf{\textit{W}}}_{n}(t))_{t \in [0,\tau]}$ is a square-integrable martingale with respect to the filtration $(\mathcal{C}_t)_{t \in [0,\tau]}$ with orthogonal components. Its predictable variation process is given by $$ \langle \hat{\textbf{\textit{W}}}_{n} \rangle : \ t \ \longmapsto \ n \bigg( \int_0^t \frac{J_1(s)}{Y_1^2(s)} d N_1(s), \dots, \int_0^t \frac{J_k(s)}{Y_k^2(s)} d N_k(s) \bigg)$$ and its optional variation process by $$ [ \hat{\textbf{\textit{W}}}_{n} ] : \ t \ \longmapsto \ n \bigg( \int_0^t \frac{J_1(s)}{Y_1^2(s)} G_1^2(s) d N_1(s), \dots, \int_0^t \frac{J_k(s)}{Y_k^2(s)} G_k^2(s) d N_k(s) \bigg) .$$ \end{lemma} } The following conditional weak convergence result justifies the approximation of the limit distribution of $\textbf{\textit{W}}_{n}$ via $\hat{\textbf{\textit{W}}}_n$ given {\color{black}$\mathcal{C}_0$}. Both the general framework, requiring only Aalen's multiplicative intensity structure, and the use of possibly non-normal multipliers are original to the present paper. \begin{theorem}\label{Th.Na.uni-What} Let $\textbf U$ be as in Theorem \ref{Th.Na.uni}. Assuming \eqref{Pre.ass11}, we have the following conditional convergence in distribution on $\mathfrak D[0,\tau]^k$ given {\color{black}$\mathcal{C}_0$ as $n \rightarrow \infty$}: \begin{eqnarray*} \hat{\textbf W}_{n}\stackrel{d}{\longrightarrow} \textbf U \quad \text{in probability.} \end{eqnarray*} \end{theorem} \begin{remark}\label{rm:wc} {\color{black} (a) It is due to the martingale property of the wild bootstrapped multivariate Nelson-Aalen estimator that we anticipate a good finite sample approximation of the unknown distribution of the Nelson-Aalen estimator. In particular, the wild bootstrap, realized by white noise processes as above, succeeds in imitating the martingale structure of the original Nelson-Aalen estimator. The predictable variation process of the wild bootstrap process equals the optional variation process of the centered Nelson-Aalen process. Hence, both processes share the same properties and approximately the same covariance structure. } \\ (b) In addition to the proof presented in the Appendix, a more elementary consistency proof is given in the Supplementary Material. This alternative proof transfers the core arguments of \cite{beyersmann12b} and \cite{dobler14} to the multivariate Nelson-Aalen estimator in a more general setting: First, we show convergence of all finite-dimensional conditional marginal distributions of $\hat {\textbf{\textit{W}}}_n$ towards $\textbf{\textit{U}}$, generalizing some findings of \cite{Pauly11a}. Second, we verify conditional tightness by applying a variant of Theorem~15.6 in \cite{billingsley68}; see \cite{jacod03}, p. 356. In both cases the subsequence principle for random elements converging in probability is combined with assumption (\ref{Pre.ass11}). \\ {\color{black} (c) Suppose that $E(n^k J_1(u) / Y_1^k(u)) = O(1)$ for some $k \in \N$ and all $u \in [0,\tau]$, which holds, for example, for any $k \in \N$ if $Y_1$ has a number at risk interpretation.
Since different increments of $\textbf{\textit{W}}_n$ (to arbitrary powers) are uncorrelated, it can be shown that the convergence in Theorem~\ref{Th.Na.uni} for a single $t \in [0,\tau]$ even holds in the Mallows metric $d_p$ for any even $0 < p \leq k$; see e.g. \cite{bickel81} for such theorems related to the classical bootstrap. Provided that the $r$th moment of $G_1(u)$ exists, similar arguments show that the convergence in probability in Theorem~\ref{Th.Na.uni-What} for a single $t \in [0,\tau]$ holds in the Mallows metric $d_p$ for any even $0 < p \leq r$ as well. This of course includes white noise processes with centered $Poi(1)$ or standard normal margins, as applied later on. } \end{remark} \section{Statistical Applications} \label{sec:stat_app} {\color{black} Throughout this section denote by $\alpha \in (0,1)$ the nominal level of all inference procedures. } \subsection{Confidence Bands} \label{sec:constrCB} After having established all required weak convergence results, we discuss different possibilities for realizing confidence bands for $A_j$ around the Nelson-Aalen estimator $\hat A_{jn}$, $j=1,\ldots,k,$ on an interval $[t_1, t_2] \subset [0,\tau]$ of interest. Later on, we propose a confidence band for differences of cumulative hazard functions. As in Section~\ref{sec:brownian_bridge}, we first focus on $A_1$ and suppress the index 1 for notational convenience. Following \cite{abgk93}, Section~IV.1, we consider weight functions \begin{eqnarray*} g_1(s) = (s(1-s))^{-1/2} \qquad \text{or} \qquad g_2 \equiv 1 \end{eqnarray*} as choices for $g$ in relation (\ref{eq:asymptdist}). The resulting confidence bands are commonly known as \emph{equal precision} and \emph{Hall-Wellner} bands, respectively. We apply a log-transformation in order to improve small sample level $\alpha$ control. Combining the previous sections' convergences with the functional delta-method and Slutsky's lemma yields \begin{theorem} \label{thm:delta_meth_conv} {\color{black} Under condition \eqref{Pre.ass11}, } for any $0 \leq t_1 \leq t_2 \leq \tau$ such that $A(t_1) > 0$, we have the following convergences in distribution on the c\`adl\`ag space $\mathfrak D[t_1,t_2]$: \begin{eqnarray} \label{eq:weak_conv_logA.1} \Big( \sqrt n \hat A_n \frac{ \log \hat A_n - \log A}{1 + \hat\sigma^2} \Big) \cdot g \circ \frac{\hat\sigma^2}{1 + \hat\sigma^2} & \stackrel{d}{\longrightarrow} &(g B^0) \circ \phi \quad \text{and}\\ \label{eq:weak_conv_logA.2} \Big( \frac{\hat W_n}{1 + \sigma^{*2}} \Big) \cdot g \circ \frac{\sigma^{*2}}{1 + \sigma^{*2}} & \stackrel{d}{\longrightarrow}& (g B^0) \circ \phi \end{eqnarray} conditionally given {\color{black}$\mathcal{C}_0$} in probability, with $\phi$ as in Section~\ref{sec:main} and the wild bootstrap variance estimator $\sigma^{*2}(t):=n\int_{(0,t]}J(s)Y^{-2}(s)G^2(s) \, dN(s)$. \end{theorem} In particular, $\sigma^{*2}$ is a uniformly consistent estimate for $\sigma^2$ \citep{dobler14} {\color{black}and, being the optional variation process of the wild bootstrap Nelson-Aalen process, it is a natural choice of a variance estimate}. For practical purposes, we adapt the approach of \citet{beyersmann12b} and estimate $\sigma^2$ based on the empirical variance of the wild bootstrap quantities $\hat{{W}}_{n}$. The continuity of the supremum functional translates \eqref{eq:weak_conv_logA.1} and~\eqref{eq:weak_conv_logA.2} into weak convergences for the corresponding suprema.
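In practice, the conditional distribution of these suprema is approximated by Monte Carlo: the multipliers are generated repeatedly while the data are kept fixed. The following minimal \texttt{R} sketch is purely illustrative; it is not the code used for the simulations in Section~\ref{sec:simus}, the function and variable names (\texttt{wb.replicates}, \texttt{jumps}, \texttt{Y.jump}) are ours, and a single transition with unit jumps of the counting process is assumed.
\begin{verbatim}
## Wild bootstrap replicates of W-hat_n, evaluated at the jump times of N;
## each row of the result is one replicate, each column one jump time.
## Assumed inputs: 'jumps'  = ordered jump times of N in the interval of interest,
##                 'Y.jump' = number at risk Y at these jump times,
##                 'n'      = sample size, 'B' = number of replicates.
wb.replicates <- function(jumps, Y.jump, n, B = 1000,
                          rG = rnorm) {   # or: function(m) rpois(m, 1) - 1
  t(replicate(B, sqrt(n) * cumsum(rG(length(jumps)) / Y.jump)))
}
## Supremum-type critical value of the unweighted band and the wild
## bootstrap variance estimate sigma*^2 for one multiplier vector G:
#  W.star  <- wb.replicates(jumps, Y.jump, n)
#  c.tilde <- quantile(apply(abs(W.star), 1, max), 0.95)
#  G       <- rnorm(length(jumps))
#  s.star  <- n * cumsum((G / Y.jump)^2)
\end{verbatim}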
Hence, the consistency of the following critical values is ensured: \begin{eqnarray*} c_{1-\alpha}^g & = & (1-\alpha) \text{ quantile of} \quad \mathfrak L \Big( \sup_{s \in [t_1,t_2]} | g(\hat \phi(s)) B^0(\hat \phi(s)) | \Big), \\ \tilde c_{1-\alpha}^g & = & (1-\alpha) \text{ quantile of} \quad \mathfrak L \Big( \sup_{s \in [t_1,t_2]} \Big| \frac{\hat W_n(s)}{1 + \sigma^{*2}(s)} g \Big(\frac{\sigma^{*2}(s)}{1 + \sigma^{*2}(s)} \Big) \Big| \ \Big| \ {\color{black}\mathcal{C}_0} \Big), \end{eqnarray*} where $\mathfrak L(\cdot)$ denotes the law of a random variable and $\alpha\in(0,1)$ the nominal level. Here, $g$ equals either $g_1$ or $g_2$ and $\hat\phi = \frac{\hat \sigma^2}{1+\hat \sigma^2}$. Note, that $\tilde c_{1-\alpha}^g$ is, in fact, a random variable. The results are back-transformed into four confidence bands for $A$ abbreviated with $HW$ and $EP$ for the Hall-Wellner and equal precision bands and $a$ and $w$ for bands based on quantiles of the asymptotic distribution and the wild bootstrap, respectively. In our simulation studies these bands are also compared with the linear confidence band $CB_{dir}^w$, which is based on the critical value \begin{eqnarray*} \quad \tilde c_{1-\alpha} & = & (1-\alpha) \text{ quantile of} \quad \mathfrak L \Big( \sup_{s \in [t_1,t_2]} \big| \hat W_n(s) \big| \Big| \ {\color{black}\mathcal{C}_0} \Big). \end{eqnarray*} \begin{corollary}\label{cor:CBs} Under the assumptions of Theorem \ref{thm:delta_meth_conv}, the following bands for the cumulative hazard function $(A(s))_{s \in [t_1,t_2]}$ provide an asymptotic coverage probability of $1-\alpha$: \begin{eqnarray} CB_{EP}^a & = & \Big[\hat A_n(s) \exp \Big( \mp \frac{c_{1-\alpha}^{g_1}}{\sqrt n \hat A_n(s)} \hat\sigma_n(s) \Big)\Big]_{s \in [t_1,t_2]} \nonumber \\ CB_{HW}^a & = &\Big[\hat A_n(s) \exp \Big( \mp \frac{c_{1-\alpha}^{g_2}}{\sqrt n \hat A_n(s)} (1+\hat\sigma_n^2(s)) \Big)\Big]_{s \in [t_1,t_2]} \nonumber\\ CB_{EP}^w & = &\Big[\hat A_n(s) \exp \Big( \mp \frac{\tilde c_{1-\alpha}^{g_1}}{\sqrt n \hat A_n(s)} \hat\sigma_n(s) \Big)\Big]_{s \in [t_1,t_2]} \label{eq:CBs} \\ CB_{HW}^w & = & \Big[\hat A_n(s) \exp \Big( \mp \frac{\tilde c_{1-\alpha}^{g_2}}{\sqrt n\hat A_n(s)} (1+\hat\sigma_n^2(s)) \Big)\Big]_{s \in [t_1,t_2]} \nonumber \\ CB_{dir}^w &= &\Big[\hat A_n(s) \mp \frac{\tilde c_{1-\alpha}}{\sqrt n}\Big]_{s \in [t_1,t_2]}. \nonumber \end{eqnarray} \end{corollary} \begin{remark}\label{rem:WB} \begin{enumerate} \item Note that the wild bootstrap quantile $\tilde c_{1-\alpha}$ does not require an estimate of $\phi$, thereby eliminating one possible cause of inaccuracy within the derivation of the other bands. However, the corresponding band $CB_{dir}^w$ has the disadvantage to possibly include negative values. \item The confidence bands are only well-defined if the left endpoint $t_1$ of the bands' time interval is larger than the first observed event. In particular, these bands yield unstable results for small values of $\hat A_n(t_1)$ due to the division in the exponential function; see \cite{lin1994} for a similar observation. \item The present approach directly allows the construction of confidence bands for within-sample comparisons of multiple $A_1, \dots, A_k$. 
For instance, a confidence band for the difference $A_1 - A_2$ may be obtained via quantiles based on the conditional convergence in distribution $\hat W_{1n} - \hat W_{2n} \stackrel{d}{\longrightarrow} U_1 - U_2 \sim Gauss( 0, \psi_1 + \psi_2)$ in probability by simply applying the continuous mapping theorem and taking advantage of the independence of $U_1$ and $U_2$; see \cite{whitt80} for the continuity of the difference functional. For that purpose, the distribution of \begin{eqnarray} D(t)=\sqrt{n} g(t)(\hat A_{1n}(t)-A_1(t)-(\hat A_{2n}(t)-A_2(t))), \end{eqnarray} with positive weight function $g$ can be approximated by the conditional distribution of $\hat D(t)= g(t) (\hat W_{1n}(t)-\hat W_{2n}(t))$. With $g\equiv 1$, an approximate $(1-\alpha)\cdot 100\%$ confidence band for the difference $A_1 - A_2$ of two cumulative hazard functions on $[t_1,t_2]$ is \begin{eqnarray} \left[\left(\hat A_1(s)-\hat A_2(s)\right)\pm \tilde q_{1-\alpha} / \sqrt{n}\right]_{s\in[t_1,t_2]}, \label{eq:diffCB} \end{eqnarray} where \begin{eqnarray*} \quad \tilde q_{1-\alpha} & = (1-\alpha) \text{ quantile of} \quad \mathfrak L \Big( \sup_{s \in [t_1,t_2]} \big|\hat W_{1n}(s)-\hat W_{2n}(s) \big| \Big| \ {\color{black}\mathcal{C}_0} \Big). \end{eqnarray*} Similar arguments additionally enable common two-sample comparisons. A practical data analysis using other weight functions $g$ in the context of cumulative incidence functions is given in \cite{hieke2013}. \end{enumerate} \end{remark} \begin{remark}[Construction of Confidence Intervals] \label{rem:cis} \begin{enumerate} \item In particular, Theorem~\ref{thm:delta_meth_conv} yields a convergence result on $\mathbb{R}^m$ for a finite set of time points $\{s_1, \dots, s_m \}\subset [0,\tau] , m \in \mathbb N$. Hence, using critical values $\tilde c_{1-\alpha}$ and $\tilde c_{1-\alpha}^g$ obtained from the law of the maximum $\max_{s_1, \dots, s_m}$ instead of the supremum, a variant of Corollary~\ref{cor:CBs} specifies simultaneous confidence intervals $I_1 \times \dots \times I_m$ for $(A(s_1), \dots, A(s_m))$ with asymptotic coverage probability $1-\alpha$. Since the error multiplicity is taken into account, the asymptotic coverage probability of a single such interval $I_j$ for $A(s_j)$ is greater than $1-\alpha$. \item Due to the asymptotic independence of the entries of the multivariate Nelson-Aalen estimator, a confidence region for the value of a multivariate cumulative hazard function $(A_1(t), \dots, A_k(t))$ at time $t \in [0,\tau]$ may be found using \v{S}id\'ak's correction: Letting $J_1, \dots, J_k$ be pointwise confidence intervals for $A_1(t),$ $\dots,$ $A_k(t)$ with asymptotic coverage probability $(1-\alpha)^{1/k}$, each found using the wild bootstrap principle, the coverage probability of $J_1 \times \dots \times J_k$ for $A_1(t) \times \dots \times A_k(t)$ clearly goes to $1 -\alpha$ as $n \rightarrow \infty$. \end{enumerate} \end{remark} \subsection{Hypothesis Tests for Equivalence{\color{black}, Inferiority, Superiority,} and Equality} Adapting the principle of confidence interval inclusion as discussed in \cite{wellek10}, Section~3.1, to time-simultaneous confidence bands, hypothesis tests for equivalence of cumulative hazard functions become readily available. 
To this end, let $\ell, u: [t_1,t_2] \rightarrow (0,\infty)$ be positive, continuous functions and denote by $(a_n(s),\infty)_{s \in [t_1,t_2]}$ and $[0,b_n(s))_{s \in [t_1,t_2]}$ the one-sided (half-open) analogues of any confidence band of the previous subsection with asymptotic coverage probability $1-\alpha$. Furthermore, let $A_0: [t_1,t_2] \rightarrow [0,\infty)$ be a pre-specified non-decreasing, continuous function for which equivalence to $A$ shall be tested. More precisely: \begin{eqnarray*} H : \{ A(s) \leq A_0(s) - \ell(s) \text{ or } A(s) \geq A_0(s) + u(s) \text{ for some } s \in [t_1,t_2] \} \\ \text{vs.} \quad K : \{ A_0(s) - \ell(s) < A(s) < A_0(s) + u(s) \text{ for all } s \in [t_1,t_2] \}. \end{eqnarray*} \begin{corollary} \label{cor:equivalence} Under the assumptions of Theorem \ref{thm:delta_meth_conv}, a hypothesis test $\psi_n$ of asymptotic level $\alpha$ for $H$ vs $K$ is given by the following decision rule: Reject $H$ if and only if the combined two-sided confidence band $(a_n(s),b_n(s))_{s \in [t_1,t_2]}$ is fully contained in the region spanned by $(A_0(s) - \ell(s), A_0(s) + u(s))_{s \in [t_1,t_2]} $. Further, it holds under $K$ that $\E(\psi_n) \rightarrow 1$ as $n\rightarrow \infty$, i.e., $\psi_n$ is consistent. \end{corollary} {\color{black}Similar arguments lead to analogue one-sided tests for the inferiority or superiority of the true cumulative hazard function to a prespecified function $A_0$.} Moreover, statistical tests for equality of two cumulative hazard functions can be constructed using the weak convergence results of Remark~\ref{rem:WB}(c): $$H_{=} : \{ A_1 \equiv A_2 \text{ on } [t_1,t_2] \} \quad \text{vs} \quad K_{\neq} : \{ A_1(s) \neq A_2(s) \text{ for some } s \in [t_1,t_2] \}.$$ Corollary~\ref{cor:ks_test} below yields an asymptotic level $\alpha$ test for $H_{=}$. \cite{bajorunaite07} and \cite{dobler14} used similar two-sided tests for comparing cumulative incidence functions in a two-sample problem. \begin{corollary}[A Kolmogorov-Smirnov-type test] \label{cor:ks_test} Under the assumptions of Theorem~\ref{thm:delta_meth_conv} and letting $g$ again be a positive weight function, $$\varphi^{KS}_n = \mathbf 1\{ \sup_{s \in [t_1,t_2]} \sqrt{n} g(s) | \hat A_{1n}(s) - \hat A_{2n}(s) | > \tilde q_{1-\alpha} \}$$ defines a consistent, asymptotic level $\alpha$ resampling test for $H_=$ vs. $K_{\neq}$. Here $\tilde q_{1-\alpha}$ is the $(1-\alpha)$-quantile of $\mathfrak L \big( \sup_{s \in [t_1,t_2]} \big|\hat D(s) \big| \big| \ {\color{black}\mathcal{C}_0} \big)$. \end{corollary} Similarly, Theorem~\ref{thm:delta_meth_conv} enables the construction of other tests, e.g., such of Cram\'er-von Mises-type. Furthermore, by taking the suprema over a discrete set $\{s_1,\dots, s_m\} \subset [0,\tau]$, the Kolmogorov-Smirnov test of Corollary~\ref{cor:ks_test} can also be used to test \begin{align*} & \tilde H_{=} : \{ A_1(s_j) = A_2(s_j) \text{ for all } 1\le j\leq m \} \\ & \quad \text{vs.} \quad \tilde K_{\neq} : \{ A_1(s_j) \neq A_2(s_j) \text{ for some }1\le j\leq m \} . \end{align*} Note that in a similar way, two-sample extensions of Corollaries \ref{cor:equivalence} and \ref{cor:ks_test} can be established following \cite{dobler14}. {\color{black} \subsection{Tests for Proportionality}\label{sec:proptest} A major assumption of the widely used \cite{cox72} regression model is the assumption of proportional hazards over time. Several authors have developed procedures for testing the null hypothesis of proportionality, see e.g. 
\cite{gill1987simple}, \cite{lin1991goodness}, \cite{grambsch1994proportional}, \cite{hess1995graphical}, \cite{scheike2004estimation} or \cite{kraus2007data} and the references cited therein. We apply our theory to derive a non-parametric test for proportional hazards assumption of two samples in our very general framework, covering two-sample right-censored and left-truncated multi-state models. The framework is an unpaired two-sample model given by independent counting processes $N^{(1)}, N^{(2)}$ and predictable processes $Y^{(1)}, Y^{(2)}$, assuming the conditions of Section~\ref{sec:model} for each group, and with sample sizes $n_1$ and $n_2$, respectively. Let again $J^{(j)}(t) = \mathbf 1\{ Y^{(j)}(t) > 0 \}$, $j=1,2$. Denote by $\hat A^{(j)}_{n_j} = \int_{(0,t]} \frac{J^{(j)}(s)}{Y^{(j)}(s)} d N^{(j)}$ the Nelson-Aalen estimator of the cumulative hazard functions $A^{(j)}$ and by $\alpha^{(j)}$ the corresponding rates, $j=1,2$. To motivate a suitable test statistic we make use of the following equivalence between hazards proportionality and equality of both cumulative hazards: $$ \alpha^{(1)}(t) = c \ \alpha^{(2)}(t) \ \text{in} \ t \in [0,\tau] \ \text{for} \ c > 0 \ \ \Longleftrightarrow \ \ A^{(1)}(t) = c \ A^{(2)}(t) \ \text{in} \ t \in [0,\tau] \ \text{for} \ c > 0 , $$ which, as the null hypothesis of interest, is denoted by $H_{0,\text{prop}}$. In a natural way similar to \cite{gill1987simple} in the simple survival setup this leads to statistics of the form \begin{align*} T_{n_1,n_2} = \rho \Big( \ \sqrt{\frac{n_1 n_2}{n}} \frac{\hat A^{(2)}_{n_2}}{\hat A^{(1)}_{n_1}} \ , \ \sqrt{\frac{n_1 n_2}{n}} \frac{\hat A^{(2)}_{n_2}(\tau)}{\hat A^{(1)}_{n_1}(\tau)} \ \Big), \end{align*} $n=n_1+n_2$, where $\rho$ is an adequate distance on $\mathfrak D[0,\tau]$, e.g. $\rho(f,g) = \sup w |f - g|$ (leading to Kolmogorov-Smirnov-type tests), \ $\rho(f,g) = \int (f-g)^2 w^2 d \lambda \!\! \lambda$ (leading to Cram\'er-von-Mises-type tests), where $w: [0,\tau] \rightarrow [0,\infty)$ is a suitable weight function. Later on, we choose $w = \hat A^{(1)}_{n_1}$ which ensures the evaluation of $\rho$ on $\{ \hat A^{(1)}_{n_1} > 0 \}$. \iffalse Instead of letting the weight function be equal to zero in a neighborhood of $t=0$, in order to not divide by zero, one might as well consider the statistic $$ \tilde T_{n_1,n_2} = \sqrt{\frac{n_1 n_2}{n}} \rho \Big( \ \hat A^{(2)}_{n_2} \cdot \hat A^{(1)}_{n_1}(\tau) \ , \ \hat A^{(1)}_{n_1} \cdot \hat A^{(2)}_{n_2}(\tau) \ \Big). $$ \fi Let $\hat W_{n_1}^{(1)}$ and $\hat W_{n_2}^{(2)}$ be the obvious wild bootstrap versions of the sample-specific centered Nelson-Aalen estimators; cf. \eqref{NA.uni.Wnh}. \begin{theorem} \label{thm:prop} Let $\rho$ be either the above Kolmogorov-Smirnov- or the Cram\'er-von Mises-type statistic with $w = \hat A^{(1)}_{n_1}$. If $n_1 / n \rightarrow p \in (0,1)$ as $\min(n_1,n_2) \rightarrow \infty$, then the test for $H_{0,\textnormal{prop}}$ $$ \varphi_{n_1,n_2}^{\textnormal{prop}} = \mathbf 1 \{ T_{n_1,n_2} > \tilde q_{1 - \alpha} \} $$ has asymptotic level $\alpha$ under $H_{0,prop}$ and asymptotic power 1 on the whole complement of $H_{0,prop}$. 
Here $\tilde q_{1 - \alpha}$ is the $(1 - \alpha)$-quantile of $$\mathfrak L \Big( \rho\Big( \sqrt{\frac{n_1}{n}} \frac{\hat W_{n_2}^{(2)}}{\hat A_{n_1}^{(1)}} - \sqrt{\frac{n_2}{n}} \hat W_{n_1}^{(1)} \frac{\hat A_{n_2}^{(2)}}{[\hat A_{n_1}^{(1)}]^2} \ , \ \sqrt{\frac{n_1}{n}} \frac{ \hat W_{n_2}^{(2)}(\tau)}{\hat A_{n_1}^{(1)}(\tau)} - \sqrt{\frac{n_2}{n}} \hat W_{n_1}^{(1)}(\tau) \frac{\hat A_{n_2}^{(2)}(\tau)}{[\hat A_{n_1}^{(1)}(\tau)]^2} \Big) \Big| \ \mathcal{C}_0 \Big).$$ \end{theorem} \iffalse \begin{remark} (c) To discuss: Similar to Remark 4.1. also applicable for within-sample comparisons? Cox + time-dependent covariates? It is well known that stratified Cox models with time-invariant covariates and coefficients and the same baseline rate induce unconditional (i.e. nonparametric) models with proportional hazard rates. Hence, a rejection of $H_0^\textnormal{prop}$ also implies that the assumption of such a time-invariant Cox model is false. As a consequence, the practitioner should then model the covariate effect in the Cox model as time-varying or he/she might even utilize a completely different semiparametric regression model. \end{remark} \fi } \section{Simulation Study} \label{sec:simus} The motivating example behind the present simulation study is the SIR-3 data of Section \ref{sec:dataex}. The setting is a specification of Example \ref{ex:cov_models}(a) called {\it illness-death model with recovery}. As illustrated in the multistate pattern of Figure \ref{fig:illnessdeath}, the model has state space $\mathcal S=\lbrace 0,1,2\rbrace$ and includes the transition hazards $\alpha_{01},\ \alpha_{10},\ \alpha_{02},$ and $\alpha_{12}$. \begin{figure} \caption{Illness-death model with recovery and transition hazards $\alpha_{01},\ \alpha_{10},\ \alpha_{02},$ and $\alpha_{12}$ at time $t$.} \label{fig:illnessdeath} \end{figure} The simulation of the underlying quantities is based on the methodology suggested by \cite{allignol11} generalized to the time-inhomogeneous Markovian multistate framework, which can be seen as a nested series of competing risks experiments. More precisely, the individual initial states are derived from the proportions of individuals at $t=0$ and the censoring times are obtained from a multinomial experiment using probability masses equal to the increments of the censoring Kaplan-Meier estimate originated from the SIR-3 data. Similarly, event times are generated according to a multinomial distribution with probabilities given by the increments of the original Nelson-Aalen estimators. These times are subsequently included into the multistate simulation algorithm described in \cite{beyersmann12a}, Section 8.2. Since censoring times are sampled independently and each simulation step is only based on the current time and the current state, the resulting data follows a Markovian structure. A more formal justification of the multistate simulation algorithm can be found in \cite{gill1990survey} and Theorem II.6.7 in \cite{abgk93}. 
\begin{table} \caption{Mean number of events per transition on [5,30] provided by the simulation study of Section \ref{sec:simus}} \label{tab:nevents} \centering{\fbox{ \begin{tabular}{lllll} \multirow{2}{*}{Sample size} & \multicolumn{4}{c}{Transition}\\ \cline{2-5} & $1\rightarrow 0$ & $0\rightarrow 1$ & $0\rightarrow 2$ & \multicolumn{1}{c}{$1\rightarrow 2$} \\ \hline 93 & 20.1 &4.3& 43.9 & 10.1 \\ 186 & 42.7 & 8.3 & 96.5 & 21.7\\ 373 & 85.6 & 17.0 & 193.4 & 43.7\\ 747 & 170.9 & 33.9 & 387.4 & 87.4\\ 747*& 171 & 34 & 387 & 87\\ \hline *original data \end{tabular}}} \end{table} We consider three different sample sizes: The original number of 747 patients is stepwisely reduced to {\color{black}373, 186, and 93 patients}. For each scenario we simulate 1000 studies. As an overview, the mean number of events for each possible transition and scenario is illustrated in Table 1. The mean number of events regarding 747 patients reflects the original number of events. All numbers are restricted to the time interval [5,30], which is chosen due to a small amount of events before $t=5$ (left panel of Figure \ref{fig:CBdataex}). Further, less than 10\% of all individuals are still under observation after day 30. In particular, asymptotic approximations tend to be poor at the left- and right-hand tails; cf. Remark \ref{rem:WB}(b) and \cite{lin97}. Utilizing the \texttt{R}-package \texttt{sde} \citep{sde2014}, the quantiles $c_{1-\alpha}^g$ in (\ref{eq:CBs}) of each single study are empirically estimated by simulating 1000 sample paths of a standard Brownian bridge. These quantiles are separately derived for both the Aalen- and Greenwood-type variance estimates (\ref{eq:varaalen}) and (\ref{eq:vargreenwood}). The bootstrap critical values are based on 1000 bootstrap realizations of $\hat{\textbf{\textit{W}}}_n$ for each simulation step including both standard normal and centered Poisson variates with variance one. The latter is motivated by a slightly better performance compared to standard normal multipliers (\citealp{beyersmann12b}, and \citealp{dobler15a}). {\color{black} Furthermore, \cite{liu88} argued in a classical (linear regression) problem that wild bootstrap weights with skewness equal to one satisfy the second order correctness of the resampling approach. According to the cited simulation results, a similar result might hold true in our context, as the Poisson variates have skewness equal to one and standard normal variates are symmetric. A careful analysis of the convergence rates, however, is certainly beyond the scope of this article. In order to guarantee statistical reliability, we do not derive confidence bands for sample sizes and transitions with a mean number of observed transitions distinctly smaller than 20. The nominal level is set to $\alpha=0.05$. All simulations are performed with the \texttt{R}-computing environment version 3.3.2 \citep{Rcite}. } Following Table \ref{tab:CovProb}, {\color{black} almost all} bands constructed via Brownian bridges consistently tend to be rather conservative in our setting, i.e., result in too broad bands. Here, the usage of the Greenwood-type variance estimate yields more accurate coverage probabilities compared to the Aalen-type estimate. 
In contrast, the wild bootstrap approach {\color{black} mostly outperforms the Brownian bridge procedures: The log-transformed wild bootstrap bands approximately keep the nominal level even in the smaller sample sizes, except for the $0 \rightarrow 1$ transition with smallest sample size (corresponding to only 17 events in the mean; cf. Table~\ref{tab:nevents}). We also observe that the log-transformation in general improves coverage for the wild bootstrap procedure. The current simulation study showed no clear preference for the choice of weight. Note that all wild bootstrap bands for transition $0\rightarrow 2$ show a similar, but mostly reduced conservativeness compared to the bands provided by Brownian bridges.} We have to emphasize that coverage probabilities for the cumulative hazard functions are drastically decreased to approximately 75\% in all sample sizes if log-transformed pointwise confidence intervals would wrongly be interpreted time-simultaneously (results not shown). \begin{sidewaystable} \caption{Empirical coverage probabilities (\%) from the simulation study of Section \ref{sec:simus} separately for each transition and different simulated number of individuals} \label{tab:CovProb} \centering{\begin{tabular}{|lc|cccccccccc|} \hline \multicolumn{2}{|l|}{\multirow{4}{*}{}} & \multicolumn{10}{c|}{Type of Confidence Band} \\ \cline{3-12} \multicolumn{2}{|l|}{} & \multicolumn{4}{c}{Brownian Bridge} & \multicolumn{6}{c|}{Wild Bootstrap} \\ \cline{3-12} \multicolumn{2}{|l|}{} & \multicolumn{2}{c}{95\% log EP} & \multicolumn{2}{c}{95\% log HW} & \multicolumn{2}{c}{95\% log EP} & \multicolumn{2}{c}{95\% log HW} & \multicolumn{2}{c|}{95\% direct} \\ \cline{3-12} \multicolumn{2}{|l|}{} & Aalen & Green- & Aalen & Green- & Poisson & Standard & \multirow{2}{*}{Poisson} & Standard & \multirow{2}{*}{Poisson} & Standard \\ Transition & $N$ & & wood & & wood & & normal & & normal & & normal \\ \hline \multirow{2}{*}{$0\rightarrow 1$} & 373 & 96.4 & 95.9 & 95.5 & 95.3 & 92.5 & 92.5 & 92.5 & 92.5 & 91.4 & 91.9 \\ & 747 & 97.7 & 97.3 & 97.2 & 97.0 & 95.0 & 94.9 & 95.2 & 95.0 & 92.9 & 93.2 \\ \hline \multirow{4}{*}{$0\rightarrow 2$} & 93 & 98.0 & 97.1 & 98.4 & 97.3 & 97.6 & 97.8 & 97.3 & 96.0 & 96.6 & 96.6 \\ & 186 & 98.3 & 95.5 & 98.9 & 97.4 & 97.2 & 98.2 & 98.0 & 96.1 & 96.2 & 96.2 \\ & 373 & 98.1 & 95.0 & 98.2 & 96.9 & 97.2 & 97.3 & 97.1 & 97.1 & 96.0 & 96.3 \\ & 747 & 98.6 & 96.2 & 98.8 & 97.7 & 97.7 & 97.8 & 97.4 & 97.8 & 96.0 & 96.2 \\ \hline \multirow{4}{*}{$1\rightarrow 0$} & 93 & 97.0 & 94.9 & 97.0 & 95.2 & 95.1 & 95.1 & 94.8 & 94.8 & 93.7 & 93.7 \\ & 186 & 97.3 & 95.8 & 97.7 & 96.1 & 95.6 & 95.7 & 95.7 & 95.4 & 94.5 & 94.3 \\ & 373 & 97.2 & 96.3 & 97.9 & 97.0 & 95.2 & 95.3 & 95.9 & 96.3 & 95.2 & 95.3 \\ & 747 & 97.8 & 96.8 & 97.5 & 96.9 & 96.6 & 96.6 & 95.9 & 96.0 & 96.1 & 96.3 \\ \hline \multirow{3}{*}{$1\rightarrow 2$} & 186 & 97.5 & 96.7 & 97.2 & 96.2 & 94.7 & 94.5 & 94.3 & 94.7 & 93.2 & 93.3 \\ & 373 & 98.2 & 97.7 & 98.2 & 97.8 & 95.8 & 95.8 & 95.1 & 95.2 & 94.6 & 94.3 \\ & 747 & 97.2 & 96.6 & 96.6 & 96.0 & 94.4 & 95.5 & 94.3 & 94.7 & 94.9 & 95.1 \\ \hline \end{tabular}} \flushleft{EP: equal-precision band; HW: Hall-Wellner band.} \end{sidewaystable} {\color{black} The second set of simulations follows the test for proportional hazards derived in Theorem \ref{thm:prop} with regard to keeping the preassigned error level under the null hypothesis. For that purpose, we assume a competing risks model with two competing events separately for two unpaired patient groups. 
For an illustration, see, for instance, Figure 3.1 in \cite{beyersmann12a}. We consider four different constant hazard scenarios: (I) the hazards for the type-1 event are set to $\alpha^{(1)}_{01}(t)=\alpha^{(2)}_{01}(t)=2$ (no effect on the type-1 hazard, in particular, a hazard ratio of $c=1$); (II) $\alpha^{(1)}_{01}(t)=1$ and $\alpha^{(2)}_{01}(t)=2$ (large effect); (III) $\alpha^{(1)}_{01}(t)=\alpha^{(2)}_{01}(t)=1$; (IV) $\alpha^{(1)}_{01}(t)=1$ and $\alpha^{(2)}_{01}(t)=1.5$ (moderate effect). In each scenario, we set $\alpha^{(1)}_{02}(t)=\alpha^{(2)}_{02}(t)=2$; in particular, we consistently assume no group effect on the competing hazard. Further, scenario-specific administrative censoring times are chosen such that approximately 25\% of the individuals are censored. The simulation designs are selected such that we include different effect sizes as well as different type-1 hazard ratio configurations with respect to the competing hazards. We consider a balanced design with $n_1=n_2=n\in\lbrace 125,250,500,1000\rbrace$. The right-hand tail of the domain of interest is set to $\tau=0.3$. Simulation of the event times and types follows the procedure explained in Chapter 3.2 of \cite{beyersmann12a}. As before, we simulate 1000 studies for each scenario and sample size configuration, while the critical values of the Kolmogorov-Smirnov-type and Cram\'er-von-Mises-type statistics from Section~\ref{sec:proptest} are derived from 1000 bootstrap samples including both standard normal and centered Poisson variates with variance one. The results for the type I error rates (for $\alpha=0.05$) are displayed in Table \ref{tab:proptest}. As expected from consistency, the larger the number of patients, the more accurately the type I error level is attained by both test statistics in each scenario. Except for Scenario (II), all procedures keep the type I error rate quite accurately for $n\geq 500$. For smaller sample sizes, all tests tend to be conservative, with a particular advantage for the Kolmogorov-Smirnov statistic. \begin{sidewaystable}[] \centering \caption{Simulated size of $\varphi_{n_1,n_2}^{\text{prop}}$ for nominal size $\alpha=5\%$ under different sample sizes and constant hazard configurations.
In each scenario $\tau=0.3$ and $25\%$ of individuals are censored.} \begin{tabular}{|l|cccc|cccc|cccc|cccc|} \hline & \multicolumn{4}{c|}{Scenario I} & \multicolumn{4}{c|}{Scenario II} & \multicolumn{4}{c|}{Scenario III} & \multicolumn{4}{c|}{Scenario IV} \\ \cline{2-17} & \multicolumn{2}{c|}{KMS} & \multicolumn{2}{c|}{CvM} & \multicolumn{2}{c|}{KMS} & \multicolumn{2}{c|}{CvM} & \multicolumn{2}{c|}{KMS} & \multicolumn{2}{c|}{CvM} & \multicolumn{2}{c|}{KMS} & \multicolumn{2}{c|}{CvM} \\ \hline $n_i$ & SN & \multicolumn{1}{l|}{Poi} & SN & Poi & SN & \multicolumn{1}{l|}{Poi} & SN & Poi & SN & \multicolumn{1}{l|}{Poi} & SN & Poi & SN & \multicolumn{1}{l|}{Poi} & SN & Poi \\ \hline 125 & 0.029 & 0.024 & 0.033 & 0.030 & 0.029 & 0.026 & 0.027 & 0.023 & 0.045 & 0.041 & 0.028 & 0.030 & 0.046 & 0.042 & 0.030 & 0.025 \\ 250 & 0.035 & 0.039 & 0.039 & 0.040 & 0.040 & 0.038 & 0.037 & 0.034 & 0.039 & 0.040 & 0.037 & 0.034 & 0.034 & 0.034 & 0.033 & 0.030 \\ 500 & 0.057 & 0.054 & 0.059 & 0.060 & 0.034 & 0.038 & 0.040 & 0.041 & 0.056 & 0.053 & 0.047 & 0.045 & 0.044 & 0.044 & 0.043 & 0.044 \\ 1000 & 0.050 & 0.050 & 0.047 & 0.047 & 0.048 & 0.049 & 0.043 & 0.046 & 0.047 & 0.049 & 0.045 & 0.048 & 0.058 & 0.059 & 0.053 & 0.056 \\ \hline \end{tabular} \flushleft{KMS: Kolmogorov-Smirnov-type statistic; CvM: Cram\'er-von-Mises-type statistic; SN: standard normal multiplier; Poi: centered Poisson multiplier.} \label{tab:proptest} \end{sidewaystable} } \section{Data Example}\label{sec:dataex} The SIR-3 (\emph{S}pread of Nosocomial \emph{I}nfections and \emph{R}esistant Pathogens) cohort study at the Charit{\'e} University Hospital in Berlin, Germany, prospectively collected data on the occurrence and consequences of hospital-acquired infections in intensive care \citep{beyersmann2006,wolkewitz2008}. A device of particular interest in critically ill patients is mechanical ventilation. The present data analysis investigates the impact of ventilation on the length of intensive care unit stay, which is, e.g., of interest in cost containment analyses in hospital epidemiology \citep{beyersmann2011}. The analysis considers a random subset of 747 patients of the SIR-3 data which one of us has made publicly available \citep{beyersmann12a}. Patients may either be ventilated (state~1 as in Figure~\ref{fig:illnessdeath}) or not ventilated (state~0) upon admission. Switches in device usage are modelled as transitions between the intermediate states~0 and~1. Patients move into state~2 upon discharge from the unit. The numbers of observed transitions are reported in the last row of Table 1. We start by separately considering the two cumulative end-of-stay hazards~$A_{12}$ and $A_{02}$, followed by a more formal group comparison as in Remark \ref{rem:WB}(c). Based on the approach suggested by \citet{beyersmann12a}, Section 11.3, we find it reasonable to assume the Markov property. \begin{figure} \caption{95\% confidence bands based on standard normal multipliers for the cumulative hazard of end-of-stay from the data example in Section \ref{sec:dataex}. The solid black lines are the Nelson-Aalen estimators separately for `no ventilation' (state 0, right plot) and `ventilation' (state 1, left plot).} \label{fig:CBdataex} \end{figure} Figure \ref{fig:CBdataex} displays the Nelson-Aalen estimates of~$A_{12}$ and $A_{02}$ accompanied by simultaneous 95\% confidence bands utilizing the 1000 wild bootstrap versions with standard normal variates and restricted to the time interval [5,30] of intensive care unit days.
As before, the left-hand tail of the interval is chosen because Nelson-Aalen estimation regarding $A_{12}$ picks up at $t=5$, cf. the left panel of Figure \ref{fig:CBdataex}. \textcolor{black}{Graphical validation of empirical means and variances of $\hat{\boldsymbol W}_{n}$ showed good agreement with the theoretical limit quantities stated in Remark \ref{rm:wc}.} Bands using Poisson variates are similar (both results not shown). \begin{figure} \caption{95\% equal precision confidence bands based on standard normal multipliers and 95\% log-transformed pointwise confidence intervals for the cumulative hazard of end-of-stay from the data example in Section \ref{sec:dataex}. The solid black lines are the Nelson-Aalen estimators separately for `no ventilation' (state 0, right plot) and `ventilation' (state 1, left plot).} \label{fig:CBCI} \end{figure} Figure \ref{fig:CBCI} also displays the 95\% pointwise confidence intervals based on a log-transformation. The performance of both equal precision and Hall-Wellner bands is comparable for transitions out of the ventilation state. However, the latter tend to be larger for the $0\rightarrow 2$ transitions for later days due to more unstable weights at the right-hand tail. Equal precision bands are graphically competitive when compared to the pointwise confidence intervals. Ventilation significantly reduces the hazard of end-of-stay, since the upper half-space is not contained in the 95\% confidence band of the cumulative hazard difference, see Figure \ref{fig:DiffCB}. \begin{figure} \caption{95\% confidence bands from relation \eqref{eq:diffCB} based on standard normal multipliers and 95\% linear pointwise confidence intervals for the difference of the two cumulative hazards of end-of-stay from the data example in Section \ref{sec:dataex}. The solid black line is the difference `ventilation vs. no ventilation' of the Nelson-Aalen estimators within the two ventilation groups.} \label{fig:DiffCB} \end{figure} \section{Discussion and Further Research} \label{sec:discussion} We have given a rigorous presentation of a weak convergence result for the wild bootstrap methodology for the multivariate Nelson-Aalen estimator in a general setting only assuming Aalen's multiplicative intensity structure of the underlying counting processes. This allowed the construction of time-simultaneous confidence bands and intervals as well as asymptotically valid equivalence and equality tests for cumulative hazard functions. In the context of time-to-event analysis, our general framework is not restricted to the standard survival or competing risks setting, but also covers arbitrary Markovian multistate models with finite state space, other classes of intensity models like relative survival or excess mortality models, and even specific semi-Markov situations. Additionally, independent left-truncation and right-censoring can be incorporated. {\color{black} The procedure has also been used to construct a test for proportional hazards.} Easy and computationally convenient implementation and within- or two-sample comparisons demonstrate its attractiveness in various practical applications. Future work will focus on the approximation of the asymptotic distribution corresponding to the matrix of transition probabilities (see \citealp{AalenJoh78}) \textcolor{black}{and functionals thereof in general Markovian multistate models. This is of great practical interest, because no similar Brownian bridge procedure is available to perform time-simultaneous statistical inference.
In particular, previous implications rely on pointwise considerations.} Note that such an approach would significantly simplify the original justifications given by \cite{lin97} and generalizes his idea mainly used in the context of competing risks \citep{scheike03,hyun2009,beyersmann12b}. In addition, we plan to extend the utilized wild bootstrap technique to general semiparametric regression models; see \cite{lin00} for an application in the survival context. {\color{black} Current work investigates to which degree the martingale properties presented in this article may be exploited to obtain wild bootstrap consistencies for such functionals of Nelson-Aalen estimates or for estimators in semiparametric regression models. We are confident that the present approach will lead to reliable inference procedures in these contexts, for which there has been only little research on such general methodology. } In contrast to the procedure of \citet{schoenfeld2002} and other recent publications mentioned in the introduction, the more general illness-death model with recovery does not rely on a constant hazards assumption and captures both the time-dependent structure of mechanical ventilation and the competing event `death in ICU'. This significantly improves medical interpretations. The widths of the confidence bands were competitive compared to the pointwise confidence intervals, i.e., they demonstrated usefulness in practical situations. Applications of our theory are not restricted to studies investigating mechanical ventilation, but may also be helpful to investigate, for instance, the impact of immunosuppressive therapy in patients diagnosed with leukemia \citep[cf.][]{schmoor2013}. {\color{black} The proposed procedure has even been applied in a recent study investigating femoral fracture risk in an elderly population \citep{bluhmki2016}.} It has to be emphasized that our simulation study suggested that the {\color{black} wild bootstrap approach leads to more powerful procedures (i.e. to narrower confidence bands) compared to the approximation via Brownian bridges. } As expected, the applied log-transformation results in improved small sample properties compared to the untransformed {\color{black}wild bootstrap} bands. Based on the current simulation study, however, it was difficult to clearly recommend which type of band and which type of multiplier should be used. \appendix \section{Proofs}\label{app} {\color{black} \begin{proof}[Proof of Lemma~\ref{lem:mart}] Due to similarity, it is enough to concentrate on the first component only; thus, we subsequently suppress the subscript `1'. The independence of all white noise processes immediately implies the orthogonality of all component processes. We first verify the martingale property; the square-integra\-bility is obviously fulfilled since $E(G_1^2(0)) < \infty$. To this end, let $0 \leq s \leq t$. By measurability of the counting and the predictable process with respect to $\mathcal{C}_0$, we have \begin{align*} E(\hat W_n(t) \ | \ \mathcal{C}_s ) = & \sqrt{n} \int_{(0,t]} \frac{J(u)}{Y(u)} E( G(u) \ | \ \mathcal{C}_s) \ d N(u) \\ = & \sqrt{n} \int_{(0,s]} \frac{J(u)}{Y(u)} G(u) \ d N(u) + \sqrt{n} \int_{(s,t]} \frac{J(u)}{Y(u)} E(G(u)) \ d N(u) = \hat W_n(s) \end{align*} by the independence of $\sigma(G(u))$ and $\mathcal C_s$ for all $u > s$. Hence, the martingale property is shown. The predictable variation process $\langle \hat W_{n} \rangle$ is the compensator of $\hat W_n^2$, i.e.
we calculate \begin{align*} & E(\hat W_n^2(t) \ | \ \mathcal{C}_s) \\ & = n \Big( \int_{(0,s]} \int_{(0,s]} + \int_{(s,t]} \int_{(0,s]} + \int_{(0,s]} \int_{(s,t]} + \int_{(s,t]} \int_{(s,t]} \Big) E(G(u) G(v) \ | \ \mathcal{C}_s ) \\ & \quad \times \frac{J(u) J(v)}{Y(u) Y(v)}\ d N(u) \ d N(v) \\ & = n \Big( \int_{(0,s]} \int_{(0,s]} G(u) G(v) + \int_{(s,t]} \int_{(0,s]} E(G(u)) G(v) \\ & \quad + \int_{(0,s]} \int_{(s,t]} G(u) E(G(v)) + \int_{(s,t]} \int_{(s,t]} E(G(u) G(v)) \Big) \frac{J(u) J(v)}{Y(u) Y(v)} \ d N(u) \ d N(v) \\ & = n \Big( \int_{(0,s]} \int_{(0,s]} G(u) G(v) \frac{J(u) J(v)}{Y(u) Y(v)} \ dN(u) d N(v) + \int_{(s,t]} E(G^2(u)) \frac{J(u)}{Y^2(u)} \ d N(u) \Big) \\ & = \hat W_n^2(s) + n \int_{(0,t]} \frac{J(u)}{Y^2(u)} \ d N(u) - n \int_{(0,s]} \frac{J(u)}{Y^2(u)} \ d N(u), \end{align*} again by the $\mathcal C_s$-measurability of $G(u)$ for $u \leq s$ and their independence for $u > s$. The second to last equality is due to the independence of $G(u)$ and $G(v)$ for $u \neq v$. Hence, $(\hat W_n^2(t) - n \int_{(0,t]} \frac{J(u)}{Y^2(u)} \ d N(u))_{t \in [0,\tau]}$ is a martingale. Letting $\Delta f$ denote the jump-size process of a c\`adl\`ag function $f$, the definition of the optional variation process yields \begin{align*} [\hat W_n ] (t) = \sum_{0 < u \leq t} (\Delta \hat W_n(u) )^2 = n \sum_{0 < u \leq t} G^2(u) \frac{J(u)}{Y^2(u)} \Delta N(u) = n \int_{(0,t]} G^2(u) \frac{J(u)}{Y^2(u)} \ d N(u), \end{align*} where the sum is taken over all jump points of $N$. \end{proof} \begin{proof}[Proof of Theorem~\ref{Th.Na.uni-What}] It is enough to verify the conditions of Rebolledo's martingale central limit theorem (in conditional probability); see e.g. Theorem~II.5.1 in \cite{abgk93}. Since the filtration $\mathcal C_0$ at time $s=0$ is not trivial, the resulting weak convergence will hold given $\mathcal C_0$ as well, in probability. From the classical theory we know that the Aalen-type variance estimator, which is in fact the predictable variation process of $\hat W_n$, is uniformly consistent for the variance function. It remains to prove the Lindeberg condition (2.5.3) on page 83 in \cite{abgk93}. But, by the same arguments as in the proof of Lemma~\ref{lem:mart}, this is exactly the same as the Lindeberg condition for the Nelson-Aalen estimator itself, which holds due to the main assumption~\eqref{eq:NA}. Hence, Rebolledo's martingale central limit theorem yields the desired weak convergence as well as the uniform consistency of the optional variation process. \end{proof} } \begin{proof}[Proof of Theorem~\ref{thm:delta_meth_conv}] For convergence~\eqref{eq:weak_conv_logA.1}, see Section~IV.1 in \cite{abgk93} in combination with Slutsky's theorem. Convergence~\eqref{eq:weak_conv_logA.2} follows from the consistency of $\sigma^{*2}$, Slutsky's theorem and Theorem~\ref{Th.Na.uni-What}, since $\hat W_n$ asymptotically mimics the distribution of $\sqrt n ( \hat A_n - A)$. The functional delta-method for $(x \mapsto \log x)$ completes the proof. \end{proof} \begin{proof}[Proof of Corollaries~\ref{cor:CBs} and~\ref{cor:ks_test}] Due to the continuous limit distribution, the conditional quantiles converge as well in probability; see e.g. \cite{janssen03}, Lemma~1. The consistency of $\varphi_n^{KS}$ under $K_{\neq}$ follows from the convergence in probability of the conditional quantile towards a finite value and from the uniform consistency of the multivariate Nelson-Aalen estimator for the cumulative hazard functions.
Since the factor $\sqrt{n}$ tends to infinity, the test statistic also goes to infinity in probability under $K_{\neq}$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:equivalence}] The proof extends the arguments of \cite{wellek10}, Section 3.1, from confidence intervals to confidence bands. Write $H = H_1 \cup H_2$ where \begin{align*} & H_1 : \{ A(s) \leq A_0(s) - \ell(s) \text{ for some } s \in [t_1,t_2] \} \\ \text{and} \quad & H_2 : \{ A(s) \geq A_0(s) + u(s) \text{ for some } s \in [t_1,t_2] \}. \end{align*} Suppose $H$ is true and let without loss of generality be $H_1$ true due to analogy. Then the probability of a false rejection of $H$ amounts to \begin{align*} & P( A_0(s) - \ell(s) < a_n(s) \text{ and } b_n(s) < A_0(s) + u(s) \text{ for all } s \in [t_1,t_2] ) \\ & \leq P( A_0(s) - \ell(s) < a_n(s) \text{ for all } s \in [t_1,t_2] ) \\ & \leq P( A(s) < a_n(s) \text{ for some } s \in [t_1,t_2] ) \longrightarrow \alpha. \end{align*} Here the last inequality holds since $H_1$ is true and the convergence is due to the asymptotic coverage probability of the confidence band $(a_n(s), \infty)_{s \in [t_1,t_2]}$. In order to prove consistency, suppose the alternative hypothesis $K$ is true and choose any $\varepsilon$ such that $$ 0 < \varepsilon < \inf_{s \in [t_1,t_2]} -(A_0(s) - \ell(s) - A(s)) \wedge (A_0(s) + u(s) - A(s)). $$ Thus, by the (uniform) consistency of the Nelson-Aalen estimator and the wild bootstrap quantiles, the probability of a correct rejection of $H$ equals \begin{align*} & P( A_0(s) - \ell(s) < a_n(s) \text{ and } b_n(s) < A_0(s) + u(s) \text{ for all } s \in [t_1,t_2] ) \\ & \geq P(A(s) - \varepsilon < a_n(s) \text{ and } b_n(s) < A(s) + \varepsilon \text{ for all } s \in [t_1,t_2]) \longrightarrow 1 \end{align*} as $n \rightarrow \infty$. For the convergence in the previous display, also note that $a_n \xrightarrow{\text{ }P\text{ }} A$ as well as $b_n \xrightarrow{\text{ }P\text{ }} A$ uniformly in $[t_1,t_2]$. \end{proof} {\color{black} \begin{proof}[Proof of Theorem~\ref{thm:prop}] Let $t_0 > 0 $. Denote by $\mathfrak{D}_{>0}[t_0,\tau] \subset \mathfrak{D}[t_0,\tau]$ the cone of positive c\`adl\`ag functions that are bounded away from zero. It is easy to see that the functional $\phi: \mathfrak{D}^2_{>0}[t_0,\tau] \rightarrow \mathfrak{D}_{>0}[t_0,\tau], \ (f,g) \mapsto \frac{f}{g}$ is Hadamard-differentiable tangentially to the set of pairs of continuous functions $\mathfrak C^2[t_0,\tau]$ with continuous and linear Hadamard-derivative $$ \phi'_{(f,g)}: \mathfrak C^2[t_0,\tau] \rightarrow \mathfrak C[t_0,\tau], \quad (h_1,h_2) \longmapsto \frac{h_1}{g} - h_2 \frac{f}{g^2}. $$ A simpler Hadamard-differentiability result holds for $\phi$'s restriction to $\tau$, i.e. $\phi|_{\tau}: (0,\infty)^2 \ni (f(\tau), g(\tau)) \mapsto \frac{f(\tau)}{g(\tau)}$ with continuous, linear Hadamard-derivative $${(\phi|_\tau)}'_{(f,g)}: \R^2 \rightarrow \R, \quad (h_1(\tau),h_2(\tau)) \longmapsto \frac{h_1(\tau)}{g(\tau)} - h_2(\tau) \frac{f(\tau)}{g^2(\tau)}.$$ Hence, we apply the functional $\delta$-method and the continuous mapping theorem to $$\sqrt{\frac{n_1 n_2}{n}} (\phi(\hat A_{n_2}^{(2)}, \hat A_{n_1}^{(1)}) - \phi(A^{(2)}, A^{(1)})) \quad \text{and} \quad \phi'_{(\hat A_{n_2}^{(2)}, \hat A_{n_1}^{(1)})} \Big( \sqrt{\frac{n_1}{n}} \hat W_{n_2}^{(2)}, \sqrt{\frac{n_2}{n}} \hat W_{n_1}^{(1)} \Big), $$ respectively, verifying their equality in distribution in the limit (conditionally in probability for the latter). 
Proceed similarly with the restricted functional $\phi|_{\tau}$. Furthermore, the difference functional of both above functionals retains the Hadamard-differentiability tangentially to the set of pairs of continuous functions. Our specific choices of the distance $\rho$ are continuous functionals, hence we are able to apply the continuous mapping theorem again. To conclude the proof of the asymptotic behaviour of $\varphi_{n_1,n_2}^{\textnormal{prop}}$ under $H_0^{\textnormal{prop}}$, note that the particular weight function solves the problem of dividing by zero at $t_0 = 0$. For the asymptotic power assertion, let $t_1 \in [0,\tau]$ at which $H_0^{\textnormal{prop}}$ is violated. Then $$ \rho \Big( \ \frac{\hat A^{(2)}_{n_2}}{\hat A^{(1)}_{n_1}} \ , \ \frac{\hat A^{(2)}_{n_2}(\tau)}{\hat A^{(1)}_{n_1}(\tau)} \ \Big)$$ converges in probability to a positive value, whence $T_{n_1,n_2} \stackrel{p}{\rightarrow} \infty$ follows. The conditional quantiles, however, still converge to a finite constant in probability by the above arguments. \end{proof} } \section{Supplementary Material: Alternative Proof of Theorem~\ref{Th.Na.uni-What}} Before proving the conditional convergence in distribution stated in Theorem~\ref{Th.Na.uni-What}, we extend the conditional central limit theorem (CCLT) A.1 given \cite{beyersmann12b} to our context. For that purpose, consider $N(\tau)=\sum_{j=1}^k N_{j}(\tau)$ as the random number of totally observed jumps in $[0,\tau]$. Due to the general framework only assuming Aalen's multiplicative intensity model, random sums with a random number $N$ of summands occur and need to be analyzed, since each jump of the counting processes requires its own multiplier $G_{j}(u)$ in the resampling scheme. Thus, we state a more general CCLT as given in \cite{beyersmann12b}, where $\| \cdot \|$ denotes the Euclidean norm on $\R^p$, $p \in \N$. $\mathfrak L$ again denotes the law. Throughout, the resampled quantities are modelled via projection on a product probability space $(\Omega_1 \times \Omega_2, \mathcal{A}_1 \otimes \mathcal{A}_2, P_1 \otimes P_2)$, where the white noise processes only depend on the second and the data only on the first coordinate. \begin{theorem}\label{th:wcext} Let $\textbf{\textit{Z}}_{n;l} : (\Omega_1, \mathcal{A}_1, P_1) \rightarrow (\R^p, \mathcal{B}^p), l = 1,\dots, N,$ be a triangular array of $\R^p$ random variables, $ p\in\N$, where $N: (\Omega_1, \mathcal{A}_1, P_1) \rightarrow (\N_0, \mathcal{P}(\N_0))$ is an integer-valued random variable, non-decreasing in $n$, such that $N \stackrel{P}{\rightarrow} \infty$ as $n \rightarrow \infty$. Let $G_{n;l}: (\Omega_2, \mathcal{A}_2, P_2)\rightarrow (\R, \mathcal{B}), l \in \N,$ be rowwise i.i.d. random variables with $E(G_{n;1}) = 0$ and $var(G_{n;1})=1$. Modelled on the product space $(\Omega_1 \times \Omega_2, \mathcal{A}_1 \otimes \mathcal{A}_2, P_1 \otimes P_2)$, the arrays $(N, \textbf Z_{n;l}: l\le N)$ and $(G_{n;l})_{l \in \N}$ are independent. Suppose that $\textbf Z_{n;l}$ fulfills the convergences \begin{eqnarray} \max_{1 \leq l \leq N} \| \textbf Z_{n;l} \| \stackrel{P}{\longrightarrow} 0 \label{eq:thm_cclt_1} \\ \sum_{l=1}^N \textbf Z_{n;l} \textbf Z_{n;l}' \stackrel{P}{\longrightarrow} \boldsymbol{\Gamma}, \label{eq:thm_cclt_2} \end{eqnarray} where $\bs \Gamma$ is a positive definite covariance matrix. 
Then, conditionally given $(N, \textbf Z_{n;l}: l\leq N)$, the following weak convergence holds in probability: \begin{eqnarray} \mathcal{L} \Big( \sum_{l=1}^N G_{n;l} \textbf Z_{n;l} \ \Big| \ N, \textbf Z_{n;l}: l\leq N \Big) \stackrel{d}{\longrightarrow} N(\bs{0}, \bs \Gamma) \label{eq:thm_cclt_3}. \end{eqnarray} \end{theorem} \begin{proof} Since $N$ is non-decreasing in $n$ with $N \stackrel{P}{\rightarrow} \infty$, it follows that $N(\omega_1, \omega_2) \rightarrow \infty$ for $P_1$-almost all $\omega_1\in\Omega_1$, independently of the value $\omega_2 \in \Omega_2$. Thus, for $P_1$-almost all such fixed $\omega_1\in\Omega_1$, we have a deterministic number of summands $N(\omega_1,\cdot)$. By the subsequence principle, choose a subsequence $(n') \subseteq (n) = \mathbb N$ along which \eqref{eq:thm_cclt_1} and \eqref{eq:thm_cclt_2} hold for almost every $\omega_1 \in \Omega_1$ as well. Applying the CCLT A.1 in \cite{beyersmann12b} with its conditions being almost surely fulfilled, the weak convergence \eqref{eq:thm_cclt_3} follows $P_1$-almost surely along $n'$. A further application of the subsequence principle, going back to convergence in probability, completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{Th.Na.uni-What}] {\it Conditional Finite-Dimensional Convergence of $\hat{\textbf W}_n$}.\\ Due to asymptotic mutual independence, we only consider the first entry $\hat W_{1n}$ of $\hat{\textbf{\textit{W}}}_{n}$ and suppress the subscript `1' subsequently. Define countably many i.i.d. random variables $ \tilde G_{n;1}, \tilde G_{n;2},\ldots$ with $E(\tilde G_{n;1})=0$ and $var(\tilde G_{n;1})=1$, that are independent of $\mathcal C_n$, and define processes $ Z_{n;1},\ldots,Z_{n;N(\tau)}$ such that equation (\ref{NA.uni.Wnh}) is re-expressed as \begin{align} \hat{W}_n(t) =\sum_{v\in T}G(v) \sqrt{n} X_v(t) \stackrel{d}{=} \sum\limits_{l=1}^{N(\tau)} \tilde G_{n;l}Z_{n;l}(t), \label{eq:triarray} \end{align} where $\stackrel{d}{=}$ denotes equality in distribution. Here $X_v(t):= \boldsymbol{1} \{ v \leq t \} \Delta N(v) / Y(v) $ and $T=\lbrace u\in[0,\tau] \ \vert \ \Delta N(u)=1\rbrace$ contains all jump times of the counting process $N$. Then, the general framework of Theorem \ref{th:wcext} is fulfilled for the triangular array $Z_{n;l}(t_j),$ $l=1,\ldots,N(\tau),$ \ $j=1,\dots, r$, for any finite subset $\{t_1,\dots,t_r\} \subset [0,\tau]$. Next, conditions~\eqref{eq:thm_cclt_1} and~\eqref{eq:thm_cclt_2} are verified in a similar manner as in \cite{beyersmann12b}. Applying the subsequence principle for convergence in probability to assumption (\ref{Pre.ass11}), it follows that for every subsequence there exists a further subsequence, say $n$, such that as $n\rightarrow\infty$ \begin{align} \label{eq:YbynAS} \underset{u\in[0,\tau]}{\sup}\Big| \frac{Y(u)}{n}-y(u)\Big|\xrightarrow{\text{ }a.s.\text{ }}0, \end{align} i.e., the left-hand side converges to zero for $P_1$-almost all $\omega\in\Omega_1$. Fix an arbitrarily small $\epsilon>0$ and an $\omega$ for which \eqref{eq:YbynAS} holds. The following arguments implicitly consider all $n \geq n_0(\omega,\epsilon)$ for an $n_0$ determined by~\eqref{eq:YbynAS}. Hence, the left-hand side of~\eqref{eq:YbynAS} is less than $\epsilon$ for all such $n \geq n_0$. Choose a $\gamma_\epsilon = \gamma_{\epsilon}(\omega) >0$ such that \begin{align*} \underset{{u\in[0,\tau]}}\sup \frac{n}{Y(\omega,u)} \leq \frac{\gamma_{\epsilon}}{y(u)} \leq \frac{\gamma_{\epsilon}}{\underset{{v\in[0,\tau]}}\inf y(v)}=:c_{\epsilon}. 
\end{align*} Since $X_v$ is (at most) a one-jump process on $[0,\tau]$, we have \begin{align*} \underset{{l = 1, \dots, N(\tau)}}\sup \ \underset{{t\in[0,\tau]}}\sup | Z_{n;l}(t) | & \leq \sqrt{n} \ \underset{{v \in T}}\sup X_{v}(\omega,\tau) \leq n^{-1/2} \frac{n}{Y(\omega, \tau)} \leq n^{-1/2} {c_\epsilon}\xrightarrow{n\rightarrow \infty} 0 . \end{align*} In particular, $\textbf{\textit Z}_{n;l}=( Z_{n;l}(t_1),\ldots,Z_{n;l}(t_k))'$ satisfies $ \underset{1 \leq l \leq N(\tau)}\max \| \textbf{\textit Z}_{n;l}(t) \| \stackrel{P}{\longrightarrow} 0 $, and \eqref{eq:thm_cclt_1} holds. For simplicity, condition~\eqref{eq:thm_cclt_2} is only shown for two time points $0\le t_1\le t_2\le \tau$, such that $\textbf{\textit{Z}}_{n;l}=(Z_{n;l}(t_1),Z_{n;l}(t_2))'$. Representation (\ref{eq:triarray}) implies that \begin{align*} \sum\limits_{l=1}^{N(\tau)} \textbf{\textit Z}_{n;l} \textbf{\textit Z}_{n;l}'=n \sum_{v\in T} \begin{pmatrix} X_v^2(t_1) &X_v(t_1) X_v(t_2)\\ X_v(t_1) X_v(t_2) & X_v^2(t_2) \end{pmatrix}. \end{align*} The off-diagonals equal $X_v(t_1) X_v(t_2)= \boldsymbol{1} \{ v \leq t_1 \} \Delta N(v) / Y^2(v)$ and the other two components are obtained for $t_1 = t_2$. Using the Doob-Meyer decomposition~\eqref{eq:doobmeyer}, it follows that \begin{align*} n \sum_{v\in T}X_v(t_1) X_v(t_2) = n^{-1} \underset{(0,t_1]}{\int{}} \Big( \frac{n}{Y(u)} \Big)^2 d M (u) + \underset{(0,t_1]}{\int{}} \frac{n}{Y(u)} \alpha(u) d u . \end{align*} As in \cite{beyersmann12b}, Rebolledo's martingale central limit theorem (\citealp{abgk93}, Theorem~II.5.1) shows the negligibility of the martingale integral. The remaining integral converges to $\psi(t_1,t_2) = \int_{(0,t_1]} \alpha(u) / y(u) d u$ in probability due to assumption~\eqref{eq:YbynAS}. Consequently, we conclude that, as $n\rightarrow \infty$, \begin{align*} \sum\limits_{l=1}^{N(\tau)} \textbf{\textit Z}_{n;l}\textbf{\textit Z}_{n;l}'\xrightarrow{\text{ }P\text{ }} \begin{pmatrix} \psi(t_1,t_1) & \psi(t_1,t_2) \\ \psi(t_1,t_2) & \psi(t_2,t_2) \end{pmatrix}. \end{align*} Let $U$ be a zero-mean Gaussian process with covariance function $\psi$. Extending previous arguments to $r\in\mathbb N$ time points $t_1,\ldots,t_r$, Theorem \ref{th:wcext} implies, conditionally on $\mathcal{C}_n$, the finite-dimensional weak convergence \begin{align*} (\hat{W}_n(t_1), \dots, \hat{W}_n(t_r))' \stackrel{d}{\longrightarrow} (U(t_1), \dots, U(t_r))' \end{align*} in probability. Conditionally on $\mathcal{C}_n$, only the white noise processes $G_{1}, \dots, G_k$ in \eqref{NA.uni.Wnh} are random and, in particular, stochastically independent. This implies the multivariate conditional weak convergence \begin{align*} (\hat{\textbf{\textit W}}_n(t_1), \dots, \hat{\textbf{\textit W}}_n(t_r))' \stackrel{d}{\longrightarrow} (\textbf{\textit U}(t_1), \dots, \textbf{\textit U}(t_r))' \quad \text{in probability}, \end{align*} where $\textbf{\textit U} = (U_1, \dots, U_k)'$ has independent components and the asserted covariance structure. The {\it conditional tightness of $\hat{\textbf W}_n$} follows similarly as in the proof of Theorem~3.1 in \cite{dobler14}. As previously, tightness of $\hat{\textbf{\textit{W}}}_n$ is separately studied for each single component, i.e., we only consider $\hat W_{jn}$ and suppress the subscript `$j$' of the estimators and counting processes as above. Let $0\leq r \leq s \leq t \leq \tau$. Then, Theorem~15.6 in \cite{billingsley68} using $\gamma = 2$ and $\alpha = 1$ in combination with the remark on p. 
356 in \cite{jacod03} leads us to the following conditional expectation: \begin{align*} & E[(\hat{W}_n(t) - \hat{W}_n(s))^2(\hat{W}_n(s) - \hat{W}_n(r))^2 \ \left| \right. \ {\color{purple}\mathcal C_0} ] \\ & = n^2 E\Big[ \Big( \int\limits_{(s,t]} G(u) \frac{J(u)}{Y(u)} d N(u) \Big)^2 \Big( \int\limits_{(r,s]} G(v) \frac{J(v)}{Y(v)} d N(v) \Big)^2 \ \Big| \ {\color{purple}\mathcal C_0} \Big]\\ & = n^2 \int\limits_{(s,t]} \int\limits_{(s,t]} \int\limits_{(r,s]} \int\limits_{(r,s]} \frac{J(u_1)}{Y(u_1)} \frac{J(u_2)}{Y(u_2)} \frac{J(v_1)}{Y(v_1)} \frac{J(v_2)}{Y(v_2)} \\ & \quad \times E[ G(u_1) G(u_2) G(v_1) G(v_2) ] d N(v_2) d N(v_1) d N(u_2) d N(u_1). \end{align*} Since the multipliers $G(u), \ u \in T $ are independent and the intervals $(r,s]$ and $(s,t]$ are disjoint, the remaining expectation decomposes into a product of $E[G(u)]$ or $E[G^2(u)]$. Here, each expectation of a multiplier to the power of one vanishes due to $E[G(u)] = 0$ and a multiplier to the power of two only occurs whenever $u_1 = u_2 \in T$ or $v_1 = v_2 \in T$. Since $E[G^2(u)] = 1$, the above display simplifies to \begin{align*} n^2 \int\limits_{(s,t]} \frac{J(u)}{Y(u)^2} d N(u) \int\limits_{(r,s]} \frac{J(v)}{Y(v)^2} d N(v) = [ \hat\sigma^2(t) - \hat\sigma^2(s) ] [ \hat\sigma^2(s) - \hat\sigma^2(r) ] \leq [ \hat\sigma^2(t) - \hat\sigma^2(r)]^2 \end{align*} with $\hat \sigma^2$ defined as in \eqref{eq:varaalen}. By Theorem~IV.1.2 in \cite{abgk93} the convergence in probability of the right-hand side to $ (\sigma^2(t) - \sigma^2(r))^2 $ holds uniformly in $r,t \in [0,\tau]$. Following the lines of \cite{dobler14} by utilizing the proposition in \cite{jacod03}, p. 356, conditional tightness is shown along subsubsequences almost surely. Another application of the subsequence principle shows the stated result. \end{proof} \end{document}
\begin{definition}[Definition:Continuous Real Function at Point/Definition 2] Let $A \subseteq \R$ be any subset of the real numbers. Let $f: A \to \R$ be a real function. Let $x \in A$ be a point of $A$. '''$f$ is continuous at $x$''' {{iff}} the limit $\ds \lim_{y \mathop \to x} \map f y$ exists and: :$\ds \lim_{y \mathop \to x} \map f y = \map f {\lim_{y \mathop \to x} y}$ \end{definition}
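As an illustrative example (added here for orientation, and not part of the quoted definition): let $f: \R \to \R$ be defined by $\map f x = x^2$, and consider the point $x = 2$. Then:
:$\ds \lim_{y \mathop \to 2} \map f y = \lim_{y \mathop \to 2} y^2 = 4 = \map f 2 = \map f {\lim_{y \mathop \to 2} y}$
so the limit exists and satisfies the displayed condition, and $f$ is continuous at $2$ in the sense of this definition.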
The arithmetic mean of eight positive integers is 7. If one of the eight integers is removed, the mean becomes 6. What is the value of the integer that is removed? If the mean of the eight integers is 7, then the sum of those eight integers is $8 \cdot 7=56$. If the mean of the seven remaining numbers is 6, then the sum of those numbers is $7 \cdot 6=42$. Thus, the removed number is $56-42=\boxed{14}$
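As a quick concrete check (an added illustration, not part of the original solution): one admissible choice is seven copies of $6$ together with $14$. These eight positive integers sum to $7 \cdot 6 + 14 = 56$, giving a mean of $56/8 = 7$; removing $14$ leaves a sum of $42$ and a mean of $42/7 = 6$, consistent with the boxed answer.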
\begin{document} \title[]{Trigonometric analogue of the identities associated with twisted sums of divisor functions } \author{Debika Banerjee} \address{Debika Banerjee\\ Department of Mathematics\\ Indraprastha Institute of Information Technology IIIT, Delhi\\ Okhla, Phase III, New Delhi-110020, India.} \email{[email protected]} \author{Khyati Khurana} \address{Khyati Khurana\\ Department of Mathematics\\ Indraprastha Institute of Information Technology IIIT, Delhi\\ Okhla, Phase III, New Delhi-110020, India.} \email{[email protected]} \thanks{2010 \textit{Mathematics Subject Classification.} Primary 11M06, 11T24.\\ \textit{Keywords and phrases.} Dirichlet character; Dirichlet $L$-functions; Bessel functions; weighted divisor sums; Ramanujan's lost notebook; Vorono\"i summation formula. } \maketitle \begin{abstract} Inspired by two entries published on page 355 of Ramanujan's lost notebook, B. C. Berndt et al.\ \cite{MR3351542} presented Riesz sum identities for these entries by introducing twisted divisor sums. Later, S. Kim \cite{MR3541702} derived analogous results by replacing the twisted divisor sums with twisted sums of divisor functions. Recently, the authors of the present paper \cite{devika2023} deduced Cohen-type identities as well as Vorono\"i summation formulas associated with these twisted sums of divisor functions. The present paper derives equivalent versions of the results of that paper in terms of identities involving finite trigonometric sums and doubly infinite series. As an application, the authors provide an identity for $r_6(n)$, analogous to a famous result of Hardy, where $r_6(n)$ denotes the number of representations of the natural number $n$ as a sum of six squares. \end{abstract} \section{Introduction} The lost notebook of Ramanujan contains several beautiful identities. Some are intimately connected with the famous {\it circle} and {\it divisor} problems. Among these results, on page 355 of his lost notebook, we encounter the following two important identities involving a finite trigonometric sum and a doubly infinite series of Bessel functions. \begin{entry}\label{entry1} If $0<\theta<1$ and $x>0$, then \begin{align*} \sum_{n=1}^\infty F\left(\frac{x}{n} \right)\sin(2\pi n \theta)=\pi x \left(\frac{1}{2}-\theta\right)-\frac{\cot(\pi \theta)}{4} +\frac{\sqrt{x}}{2} \sum_{m=1}^\infty \sum_{n=0}^\infty \left\{ \frac{J_1(4\pi \sqrt{m(n+\theta) x} ) }{ \sqrt{m(n+\theta)} } - \frac{J_1(4\pi \sqrt{m(n+1-\theta) x} ) }{ \sqrt{m(n+1-\theta)} } \right\} . \end{align*} \end{entry} \begin{entry}\label{entry2} If $0<\theta<1$ and $x>0$, then \begin{align*} \sum_{n=1}^\infty F\left(\frac{x}{n} \right)\cos(2\pi n \theta)= \frac{1}{4}-x \log(2\sin(\pi \theta))+\frac{\sqrt{x} }{2} \sum_{m=1}^\infty \sum_{n=0}^\infty \left\{ \frac{I_1(4\pi \sqrt{m(n+\theta) x} ) }{ \sqrt{m(n+\theta)} } + \frac{I_1(4\pi \sqrt{m(n+1-\theta) x} ) }{ \sqrt{m(n+1-\theta)} } \right\} . \end{align*} \end{entry} Here \begin{align*} F(x)=\begin{cases} \lfloor x \rfloor \ \ \quad \text{if } x \text{ is not an integer},\\ x-\frac{1}{2}\ \ \quad \text{if } x \text{ is an integer}, \end{cases} \end{align*} and \begin{align}\label{bessel} I_\nu(z)=-Y_\nu(z)-\frac{2}{\pi } K_\nu(z), \end{align} where $J_\nu$ and $Y_\nu$ are the Bessel functions of the first and second kind and $ K_\nu(z)$ denotes the modified Bessel function of order $\nu$ \cite[p.~40, 64, 78]{MR1349110}. These identities have three interpretations.
The double series in Entries \ref{entry1} and \ref{entry2} can be interpreted as iterated series in two possible ways; they can also be interpreted in a third way, in which the products of the indices tend to infinity. As mentioned above, these entries have a strong connection with the classical Gauss circle problem and the Dirichlet divisor problem, which motivated B. C. Berndt et al. to study these types of identities; in the papers \cite{MR2221114, MR2871168,MR3019715} they offered proofs of Entry \ref{entry1} under all three interpretations, but of Entry \ref{entry2} under only two of these formulations. As an application of Entry \ref{entry1} in \cite{MR2221114}, they derived the following beautiful identity associated with $r_2(n)$, where $r_2(n)$ denotes the number of representations of $n$ as a sum of two squares. \begin{align}\label{newrep} \sideset{}{'}\sum_{0\leq n\leq x} r_2(n) =\pi x +2\sqrt{x}\sum_{n=0}^\infty\sum_{m=1}^\infty\left\{ \frac{J_1\left(4\pi \sqrt{m(n+\frac{1}{4})x } \ \right)}{\sqrt{m (n+\frac{1}{4})\ } }- \frac{J_1\left(4\pi \sqrt{m(n+\frac{3}{4})x } \ \right)}{\sqrt{m (n+\frac{3}{4})\ } }\right\}. \end{align} The prime $\prime$ on the summation sign on the left-hand side indicates that, if $x$ is an integer, the term corresponding to $n=x$ is counted with weight $1/2$. By substituting $\theta=1/4$ in Entry \ref{entry1}, the authors rediscovered Hardy's formula \cite{hardy1915expression}, \cite[p.~243-263]{MR0242628}. \begin{align}\label{hardy} \sideset{}{'} \sum_{0\leq n\leq x} r_2(n) =\pi x +\sum_{n=1}^\infty r_2(n)\left(\frac{x}{n}\right)^{1/2}J_1(2\pi \sqrt{nx}). \end{align} Equation \eqref{hardy} also implies \eqref{newrep} by appealing to Jacobi's formula \cite[p.~56, Theorem 3.2.1]{MR2246314}: \begin{align} r_2(n)=4\sum_{\substack{d|n \\ d\ odd} }(-1)^{(d-1)/2}. \end{align} In light of the fact that the identities \eqref{hardy} and \eqref{newrep} are equivalent, one can infer that Entry \ref{entry1} is a generalization of Hardy's result \eqref{hardy}. After Hardy, Dixon and Ferrar \cite{dixon1934some} in 1934 derived another such identity. They proved that, for $\Re(\nu)>0$ and $ x>0$, \begin{align}\label{ferrar} \sum_{n=0}^\infty r_2(n) n^{\nu/2} K_{\nu}(2\pi \sqrt{nx})=\frac{\Gamma(\nu+1)}{2\pi^{\nu+1}}x^{\frac{\nu}{2}}\sum_{n=0}^\infty \frac{r_2(n)}{(n+x)^{\nu+1}}. \end{align} Plugging $\nu=1/2$ into \eqref{ferrar} and using the property of the Bessel function \cite[p.~80, eq. 13]{MR1349110} \begin{align}\label{property} K_{\frac{1}{2}}(z)=\sqrt{\frac{\pi }{2z}}e^{-z}, \end{align} we obtain another result of Hardy \cite[eq. (2.12)]{hardy1915expression}: \begin{align}\label{hardysecond} \sum_{n=1}^\infty r_2(n)e^{-s\sqrt{n}}=\frac{2\pi }{s^2}-1+2\pi s\sum_{n=1}^\infty\frac{r_2(n)}{(s^2+4\pi^2n)^\frac{3}{2}}, \end{align} where $\Re s>0$. In his paper, Hardy \cite{hardy1915expression} used \eqref{hardysecond} to deduce a lower bound for the error term $P(x)$, which appears in the summatory function of $r_2(n)$. Later, K. Chandrasekharan and R. Narasimhan \cite[eq. (56)]{MR171761} obtained an interesting identity analogous to \eqref{hardysecond} for the Ramanujan tau function: for $\Re s>0$, \begin{align} \sum_{n=1}^\infty \tau(n)e^{-s\sqrt{n}}=2^{36}\pi^{23/2}\Gamma\left(\frac{25}{2}\right)\sum_{n=1}^\infty \frac{\tau(n)}{(s^2+16\pi^2n)^{25/2}}. \end{align} Analogous to Entry \ref{entry1}, Entry \ref{entry2} is related to the Dirichlet divisor problem, namely the estimation of the error term $\Delta(x)$ that appears in the summatory function of $d(n)=\sum_{d|n}1$.
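For the reader's convenience, we recall the standard normalization of this error term, stated here in the form consistent with \eqref{vor bessel} below (some authors omit the constant term $1/4$): \begin{align*} \Delta(x):=\sideset{}{'}\sum_{n\leq x} d(n) - x\log x - (2\gamma-1)x - \frac{1}{4}, \end{align*} where $\gamma$ denotes Euler's constant and the prime on the summation sign has the same meaning as before. For instance, $d(12)=6$, the divisors of $12$ being $1,2,3,4,6$ and $12$.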
Vorono\"i \cite{voronoi1904fonction} in 1904, expressed $\Delta(x)$ in terms of Bessel functions. \begin{align}\label{vor bessel} \sideset{}{'}\sum_{n\leq x} d(n) = x \log x+(2\gamma-1)x + \frac{1}{4}+\sum_{n=1}^\infty d(n ) \left(\frac{x}{n}\right)^{1/2}I_1(4\pi \sqrt{nx}), \end{align} where $I_1(z)$ is defined in \eqref{bessel}. Vorono\"i in fact employed \eqref{vor bessel} to prove $\Delta(x)=O(x^{1/3+\epsilon})$. After that, many number theorists worked with the above expression to improve the bound of $\Delta(x)$. The occurrence of $I_1(z)$ in \eqref{vor bessel} indicates that there must be some relation between Entry \ref{entry2} and \eqref{vor bessel}. Upon observing this fact, B. C. Berndt et al. obtained an identity equivalent to Entry \ref{entry2} in \cite{MR2221114} by introducing twisted divisor sum $d_\chi (n)$ defined by \begin{align*} d_\chi (n) = \sum_{d|n}\chi (d), \end{align*} where $\chi$ is a primitive Dirichlet character modulo $q$ and $\tau(\chi)=\sum_{h=1}^q\chi(q)e^{2\pi i h/q}$. Their identity reads as the following \begin{align*} \sideset{}{'}\sum_{n\leq x} d_\chi (n) =- \frac{x}{\tau({\Bar{\chi}})}\sum_{h=1}^{q-1}\Bar{\chi}(h)\log (2\sin(\pi h/q))+\frac{\sqrt{q}}{\tau({\Bar{\chi}})}\sum_{n=1}^\infty d_{\bar{\chi}} (n)\left( \frac{x}{n}\right)^{1/2}I_1(4\pi \sqrt{nx/q}), \end{align*} where $\chi$ is a non-principal, even primitive character modulo $q$. Hence \eqref{vor bessel} can be considered a character analogue of Entry \ref{entry2}. The authors in their follow-up papers \cite{MR2994091,MR3351542} generalized Ramanujan's identities by studying Riesz sums for twisted divisor sums. Later in 2017, S. Kim \cite{MR3541702} extended the definition of twisted sum into twisted sums of divisor functions. \begin{align}\label{twisted sum divisor} \sigma_{k,\chi}(n):=\sum_{d|n}d^k \chi(d), \ \ \ \ \bar{\sigma}_{k,\chi}(n):=\sum_{d|n}d^k \chi(n/d), \ \ \ \ \sigma_{\chi_1,\chi_2}(n):=\sum_{d|n}d^k \chi_1(d) \chi_2(n/d), \end{align} and derived a trigonometric analogue of Riesz sum identities for the twisted sums of divisor functions. As a corollary of the main results, the author offered a result which specializes in the Riesz sum identity for $r_6(n)$ where $r_6(n)$ denotes the number of representations of $n$ as a sum of six squares denoted by $r_6(n)$. The function $r_6(n)$ can be expressed as (see \cite[p.~ 63]{MR2246314} ) \begin{align}\label{berndt} r_6(n)=16\sum_{\substack{d|n \\ \frac{n}{d}\ odd } }(-1)^{ (n/d-1)/2 }d^2-4\sum_{\substack{d|n \\ d \ odd } }(-1)^{ (d-1)/2 }d^2. \end{align} Identities of the type \eqref{ferrar} have been drawing the attention of many number theorists. Popov \cite[equation (6)]{popov1935uber} extended \eqref{ferrar} for $r_k(n)$ where $r_k(n)$ denotes the representation of the natural number as a sum of k-squares. His formula is given by \begin{align*} \sum_{n=0}^\infty r_k(n) n^{\nu/2} K_{\nu}(2\pi \sqrt{n\beta})=\frac{\Gamma(\nu+\frac{k}{2})}{2\pi^{\nu+\frac{k}{2}}}\beta^{\frac{\nu}{2}}\sum_{n=0}^\infty \frac{r_k(n)}{(n+\beta)^{\nu+\frac{k}{2}}}. \end{align*} where $\Re{\sqrt{\beta}}$, $\Re{\nu}>0$. Identities associated with the $K$-Bessel function have been studied by many number theorists due to its importance. It appears in the Fourier expansion of the standard non-holomorphic Eisenstein series of weight $0$ on $SL(2, \mathbb{Z})$. 
Upon observing this fact, Cohen in 2010 \cite{MR2744771} established the following useful identity, \begin{align}\label{guinand} 4x^{\frac{1}{2}}\sum_{n=1}^{\infty} \frac{\sigma_{\nu}(n)}{n^{\nu/2}} K_{\nu/2}(2\pi n x)\ +\ \Lambda(\nu)(x^{(1-\nu)/2} - x^{(\nu-1)/2} )& = 4x^{-\frac{1}{2}}\sum_{n=1}^{\infty} \frac{\sigma_{\nu}(n)}{n^{\nu/2}} K_{\nu/2}\left( \frac{2 \pi n}{x}\right)\notag\\ &+\ \Lambda(-\nu)( x^{-(1+\nu)/2} - x^{(1+\nu)/2}), \end{align} where $\Lambda(s)= \pi^{-\frac{s}{2}}\Gamma\left( \frac{s}{2}\right)\zeta(s)$. He obtained several beautiful identities as an application of \eqref{guinand}. One of them is mentioned below. \begin{proposition}\label{Cohen-type} \cite[p.~62, Theorem 3.4]{MR2744771} For $\nu \notin \mathbb{Z}$ such that $\Re(\nu) \geq 0$ and any integer $N$ such that $N \geq \lfloor \frac{\Re(\nu)+1}{2}\rfloor$, we have \small \begin{align} \label{Cohen Identity} &8 \pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) =-\frac{ \Gamma(\nu) \zeta(\nu)}{(2\pi)^{\nu-1} } + \frac{\Gamma(1+\nu) \zeta(1+\nu)}{\pi^{\nu+1} 2^\nu x} + \left\{ \frac{\zeta(\nu)x^{\nu-1}}{\sin\left(\frac{\pi \nu}{2}\right)} + \frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N} \zeta(2j)\ \zeta(2j-\nu)x^{2j-1} \right.\notag\\&\left.\ \ \hspace{5cm}-\pi\frac{\zeta(\nu+1) x^{\nu}} {\cos(\frac{\pi\nu}{2}) } +\frac{2 }{ \sin \left(\frac{\pi \nu}{2}\right)}x^{2N+1}\sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-x^{\nu-2N}} { n^2-x^2 }\right) \right\}. \end{align} \end{proposition} Cohen's identity \eqref{Cohen Identity} proved to be very useful in deriving a Vorono\"i summation formula for the divisor function $\sigma_s(n)$. B. C. Berndt et al. \cite{MR3558223} employed \eqref{Cohen Identity} to obtain the following Vorono\"i summation formula. \begin{proposition}\label{vorlemma1} \cite[p.~841, Theorem 6.1]{MR3558223} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $-\frac{1}{2}< \Re{(\nu)}<\frac{1}{2}$. Then \begin{align*} & \sum_{\alpha<j <\beta} {\sigma}_{-\nu}(j)f(j) = \int_{\alpha} ^{ \beta }f(t) \left\{ \zeta(1-\nu) \ t^{-\nu} + \zeta(\nu+1) \ \right\} dt \notag\\ &+ 2\pi \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) (t)^{-\frac{\nu}{2}} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{nt}) - Y_{\nu}(4\pi \sqrt{nt})\right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}(4\pi \sqrt{nt}) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align*} \end{proposition} Recently, B. C. Berndt et al. \cite{MR4489312} studied a general version of \eqref{ferrar} for a certain class of arithmetical functions studied by K. Chandrasekharan and R. Narasimhan \cite{MR171761}. More precisely, they considered those arithmetical functions whose functional equation consists of only one gamma factor. However, the first author and Maji \cite{MR4570432} studied identities analogous to \eqref{ferrar} for the general divisor function defined by \begin{align*} \sigma_{z}^{(k)}(n)=\sum_{d^k|n} d^z. \end{align*} It is important to note that the Dirichlet series associated with $\sigma_{z}^{(k)}(n)$ does not fit into K. Chandrasekharan and R. Narasimhan's setting \cite{MR171761}. In particular, they derived several identities, including most of Cohen's results in \cite{MR2744771}. They offered the following identity as an immediate consequence of their main result.
\begin{proposition}\label{first paper} Let $a$ and $x$ be two positive real numbers and $k\geq 1$ be an odd integer. For $\Re(\nu)>0$, we have \begin{align*} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^\infty\sigma_k(n)n^{\frac{\nu}{2}}K_{\nu}(a\sqrt{nx})=\frac{(-1)^{\frac{k+1}{2}}}{2} \Gamma(\nu+k+1)\left( 2\pi \right)^{k+1}\sum_{n=1}^\infty \frac{ \sigma_{k}(n)}{\left(\frac{16\pi^2}{a^2}\frac{n}{x}+1\right)^{\nu+k+1} }+Q_{\nu}(x), \end{align*} where \begin{align*} & Q_{\nu}(x)=-\frac{a^{2k+2}\Gamma(\nu)\zeta(-k)}{2^{2k+4} }x^{k+1}+\frac{a^{2k}\Gamma(1+\nu)\zeta(1-k)}{2^{2k+1}}x^{k}+\frac{1}{2}\Gamma(1+k+\nu)\Gamma(1+k)\zeta(1+k) . \end{align*} \end{proposition} The above identity was also obtained by B. C. Berndt et al. in \cite[equation (6.11)]{MR4489312} as a particular case of their main result. Propositions \ref{Cohen-type}, \ref{vorlemma1}, and \ref{first paper} can be considered to be the identities corresponding to the character modulo $1$. The first and second authors extended these results to characters modulo $q$ in their forthcoming paper \cite{devika2023}. In that paper \cite{devika2023}, the authors established Cohen-type identities, that is, identities of the form \eqref{Cohen Identity}, for the twisted sums of divisor functions defined in \eqref{twisted sum divisor}. In the same paper, they derived Vorono\"i summation formulas by appealing to their Cohen-type identities. The present paper focuses on deriving trigonometric analogues of our previous results in \cite{devika2023}. More precisely, we consider the identities associated with the $K$-Bessel function and the following weighted sums of divisor functions. \begin{align}\label{weighted sums} \sum_{d|n}d^{z} \sin \left( 2\pi d \theta \right), \ \ \sum_{d|n}d^z \sin \left(\frac{2\pi n \theta}{d}\right), \ \ \sum_{d|n}d^z \cos \left( 2\pi d \theta \right), \ \ \sum_{d|n}d^z \cos \left(\frac{2\pi n \theta}{d}\right), \end{align} etc. We prove that these identities are equivalent to our previous results in \cite{devika2023}. Moreover, we present formulas for the following two infinite series: \begin{align}\label{r66} \sum_{n=1}^\infty r_6(n)n^{\nu/2} K_{\nu}(a \sqrt{nx}),\ \ \ \ \ \ \ \sum_{n=1}^{\infty} r_6(n) e^{-4\pi \sqrt{nx}}. \end{align} We also derive, from our two main results, an identity that gives rise to \eqref{r66}. The paper is organized as follows: In the next section, we state the main results. Section \ref{cohen identities...} and Section \ref{voronoi identities...} provide special kinds of Cohen-type identities and Vorono\"i summation formulas, respectively. In Section \ref{preliminary}, we state some results which are needed for our proofs. Sections \ref{proof of integer nu}, \ref{proof of cohen identities...} and \ref{proof of voronoi...} are devoted to the proofs of the identities stated in Sections \ref{integer results}, \ref{cohen identities...} and \ref{voronoi identities...}, respectively. \section{Main Results }\label{integer results} As mentioned above, our results are trigonometric analogues, or equivalent versions, of the identities associated with twisted sums of divisor functions obtained in \cite{devika2023}. Therefore, we state each of those identities immediately after its corresponding equivalent version involving a finite trigonometric sum and the $K$-Bessel function.\\ Throughout this section, we take $z=k$ in \eqref{weighted sums}, where $k \in \mathbb{Z}_+$. Then we have \begin{theorem}\label{odd1_based} Let $a$ and $x$ be two positive real numbers.
Let $ 0<\theta<1$ and $k\geq 0 $ be an even integer. Then for any $\Re{(\nu)}>0$ we have \begin{align}\label{p7} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( 2\pi d \theta \right)\nonumber \\ &= -\frac{(-1)^{\frac{k}{2}}a^{2k+2}k!}{2^{2k+4}(2\pi)^{k+1}} \Gamma(\upsilon) \left(\zeta(1+k,\theta) - \zeta(1+k, 1-\theta)\right) x^{k+1} +\delta_k \frac{\pi \Gamma(1+\nu)}{8} \left(\zeta(0, \theta) - \zeta(0, 1-\theta)\right) \notag\\ &+\frac{(-1)^{\frac{k}{2}}}{4} (2 \pi )^{k+1} \sum_{d=1}^{\infty}d^k \sum_{ m=0}^{\infty}\left\{ \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2d}{a^2x}(m+\theta))^{1+\nu+k} } - \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2d}{a^2x}(m+1-\theta))^{1+\nu+k} } \right\} , \end{align} where $\delta_k$ is given by \begin{align}\label{del_k0} \delta_k=\begin{cases} 1\quad \ \ \text{if } k=0 ,\\ 0\quad \ \ \text{else } . \end{cases} \end{align} \end{theorem} We remark that Theorem \ref{odd1_based} is equivalent to \cite[Theorem 2.1]{devika2023}. But for the sake of completeness, we would like to mention it here. \begin{theorem}\label{M1} Let $k\geq 0$ be an even integer, and $\chi$ be an odd primitive Dirichlet character modulo $q$. Then for any $\Re{(\nu)}>0$ we have\begin{align}\label{M11} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^\infty\sigma_{k,\chi}(n)n^{\frac{\nu}{2}}K_\nu(a\sqrt{nx})=&\frac{(-1)^{\frac{k}{2}}i k!q^ka^{2k+2}}{2^{2k+3}(2\pi)^{k+1}} \Gamma(\nu)\tau(\chi) L(1+k,\Bar{\chi})\ x^{k+1}+\delta_{k}\frac{\Gamma(1+\nu)L(1,\chi)}{4} \notag\\ &-\frac{(-1)^{\frac{k}{2}}i}{2q} \tau(\chi) (2 \pi )^{k+1}\sum_{n=1}^\infty \Bar{\sigma}_{k,\Bar{\chi}}(n) \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } , \end{align} where $\delta_k$ is defined in \eqref{del_k0}. \end{theorem} \begin{theorem}\label{odd2_based} Let $a$ and $x$ be two positive real numbers. Let $ 0<\theta<1$ and $k\geq 2 $ be an even integer. Then for any $\Re{(\nu)}>0$ we have \begin{align}\label{r7} & \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left(\frac{2\pi n \theta}{d}\right)=\frac{(-1)^\frac{k}{2}2^k\pi^{k+1}}{4}\Gamma(\upsilon+k+1) \left(\zeta(-k,\theta) - \zeta(-k, 1-\theta)\right)\notag \\& +\frac{(-1)^{\frac{k}{2}}}{4} (2 \pi )^{k+1} \Gamma(\upsilon+k+1)\sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left\{ \frac{ (m+\theta)^k }{(1+\frac{16\pi^2r}{a^2x}(m+\theta))^{1+\nu+k} } - \frac{ (m+1-\theta)^k }{(1+\frac{16\pi^2r}{a^2x}(m+1-\theta))^{1+\nu+k} } \right\} . \end{align}\end{theorem} Analogous to Theorem \ref{odd1_based}, one can demonstrate that \cite[Theorem 2.4]{devika2023} is an equivalent version of Theorem \ref{odd2_based}. We will state the result here. \begin{theorem}\label{M2} Let $k\geq 2$ be an even integer, and $\chi$ be an odd primitive Dirichlet character modulo $q$. Then we have\begin{align}\label{ll1} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^\infty\Bar{\sigma}_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})=&\frac{k!}{2}\Gamma(\upsilon+k+1)L(1+k,\chi) \notag\\ &- \frac{(-1)^{\frac{k}{2}}i}{2}\tau(\chi) \left(\frac{ 2\pi }{q}\right)^{k+1}\sum_{n=1}^\infty\sigma_{k,\Bar{\chi}}(n)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }. \end{align} \end{theorem} Theorem \ref{odd1_based}, together with Theorem \ref{odd2_based}, gives rise to the following beautiful identity associated with $r_6(n)$. 
\begin{corollary}\label{cor1r} Let $a$ and $x$ be two positive real numbers. Let $ 0<\theta<1$. Then for any $\Re{(\nu)}>0$ we have \begin{align} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+3} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^2 \left\{ 16\sin \left(\frac{2\pi n \theta}{d}\right) -4\sin \left( 2\pi d \theta \right)\right\} \notag\\ &=\frac{16}{3} \pi^3\Gamma(\nu+3) (\theta-3\theta^2+2\theta^3 ) -\frac{a^6}{256} \Gamma(\nu)(\cot(\pi \theta)+\cot^3(\pi \theta))x^3\notag\\ &+ (2 \pi )^3\Gamma(\nu+3)\sum_{n=1}^\infty\sum_{m=0}^\infty \left\{ \frac{ n^2-4(m+\theta)^2 }{(1+\frac{16\pi^2n}{a^2x}(m+\theta))^{\nu+3} } - \frac{ n^2-4(m+1-\theta)^2 }{(1+\frac{16\pi^2n}{a^2x}(m+1-\theta))^{\nu+3} } \right\}. \end{align} \end{corollary} One can also obtain an interesting identity analogous to Hardy's result in \eqref{ferrar} by substituting $\theta=1/4$. \begin{corollary}\label{cor2r} Let $a$ and $x$ be two positive real numbers. Then for any $\Re{(\nu)}>0$ we have \begin{align} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+3} \sum_{n=1}^{\infty} r_6(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) = \frac{\pi^3}{2} \Gamma(\nu+3) -\frac{a^6}{{128} } \Gamma(\nu) x^{3}\notag\\ &+ (2 \pi )^3\Gamma(\nu+3)\sum_{n=1}^\infty\sum_{m=0}^\infty \left\{ \frac{ n^2-4(m+1/4)^2 }{(1+\frac{16\pi^2n}{a^2x}(m+1/4))^{\nu+3} } - \frac{ n^2-4(m+3/4)^2 }{(1+\frac{16\pi^2n}{a^2x}(m+3/4))^{\nu+3} } \right\}. \end{align} \end{corollary} In particular, $\nu=1/2,a=4\pi $ yields an identity analogous to \eqref{hardysecond}. \begin{corollary}\label{cor3r} For $x>0$, we have \begin{align} & \sum_{n=1}^{\infty} r_6(n) e^{-4\pi \sqrt{nx}} = \frac{15}{512 \pi^3} x^{-3} -1 + \frac{15}{32\pi^3} x^{-3}\sum_{n=1}^\infty\sum_{m=0}^\infty \left\{ \frac{ n^2-4(m+1/4)^2 }{(1+\frac{ n}{ x}(m+1/4))^{ \frac{7}{2}} } - \frac{ n^2-4(m+3/4)^2 }{(1+\frac{ n}{ x}(m+3/4))^{\frac{7}{2}} } \right\}. \end{align} \end{corollary} The corresponding cosine version of the above identities are listed below. \begin{theorem}\label{even1_based} Let $a$ and $x$ be two positive real numbers. Let $ 0<\theta<1$ and $k\geq 1 $ be an odd integer. Then for any $\Re{(\nu)}>0$ we have \begin{align}\label{1290} & \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left( 2\pi d \theta \right) = \frac{(-1)^{\frac{k-1}{2}}a^{2k+2}k! }{2^{2k+4}(2\pi)^{k+1} } \Gamma(\upsilon)\left\{ \zeta(k+1,\theta)+\zeta(k+1,1-\theta) \right\}x^{k+1} \notag\\ & +\frac{(-1)^{\frac{k+1}{2}}(2 \pi )^{k+1} \Gamma(\upsilon+k+1)}{4} \sum_{d=1}^\infty d^k \sum_{m=0 }^\infty \left\{ \frac{1}{\left(\frac{16\pi^2}{a^2}\frac{d(m+\theta)}{x}+1\right)^{\upsilon+k+1} } +\frac{1}{\left(\frac{16\pi^2}{a^2}\frac{d(m+1-\theta)}{x}+1\right)^{\upsilon+k+1} } \right\}\notag\\ & \ \ -\delta_{k,1}\ \frac{ a^2 }{16}\Gamma(1+\nu)x , \end{align} where $\delta_{k,1}$ is given by \begin{align}\label{del_k} \delta_{k,1}=\begin{cases} 1\quad \ \ \text{if } k=1 ,\\ 0\quad \ \ \text{else } . \end{cases} \end{align} \end{theorem} Theorem \ref{even1_based} is proved using the following Theorem \cite[Theorem 2.6]{devika2023} and Proposition \ref{first paper}. But the following Theorem \cite[Theorem 2.6]{devika2023} can be deduced directly using Theorem \ref{even1_based}. 
\begin{theorem}\label{thmeven1} Let $k$ be a positive odd integer and let $\chi$ be a non-principal even primitive Dirichlet character modulo q, then we have \begin{align}\label{thm3} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^\infty\sigma_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})=&\frac{(-1)^{\frac{k-1}{2}}\Gamma(k+1)a^{2k+2}q^k}{2^{2k+3}(2\pi)^{k+1}} \tau(\chi)\Gamma(\upsilon)L(1+k,\Bar{\chi}) x^{k+1}\notag\\ &+\frac{(-1)^{\frac{k+1}{2}}}{2q}\tau(\chi) (2 \pi )^{k+1}\sum_{n=1}^\infty \Bar{\sigma}_{k,\Bar{\chi}}(n) \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }. \end{align} \end{theorem} \begin{theorem}\label{even2_based} Let $a$ and $x$ be two positive real numbers. Let $ 0<\theta<1$ and $k\geq 1 $ be an odd integer. Then for any $\Re{(\nu)}>0$ we have \begin{align*} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left(\frac{ 2\pi n \theta }{d}\right) = \frac{(-1)^{\frac{k+1}{2}}\left( 2\pi \right)^{k+1} }{8 }\Gamma(\upsilon+k+1) \left\{ \zeta(-k,\theta)+\zeta(-k,1-\theta) \right\} \\ &+\frac{(-1)^{\frac{k+1}{2}}(2 \pi )^{k+1}}{4 } \Gamma(\upsilon+k+1) \sum_{r=1}^\infty \sum_{m=0 }^\infty \left\{ \frac{(m+ \theta)^k}{\left(\frac{16\pi^2}{a^2}\frac{r(m+\theta)}{x}+1\right)^{\upsilon+k+1} } +\frac{(m+1-\theta)^k}{\left(\frac{16\pi^2}{a^2}\frac{r(m+1-\theta)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\&-\frac{a^{2k+2} }{2^{2k+4} } \zeta(-k)\Gamma(\nu) x^{k+1}. \end{align*} \end{theorem} Similar to Theorem \ref{even1_based}, Theorem \ref{even2_based} is based on the following Theorem \cite[Theorem 2.9]{devika2023} and Proposition \ref{first paper}. But the following result can be proved directly using Theorem \ref{even2_based}. \begin{theorem}\label{even2} Let k be a positive odd integer and $\chi$ be a non-principal even primitive Dirichlet character modulo q. Then we have \begin{align} \label{thm4} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1}\sum_{n=1}^\infty\Bar{\sigma}_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})=&\frac{\Gamma(k+1)}{2}\Gamma(\upsilon+k+1)L(1+k,\chi) \notag\\ &+\frac{(-1)^{\frac{k+1}{2}}}{2}\tau(\chi) \left(\frac{ 2\pi }{q}\right)^{k+1}\sum_{n=1}^\infty\sigma_{k,\Bar{\chi}}(n)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }. \end{align} \end{theorem} Next, we state the identities involving two trigonometric functions, which are the following: \begin{theorem}\label{botheven_odd1_based} Let $a$ and $x$ be two positive real numbers and $ 0<\theta,\psi<1$. If $k\geq 1 $ is an odd integer. Then for any $\Re{(\nu)}>0$ we have \begin{align}\label{rr3} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( 2\pi d \theta\right)\sin \left( \frac{2\pi n \psi}{d}\right)\notag\\ =&-\frac{(-1)^{\frac{k+1}{2}}}{8 \left( 2\pi \right)^{-k-1}} \Gamma(\upsilon+k+1) \sum_{m,n\geq 0} \left\{ \frac{ (n+\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+\theta)}{x}+ 1 \right)^{\upsilon+k+1}} -\frac{ (n+1- \psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+\theta)}{x}+ 1 \right)^{\upsilon+k+1}} \right.\notag\\&\left.\ \ - \frac{ (n+\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+1-\theta)}{x}+ 1 \right)^{\upsilon+k+1}} + \frac{ (n+1-\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-\psi)(m+1-\theta)}{x}+ 1 \right)^{\upsilon+k+1}} \right\} . 
\end{align} \end{theorem} \begin{theorem}\label{botheven_odd2_based} Let $a$ and $x$ be two positive real numbers and $ 0<\theta,\psi<1$. If $k\geq 1 $ is an odd integer. Then for any $\Re{(\nu)}>0$ we have \begin{align*} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left( 2\pi d \theta\right)\cos \left( \frac{2\pi n \psi}{d}\right) = \frac{(-1)^{\frac{k-1}{2}}a^{2k+2}k! }{2^{2k+4}(2\pi)^{k+1} } \Gamma(\upsilon)\left\{ \zeta(k+1, \theta) \right.\notag\\&\left.\ +\zeta(k+1,1- \theta) \right\} x^{k+1}+\frac{(-1)^{\frac{k+1}{2}}}{8 } \left( {2\pi } \right)^{k+1} \Gamma(\nu+k+1)\sum_{n,m\geq 0}\left\{ \frac{(n+\psi)^k }{\left(\frac{16\pi^2}{a^2}\frac{(m+\theta)(n+\psi)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ +\frac{(n+1-\psi)^k }{\left(\frac{16\pi^2}{a^2}\frac{(m+\theta)(n+1-\psi)}{x}+1\right)^{\upsilon+k+1} } +\frac{(n+\psi)^k }{\left(\frac{16\pi^2}{a^2}\frac{(m+1-\theta)(n+\psi)}{x}+1\right)^{\upsilon+k+1} } +\frac{(n+1-\psi)^k }{\left(\frac{16\pi^2}{a^2}\frac{(m+1-\theta)(n+1-\psi)}{x}+1\right)^{\upsilon+k+1} } \right\}. \end{align*} \end{theorem} Our next result \cite[Theorem 2.11]{devika2023} is the equivalent version of Theorem \ref{botheven_odd1_based}. But to prove Theorem \ref{botheven_odd2_based}, we need our next result \cite[Theorem 2.11]{devika2023} along with Theorem \ref{even1_based} and Theorem \ref{even2_based} and Proposition \ref{first paper}. \begin{theorem}\label{botheven_odd} Let k be a positive odd integer. Let $\chi_1$ and $\chi_2$ be primitive characters modulo p and q, respectively, such that either both are non-principal even characters or both are odd characters. Then we have \begin{align}\label{THM5} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})=&\frac{(-1)^{\frac{k+1}{2}}}{2p} \left(\frac{2\pi }{q}\right)^{k+1} \tau(\chi_1)\tau(\chi_2) \sum_{n=1}^\infty \sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} }. \end{align} \end{theorem} Our next results concern both sine and cosine functions. \begin{theorem}\label{even-odd1_based} Let $a$ and $x$ be two positive real numbers and $ 0<\theta,\psi<1$. If $k\geq 0$ is an even integer. Then for any $\Re{(\nu)}>0$ we have \begin{align*} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left( 2\pi d \theta\right)\sin \left( \frac{2\pi n \psi}{d}\right) \\&=\frac{(-1)^{\frac{k}{2}}\left( 2\pi \right)^{k+1}}{8 } \Gamma(\upsilon+k+1) \sum_{m,n\geq 0}^\infty \left\{\frac{(n+\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+\theta)}{x}+1\right)^{\upsilon+k+1} }-\frac{(n+1-\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-\psi)(m+\theta)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{(n+1-\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-\psi)(m+1-\theta)}{x}+1\right)^{\upsilon+k+1} } +\frac{(n+\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+1-\theta)}{x}+1\right)^{\upsilon+k+1} } \right\} . \notag \\ \end{align*} \end{theorem} \begin{theorem}\label{even-odd2_based} Let $a$ and $x$ be two positive real numbers and $ 0<\theta,\psi<1$. If $k\geq 0$ is an even integer. 
Then for any $\Re{(\nu)}>0$ we have \begin{align*} &\left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( 2\pi d \theta\right)\cos \left( \frac{2\pi n \psi}{d}\right) =-\frac{(-1)^{\frac{k}{2}}a^{2k+2}k!}{ 2^{2k+4}(2\pi)^{k+1}} \Gamma(\upsilon) \left(\zeta(1+k,\theta) \right.\notag\\&\left.\ - \zeta(1+k, 1-\theta)\right)x^{k+1} +\frac{(-1)^{\frac{k}{2}}\left( 2\pi \right)^{k+1}}{8 } \Gamma(\upsilon+k+1) \sum_{m,n\geq 0}^\infty \left\{\frac{(n+\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+\theta)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ +\frac{(n+1-\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-\psi)(m+\theta)}{x}+1\right)^{\upsilon+k+1} } -\frac{(n+1-\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-\psi)(m+1-\theta)}{x}+1\right)^{\upsilon+k+1} } -\ \frac{(n+\psi)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+\psi)(m+1-\theta)}{x}+1\right)^{\upsilon+k+1} } \right\}. \end{align*} \end{theorem} To prove Theorem \ref{even-odd1_based}, one requires the following Theorem \cite[Theorem 2.17]{devika2023} and Theorem \ref{odd2_based}. Theorem \ref{even-odd2_based} is based on the following Theorem \cite[Theorem 2.17]{devika2023} and Theorem \ref{odd1_based}. Conversely, Theorem \ref{even-odd1_based} and Theorem \ref{even-odd2_based} imply the following theorem independently. \begin{theorem}\label{even-odd} Let $k\geq 0$ be an even integer. Let $\chi_1$ and $\chi_2$ be primitive characters modulo $p$ and $q $, respectively, such that one of them is a non-principal even character and the other is an odd character. Then for $\Re{(\nu)}>0$ we have \begin{align}\label{THM6} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+k+1}\sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})=\frac{(-1)^{\frac{k}{2}}}{2ip} \left(\frac{2\pi }{q}\right)^{k+1} \tau(\chi_1)\tau(\chi_2) \sum_{n=1}^\infty \sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} }. \end{align} \end{theorem} \begin{remark} It is useful to mention that we have obtained Theorem \ref{M1}, Theorem \ref{M2}, Theorem \ref{thmeven1}, Theorem \ref{even2}, Theorem \ref{botheven_odd} and Theorem \ref{even-odd} in our previous paper \cite{devika2023}, without using any known identities. The proofs were entirely based on analytic techniques. 
\end{remark} \section{Cohen Type Identity}\label{cohen identities...} \begin{theorem}\label{oddcohen based} Let $x>0$, $ 0<\theta<1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8 \pi x^{\nu/2} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( 2\pi d \theta \right)= \frac{1 }{\cos\left(\frac{\pi \nu}{2}\right)} \zeta(\nu+1) \left(\zeta(1,\theta) - \zeta(1, 1-\theta)\right)x^\nu\\ & -\frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,\theta) - \zeta(1-\nu, 1-\theta)\right) +\frac{1 }{ 2 x\cos\left(\frac{\pi \nu}{2}\right)} \left(\zeta(-\nu,\theta) - \zeta(-\nu, 1-\theta)\right)\\ & -\frac{1 }{ \cos\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j) \left(\zeta(2j-\nu,\theta) - \zeta(2j-\nu, 1-\theta)\right)x^{2j-1}\\ & -\frac{1 }{ \cos\left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{d=1}^{\infty}d^{-\nu-1 } \sum_{ m=0}^{\infty}\left\{ \frac{ \left( d(m+\theta) \right)^{\nu+1-2N}- x ^{\nu+1-2N} }{ (m+\theta)\left(d^2(m+\theta)^2-x^2 \right)} - \frac{ \left( d(m+1-\theta) \right)^{\nu+1-2N}- x ^{\nu+1-2N} }{ (m+1-\theta)\left(d^2(m+1-\theta)^2-x^2 \right)} \right\}. \end{align*} \end{theorem} The equivalent version of Theorem \ref{oddcohen based} is the following \cite[Theorem 3.7]{devika2023}. \begin{theorem}\label{oddcohen} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0 .$ Let $\chi$ be an odd primitive character modulo $q$. Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align}\label{THM7} & 8\pi x^{\nu/2} \sum_{n=1}^{\infty} \sigma_{-\nu, {\chi}}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) =-\frac{ \Gamma(\nu) L(\nu, {\chi})}{(2\pi)^{\nu-1} } + \frac{2\Gamma(1+\nu) L(1+\nu, {\chi})}{(2\pi)^{\nu+1} }x^{-1} + \frac{iq^{1-\nu} }{\tau(\chi)} \left\{ \frac{2\zeta(\nu+1)L(1,\bar{\chi})(qx)^{\nu}} {\cos(\frac{\pi\nu}{2}) } \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ - \frac{2}{ \cos \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N} \zeta(2j)\ L(2j-\nu, \bar{\chi})(qx)^{2j-1} -\frac{2 }{ \cos\left(\frac{\pi \nu}{2}\right) }(qx)^{2N+1}\sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi}}(n) \frac{ \left( n^{\nu+1-2N}-(qx)^{\nu+1-2N}\right)}{ n\ (n^2-(qx)^2)} \right\}. 
\end{align} \end{theorem} \begin{theorem}\label{oddcohen2 based} Let $x>0$, $ 0<\theta<1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8 \pi x^{\nu/2} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi n \theta}{d} \right)= \frac{2}{(2\pi)^\nu } \Gamma(\nu) \zeta(\nu) \left(\zeta(1, \theta) - \zeta(1, 1-\theta)\right)\notag\\ + & \frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} x^\nu \left(\zeta(1+\nu, \theta) - \zeta(1+\nu, 1-\theta)\right) +\frac{x^{\nu-1} }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \left(\delta_{(0,1)}^{(\nu)} +1 \right) \left(\zeta(\nu,\theta) - \zeta(\nu, 1-\theta)\right)\notag\\ + & \frac{1 }{ \cos\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j+1-\nu) \left(\zeta(2j+1,\theta) - \zeta(2j+1 , 1-\theta)\right)x^{2j} \notag\\ +& \frac{x^{2N} }{ \cos\left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty \left\{ (m+\theta)^{-\nu} \frac{ (r(m+\theta))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+\theta)^2-x^2} - (m+1-\theta)^{-\nu} \frac{ (r(m+1-\theta))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+1-\theta)^2-x^2} \right\}, \end{align*} where $\delta_{(0,1)}^{(\nu)}$ is defined as \begin{align} \delta_{(0,1)}^{(\nu)}=\begin{cases} 1\quad &\ \ \text{if }\ \Re{(\nu)} \in (0,1),\\ 0\quad &\ \ \text{else }. \end{cases}\label{deltasymbol} \end{align} \end{theorem} Analogous to Theorem \ref{oddcohen based}, one can prove that the equivalent version of Theorem \ref{oddcohen2 based} is \cite[Theorem 3.10]{devika2023}. Next, we state the corresponding cosine version of the above identities. \begin{theorem} \label{evencohen based} Let $x>0$, $ 0<\theta<1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8 \pi x^{\nu/2} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left( 2\pi d \theta \right)= -\frac{\pi }{\cos\left(\frac{\pi \nu}{2}\right)} \zeta(\nu+1) x^\nu\\ & -\frac{\pi }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,\theta) + \zeta(1-\nu, 1-\theta)\right) -\frac{1 }{ 2 x\sin\left(\frac{\pi \nu}{2}\right)} \left(\zeta(-\nu,\theta) + \zeta(-\nu, 1-\theta)\right)\\ & +\frac{1 }{ \sin\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j) \left(\zeta(2j-\nu,\theta) + \zeta(2j-\nu, 1-\theta)\right)x^{2j-1}\\ & +\frac{1 }{ \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{d=1}^{\infty}d^{-\nu } \sum_{ m=0}^{\infty}\left\{ \frac{ \left( d(m+\theta) \right)^{\nu-2N}- x^{\nu-2N} }{ \left(d^2(m+\theta)^2-x^2 \right)} - \frac{ \left( d(m+1-\theta) \right)^{\nu-2N}- x^{\nu-2N} }{ \left(d^2(m+1-\theta)^2-x^2 \right)} \right\}. \end{align*} \end{theorem} For deriving Theorem \ref{evencohen based}, one needs to use \cite[Theorem 3.1]{devika2023} and Proposition \ref{Cohen-type}. Conversely, Theorem \ref{evencohen based} will imply \cite[Theorem 3.1]{devika2023}. 
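Before proceeding, it may help to indicate the mechanism underlying such equivalences. The following is the classical Gauss-sum expansion of a primitive character, recorded in our notation for orientation only; it is a standard fact and not a result quoted from \cite{devika2023}. If $\chi$ is a primitive character modulo $q$, then for every integer $d$, \begin{align*} \chi(d)\, \tau(\Bar{\chi}) = \sum_{h=1}^{q}\Bar{\chi}(h)\, e^{2\pi i d h/q}, \end{align*} and pairing $h$ with $q-h$ gives \begin{align*} \chi(d) = \frac{i}{\tau(\Bar{\chi})}\sum_{h=1}^{q-1}\Bar{\chi}(h)\, \sin\left(\frac{2\pi d h}{q}\right) \ \ (\chi \text{ odd}), \qquad \chi(d) = \frac{1}{\tau(\Bar{\chi})}\sum_{h=1}^{q-1}\Bar{\chi}(h)\, \cos\left(\frac{2\pi d h}{q}\right) \ \ (\chi \text{ even}). \end{align*} Summing over the divisors of $n$ therefore expresses the character-twisted divisor sums as finite linear combinations, over $h=1,\dots,q-1$, of the trigonometric divisor sums considered in this paper with $\theta=h/q$; this is presumably the basic link behind the equivalences asserted above.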
\begin{theorem} \label{evencohen2 based} Let $x>0$, $ 0<\theta<1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8 \pi x^{\nu/2} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left( \frac{2\pi n \theta}{d} \right) = -\frac{ \Gamma(\nu) \zeta(\nu )}{(2\pi)^{\nu-1} } \notag\\ &+\frac{x^{\nu-1}}{2\sin \left(\frac{\pi \nu}{2} \right)} \left(\delta_{(0,1)}^{(\nu)} +1 \right)\left\{ \zeta(\nu, \theta )+ \zeta(\nu, 1- \theta) \right\} - \frac{x^{\nu-1}}{\phi(q) \sin \left(\frac{\pi \nu}{2} \right)} (q^\nu-1) \zeta(\nu) \delta_{(0,1)}^{(\nu)} \notag\\ & -\frac{\pi \ x^\nu}{2\cos \left(\frac{\pi \nu}{2} \right)} \left\{ \zeta(1+\nu, \theta )+ \zeta(1+\nu, 1- \theta ) \right\} +\frac{1}{\sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j-\nu) x^{2j-1} \left\{ \zeta(2j, \theta )+ \zeta(2j , 1- \theta ) \right\}\notag\\ &+\frac{ x^{2N+1} }{ \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty \left\{ (m+\theta)^{-\nu} \frac{ (r(m+\theta))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+\theta)^2-x^2} + (m+1-\theta)^{-\nu} \frac{ (r(m+1-\theta))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+1-\theta)^2-x^2} \right\}, \end{align*} where $\delta_{(0,1)}^{(\nu)} $ is defined in \eqref{deltasymbol}. \end{theorem} Analogous to Theorem \ref{evencohen based}, one can prove Theorem \ref{evencohen2 based} using \cite[Theorem 3.4]{devika2023} and Proposition \ref{Cohen-type}. Conversely, Theorem \ref{evencohen2 based} directly implies the identity \cite[Theorem 3.4]{devika2023}. Now, we state the identities involving two trigonometric functions: \begin{theorem} \label{cohen2 even-odd1based} Let $x>0$, $ 0<\theta,\psi <1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} 8\pi x^{\frac{\nu}{2}}&\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( 2\pi d \theta\right)\sin \left( \frac{2\pi n \psi}{d}\right)\notag\\ &=\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu, \theta) - \zeta(1-\nu, 1-\theta)\right) \left(\zeta(1, \psi) - \zeta(1, 1-\psi)\right)\notag\\ &- \frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)}x^{\nu} \left(\zeta(1, \theta) - \zeta(1 , 1-\theta)\right) \left(\zeta(\nu+1, \psi) - \zeta(\nu+1, 1-\psi)\right)\notag\\ &+\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N x^{2j}\left(\zeta(2j+1-\nu, \theta) - \zeta(2j+1-\nu, 1-\theta)\right) \left(\zeta(2j+1, \psi) - \zeta(2j+1, 1-\psi)\right)\notag\\ &+\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)}x^{2N}\sum_{m,n\geq 0}^{\infty} \left\{ \frac{(n+\psi)^{-\nu-1} }{(m+\theta)}\left( \frac{((n+\psi)(m+\theta))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+1-\psi)^{-\nu-1} }{(m+\theta)}\left( \frac{((n+1-\psi)(m+\theta))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+1-\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+\psi)^{-\nu-1} }{(m+1-\theta)}\left( \frac{((n+\psi)(m+1-\theta))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+\psi)(m+1-\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ \frac{(n+1-\psi)^{-\nu-1} }{(m+1-\theta)}\left( \frac{((n+1-\psi)(m+1-\theta))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+1-\psi)(m+1-\theta))^2-x^2 } \right) \right\}. 
\end{align*} \end{theorem} \begin{theorem} \label{cohen2 even-odd2based} Let $x>0$, $ 0<\theta,\psi <1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left( 2\pi d \theta\right)\cos \left( \frac{2\pi n \psi}{d}\right) \notag\\ &=-\frac{\pi \ x^\nu}{2\cos \left(\frac{\pi \nu}{2} \right)} \left\{ \zeta(1+\nu, \psi)+ \zeta(1+\nu, 1- \psi ) \right\} - \frac{\pi }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,\theta) + \zeta(1-\nu, 1-\theta)\right) \notag\\ &+\frac{1}{2 \sin\left(\frac{\pi \nu}{2}\right) }\sum_{j=1}^N x^{2j-1} \left\{\zeta(2j,\psi)+\zeta(2j,1-\psi) \right\} \left\{\zeta(2j-\nu, \theta) + \zeta(2j-\nu, 1-\theta) \right\} \notag\\ &+\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)}x^{2N+1}\sum_{m,n\geq 0}^{\infty} \left\{ {(n+\psi)^{-\nu} } \left( \frac{((n+\psi)(m+\theta))^{\nu-2N}-x^{\nu-2N}}{ ((n+\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ {(n+1-\psi)^{-\nu} } \left( \frac{((n+1-\psi)(m+\theta))^{\nu-2N}-x^{\nu-2N}}{ ((n+1-\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ {(n+\psi)^{-\nu} } \left( \frac{((n+\psi)(m+1-\theta))^{\nu-2N}-x^{\nu-2N}}{ ((n+\psi)(m+1-\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ {(n+1-\psi)^{-\nu} } \left( \frac{((n+1-\psi)(m+1-\theta))^{\nu-2N}-x^{\nu-2N}}{ ((n+1-\psi)(m+1-\theta))^2-x^2 } \right) \right\}. \end{align*} \end{theorem} The equivalent version of the Theorem \ref{cohen2 even-odd1based} is \cite[Theorem 3.15]{devika2023}. To prove Theorem \ref{cohen2 even-odd2based}, one requires the Theorem \cite[Theorem 3.13]{devika2023}, Theorem \ref{evencohen based}, Theorem \ref{evencohen2 based} and Proposition \ref{Cohen-type}. But Theorem \ref{cohen2 even-odd2based} imply the theorem \cite[Theorem 3.13]{devika2023} independently. 
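For the identities involving two trigonometric factors, the same expansion is applied to each character separately. We record the resulting two-character version, again only for orientation and in our notation; it is a standard consequence of the Gauss-sum expansion displayed earlier and not a statement taken from \cite{devika2023}. If $\chi_1$ and $\chi_2$ are primitive characters modulo $p$ and $q$ respectively, then for every divisor $d$ of $n$, \begin{align*} \chi_1(d)\,\chi_2\left(\frac{n}{d}\right) = \frac{1}{\tau(\Bar{\chi}_1)\,\tau(\Bar{\chi}_2)}\sum_{h=1}^{p-1}\sum_{l=1}^{q-1}\Bar{\chi}_1(h)\,\Bar{\chi}_2(l)\, e^{2\pi i d h/p}\, e^{2\pi i n l/(d q)}, \end{align*} and expanding each exponential into its cosine and sine parts produces precisely mixed sums of the form $\sum_{d|n}d^{-\nu} \sin \left( 2\pi d \theta\right)\sin \left( \frac{2\pi n \psi}{d}\right)$, $\sum_{d|n}d^{-\nu} \cos \left( 2\pi d \theta\right)\cos \left( \frac{2\pi n \psi}{d}\right)$, and so on, with $\theta=h/p$ and $\psi=l/q$; which of the four sine--cosine combinations survives is governed by the parities of $\chi_1$ and $\chi_2$, in accordance with the case distinctions made in the theorems of this section.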
\begin{theorem} \label{cohen2 even-odd3based} Let $x>0$, $ 0<\theta,\psi <1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left( {2\pi d \theta } \right)\sin \left( \frac{2\pi n\psi}{d}\right)= \frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} x^\nu \left(\zeta(1+\nu, \psi) - \zeta(1+\nu, 1-\psi)\right)\notag\\ & \ \ \ \ \ \ +\frac{ 1}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \left(\zeta(1, \psi) - \zeta(1, 1-\psi)\right) \left\{ \zeta(1-\nu, \theta)+\zeta(1-\nu,1- \theta) \right\} \notag\\ &\ \ \ \ \ \ +\frac{ 1}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N x^{2j} \left( \zeta(2j+1,\psi)-\zeta(2j+1,1-\psi) \right) \left\{\zeta(2j+1-\nu, \theta) + \zeta(2j+1-\nu, 1- \theta) \right\} \notag\\ &+\frac{ 1}{2 \cos \left(\frac{\pi \nu}{2}\right)}x^{2N}\sum_{m,n\geq 0}^{\infty} \left\{ {(n+\psi)^{-\nu} } \left( \frac{((n+\psi)(m+\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ {(n+1-\psi)^{-\nu} } \left( \frac{((n+1-\psi)(m+\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ {(n+\psi)^{-\nu} } \left( \frac{((n+\psi)(m+1-\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+\psi)(m+1-\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ {(n+1-\psi)^{-\nu} } \left( \frac{((n+1-\psi)(m+1-\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-\psi)(m+1-\theta))^2-x^2 } \right) \right\}. \end{align*} \end{theorem} \begin{theorem} \label{cohen2 even-odd4based} Let $x>0$, $ 0<\theta,\psi <1$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} & 8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( {2\pi d\theta } \right)\cos \left( \frac{2\pi n\psi}{d}\right) =- \frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,\theta) - \zeta(1-\nu, 1-\theta)\right)\notag\\ & \ \ \ \ \ \ +\frac{ x^{\nu}}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \left(\zeta(1, \theta) - \zeta(1, 1-\theta)\right) \left\{ \zeta(1+\nu,\psi)+\zeta(1+\nu,1-\psi) \right\} \notag\\ & \ \ \ \ \ \ -\frac{ 1}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N x^{2j-1} \left(\zeta(2j-\nu, \theta) - \zeta(2j-\nu, 1-\theta)\right) \left\{ \zeta(2j,\psi)+\zeta(2j,1-\psi) \right\} \notag\\ &\ \ \ \ \ \ -\frac{ 1}{2 \cos \left(\frac{\pi \nu}{2}\right)}x^{2N+1}\sum_{m,n\geq 0}^{\infty} \left\{ \frac{(n+\psi)^{-\nu-1} }{(m+\theta)}\left( \frac{((n+\psi)(m+\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\ \frac{(n+1-\psi)^{-\nu-1} }{(m+\theta)}\left( \frac{((n+1-\psi)(m+\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-\psi)(m+\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+\psi)^{-\nu-1} }{(m+1-\theta)}\left( \frac{((n+\psi)(m+1-\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+\psi)(m+1-\theta))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+1-\psi)^{-\nu-1} }{(m+1-\theta)}\left( \frac{((n+1-\psi)(m+1-\theta))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-\psi)(m+1-\theta))^2-x^2 } \right) \right\}. 
\end{align*} \end{theorem} To prove Theorem \ref{cohen2 even-odd3based}, one requires \cite[Theorem 3.17]{devika2023} and Theorem \ref{oddcohen2 based}. Theorem \ref{cohen2 even-odd4based} is based on the Theorem \cite[Theorem 3.18]{devika2023} and Theorem \ref{oddcohen based}. Conversely, Theorem \ref{cohen2 even-odd3based} and Theorem \ref{cohen2 even-odd4based} imply the theorem \cite[Theorem 3.17, Theorem 3.18]{devika2023}, respectively. \section{Voronoi summation formula}\label{voronoi identities...} Here, we state the identities analogous to Entry \ref{entry1} which is the following: \begin{theorem}\label{vor1.1} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $0< \Re{(\nu)}<\frac{1}{2}$. Then, we have \begin{align}\label{vv} &\sum_{\alpha<j<\beta} \sum_{d|j}d^{-\nu} \sin \left( {2\pi d \theta}\right) f(j) =- {(2\pi)^{\nu}} \Gamma(-\nu) \sin\left(\frac{\pi \nu}{2}\right)\left\{ \zeta(-\nu,\theta)- \zeta(-\nu,1-\theta)\right\} \int_\alpha^\beta {f(t) } \mathrm{d}t \notag\\ -&\pi \int_\alpha^\beta \frac{f(t)}{t^{\frac{\nu}{2}} } \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} } \sum_{ m=0}^\infty \left[ \left(m+\theta\right)^\frac{\nu}{2}\left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t}\ \right) + Y_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left.\ \ \ \ \ \ \ \ - \left(m+1-\theta\right)^\frac{\nu}{2}\left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t}\ \right) + Y_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right] \mathrm{d}t. \end{align} \end{theorem} We demonstrate that Theorem \ref{vor1.1} is equivalent to the following theorem \cite[Theorem 4.4]{devika2023}: \begin{theorem}\label{voro2} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi$ is an odd primitive character modulo $q$. For $0< \Re{\nu}<\frac{1}{2}$, we have \begin{align}\label{vv1} &\frac{ q^{1+\frac{\nu}{2}} } {\tau(\chi )} \sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) = \frac{ q^{1+\frac{\nu}{2}}}{\tau(\chi)} L(1+\nu,\chi) \int_\alpha^\beta {f(t) } \mathrm{d}t + 2 \pi i \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) (t)^{-\frac{\nu}{2}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) + Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} \end{theorem} \begin{theorem}\label{vor1.2} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. 
Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $0< \Re{(\nu)}<\frac{1}{2}$. Then, we have \begin{align*} &\sum_{\alpha<j<\beta} \sum_{d|j}d^{-\nu} \sin \left(\frac{2\pi j \theta}{d}\right)\frac{f(j)}{j}= -\frac{ \Gamma(\nu)\sin{\left( \frac{\pi \nu}{2} \right)}}{(2\pi )^\nu }\{ \zeta(\nu,\theta)-\zeta(\nu,1-\theta)\}\int_\alpha^\beta \frac{f(t) }{t^{\nu+1}} \mathrm{d}t \notag\\ + &\pi \int_{\alpha} ^{ \beta }\frac{f(t)}{ t^{\frac{\nu}{2}+1} } \sum_{r=1}^{\infty} r^{\frac{\nu}{2}} \sum_{ m=0}^\infty \left[ \left\{ \left( \frac{2}{\pi} \frac{K_{\nu}\left(4\pi \sqrt {r\left(m+\theta \right)t} \ \right)}{\left(m+\theta \right)^{\frac{\nu}{2}}} - \frac{Y_{\nu}\left(4\pi \sqrt {r\left(m+\theta \right)t} \ \right)}{\left(m+\theta \right)^{\frac{\nu}{2}}} \right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{J_{\nu}\left(4\pi \sqrt {r\left(m+\theta \right)t} \ \right)}{\left(m+\theta \right)^{\frac{\nu}{2}}} \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. \ \ \ \ \ \ \ -\left\{ \left( \frac{2}{\pi} \frac{K_{\nu}\left(4\pi \sqrt {r\left(m+1-\theta \right)t} \ \right)}{\left(m+1-\theta \right)^{\frac{\nu}{2}}} - \frac{Y_{\nu}\left(4\pi \sqrt {r\left(m+1-\theta \right)t} \ \right)}{\left(m+1-\theta \right)^{\frac{\nu}{2}}} \right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{J_{\nu}\left(4\pi \sqrt {r\left(m+1-\theta \right)t} \ \right)}{\left(m+1-\theta \right)^{\frac{\nu}{2}}} \cos \left(\frac{\pi \nu}{2}\right) \right\} \right]\mathrm{d}t. \end{align*} \end{theorem} Similar to Theorem \ref{vor1.1}, one can show that the equivalent version of Theorem \ref{vor1.2} is Theorem \cite[Theorem 4.3]{devika2023}. The identities in the next two theorems are analogous to Entry \ref{entry2}. \begin{theorem}\label{vor1.3} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. 
Assume $0< \Re{(\nu)}<\frac{1}{2}$. Then, \begin{align}\label{vor1234} &\sum_{\alpha<j<\beta} f(j) \sum_{d|j}d^{-\nu} \cos \left( {2\pi d \theta}\right) = {(2\pi)^{\nu}} \Gamma(-\nu) \cos\left(\frac{\pi \nu}{2}\right)\left\{ \zeta(-\nu,\theta)+ \zeta(-\nu,1-\theta)\right\} \int_\alpha^\beta {f(t) } \mathrm{d}t \notag\\ +&\pi \int_\alpha^\beta \frac{f(t)}{t^{\frac{\nu}{2}} } \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} } \sum_{ m=0}^\infty \left[ \left(m+\theta\right)^\frac{\nu}{2}\left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t}\ \right) - Y_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left.\ \ \ \ \ \ \ \ + \left(m+1-\theta\right)^\frac{\nu}{2}\left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t}\ \right) - Y_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right] \mathrm{d}t. \end{align} \end{theorem} Theorem \ref{vor1.3} is proved using \cite[Theorem 4.2]{devika2023} and Proposition \ref{vorlemma1}. Conversely, the following identity can be deduced directly from Theorem \ref{vor1.3}. \begin{theorem}\label{vore2} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi$ is a non-principal, even primitive character modulo $ q$. For $0< \Re{(\nu)}<\frac{1}{2}$, we have \begin{align}\label{fann} &\frac{ q^{1+\frac{\nu}{2}} } {\tau(\chi )} \sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) = \frac{ q^{1+\frac{\nu}{2}}}{\tau(\chi)} L(1+\nu,\chi) \int_\alpha^\beta {f(t) } \mathrm{d}t + 2 \pi \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_{\alpha}^{\beta} \frac{f(t) }{t^{\frac{\nu}{2}}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align} \end{theorem} \begin{theorem}\label{vor1.4} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$.
Assume $0< \Re{(\nu)}<\frac{1}{2}$. Then, \begin{align*} & \sum_{\alpha<j <\beta} \sum_{d|j} {d} ^{-\nu} \cos\left(\frac{2 \pi j \theta }{d}\right)f(j) =\frac{ \Gamma(\nu)\cos{\left( \frac{\pi \nu}{2} \right)}}{(2\pi )^\nu }\{ \zeta(\nu,\theta)+\zeta(\nu,1-\theta)\} \int_{\alpha} ^{ \beta }\frac{f(t)}{ t^{{\nu} } } \mathrm{d}t \notag\\ + &\pi \int_{\alpha} ^{ \beta }\frac{f(t)}{ t^{\frac{\nu}{2}} } \sum_{r=1}^{\infty} r^{\frac{\nu}{2}} \sum_{ m=0}^\infty \left[ \left\{ \left( \frac{2}{\pi} \frac{K_{\nu}\left(4\pi \sqrt {r\left(m+\theta \right)t} \ \right)}{\left(m+\theta \right)^{\frac{\nu}{2}}} - \frac{Y_{\nu}\left(4\pi \sqrt {r\left(m+\theta \right)t} \ \right)}{\left(m+\theta \right)^{\frac{\nu}{2}}} \right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{J_{\nu}\left(4\pi \sqrt {r\left(m+\theta \right)t} \ \right)}{\left(m+\theta \right)^{\frac{\nu}{2}}} \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. \ \ \ \ \ \ \ +\left\{ \left( \frac{2}{\pi} \frac{K_{\nu}\left(4\pi \sqrt {r\left(m+1-\theta \right)t} \ \right)}{\left(m+1-\theta \right)^{\frac{\nu}{2}}} - \frac{Y_{\nu}\left(4\pi \sqrt {r\left(m+1-\theta \right)t} \ \right)}{\left(m+1-\theta \right)^{\frac{\nu}{2}}} \right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{J_{\nu}\left(4\pi \sqrt {r\left(m+1-\theta \right)t} \ \right)}{\left(m+1-\theta \right)^{\frac{\nu}{2}}} \sin \left(\frac{\pi \nu}{2}\right) \right\} \right]\mathrm{d}t. \end{align*} \end{theorem} Analogous to Theorem \ref{vor1.3}, Theorem \ref{vor1.4} is based on \cite[Theorem 4.1]{devika2023} and Proposition \ref{vorlemma1}. Conversely, Theorem \ref{vor1.4} directly implies \cite[Theorem 4.1]{devika2023}. Next, we state the identities involving two trigonometric functions: \begin{theorem}\label{vor1.6} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $0< \Re{(\nu)}<\frac{1}{2}$. Then, \begin{align}\label{1234} &\sum_{\alpha<j <\beta} \sum_{d|j} {d} ^{-\nu} \cos \left( {2\pi d \theta}\right)\cos\left(\frac{2 \pi j \psi }{d}\right)f(j) \notag\\ =\frac{\pi}{2} & \int_{\alpha} ^{ \beta }\frac{f(t) }{t^{\frac{\nu}{2}}}\sum_{m,n\geq 0}^{\infty} \left[ \left( \frac{m+\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. +\left( \frac{m+\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left.
+\left( \frac{m+1-\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. +\left( \frac{m+1-\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right]\mathrm{d}t. \end{align} \end{theorem} For deriving Theorem \ref{vor1.6}, one requires \cite[Theorem 4.5]{devika2023}, Theorem \ref{vor1.3}, Theorem \ref{vor1.4} and Proposition \ref{vorlemma1}. Conversely, Theorem \ref{vor1.6} implies \cite[Theorem 4.5]{devika2023} independently. \begin{theorem}\label{voree} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi_1$ and $\chi_2$ are non-principal even primitive characters modulo $p$ and $q$, respectively. For $0< \Re{(\nu)}<\frac{1}{2}$, we have \begin{align}\label{fann2} & \frac{ p^{1+{\frac{\nu}{2}}} q^{1-{\frac{\nu}{2}}} }{ \tau(\chi_1)\tau(\chi_2)} \sum_{\alpha<j <\beta} {\sigma}_{-\nu, \chi_1,\chi_2}(j) f(j) = 2\pi \sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi_2},\bar{\chi_1}}(n) \ n^{\nu/2} \int_{\alpha} ^{ \beta }f(t)\, t^{-\frac{\nu}{2}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align} \end{theorem} \begin{theorem}\label{vor1.5} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $0< \Re{(\nu)}<\frac{1}{2}$. Then, \begin{align*} &\sum_{\alpha<j <\beta} \sum_{d|j} {d} ^{-\nu} \sin \left( {2\pi d \theta}\right)\sin \left(\frac{2 \pi j \psi }{d}\right)\frac{f(j)}{j} \notag\\ =\frac{\pi}{2} & \int_{\alpha} ^{ \beta }\frac{f(t) }{t^{\frac{\nu}{2}+1}} \sum_{m,n\geq 0}^{\infty} \left[ \left( \frac{m+\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left.
-\left( \frac{m+\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. -\left( \frac{m+1-\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. +\left( \frac{m+1-\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) \sin \left(\frac{\pi \nu}{2}\right) \right\} \right]\mathrm{d}t. \end{align*} \end{theorem} We remark that Theorem \ref{vor1.5} is equivalent to the result \cite[Theorem 4.7]{devika2023}. \begin{theorem}\label{vor1.7} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $0< \Re{\nu}<\frac{1}{2}$ Then , \begin{align*} &\sum_{\alpha<j <\beta} \sum_{d/j} {d} ^{-\nu} \cos \left( 2\pi d \theta\right)\sin \left( \frac{2\pi j \psi}{d}\right) \frac{f(j)}{j} \notag\\ =\frac{\pi}{2} & \int_{\alpha} ^{ \beta }\frac{f(t) }{t^{\frac{\nu}{2}+1}} \sum_{m,n\geq 0}^{\infty} \left[ \left( \frac{m+\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. -\left( \frac{m+\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. +\left( \frac{m+1-\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. 
-\left( \frac{m+1-\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) - Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right]\mathrm{d}t. \end{align*} \end{theorem} \begin{theorem}\label{vor1.8} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume $0< \Re{\nu}<\frac{1}{2}$ Then , \begin{align*} &\sum_{\alpha<j <\beta} \sum_{d/j} {d} ^{-\nu} \sin \left( 2\pi d \theta\right)\cos \left( \frac{2\pi j \psi}{d}\right) f(j) \notag\\ =-\frac{\pi}{2} & \int_{\alpha} ^{ \beta }\frac{f(t) }{t^{\frac{\nu}{2}}} \sum_{m,n\geq 0}^{\infty} \left[ \left( \frac{m+\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+\psi)(m+\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. +\left( \frac{m+\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. -\left( \frac{m+1-\theta}{n+\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+\psi)(m+1-\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right.\notag\\&\left. -\left( \frac{m+1-\theta}{n+1-\psi}\right)^{\nu/2} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) + Y_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t})\right) \sin \left(\frac{\pi \nu}{2}\right) \right.\right.\notag\\&\left.\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{(n+1-\psi)(m+1-\theta)t}) \cos \left(\frac{\pi \nu}{2}\right) \right\} \right]\mathrm{d}t. \end{align*} \end{theorem} To prove Theorem \ref{vor1.7}, one requires \cite[Theorem 4.10]{devika2023} and Theorem \ref{vor1.2}. Theorem \ref{vor1.8} is based on the Theorem \cite[Theorem 4.9]{devika2023} and Theorem \ref{vor1.1}. Conversely, Theorem \ref{vor1.7} and Theorem \ref{vor1.8} directly imply the theorems \cite[Theorem 4.10, Theorem 4.9]{devika2023}, respectively. \section{Preliminaries}\label{preliminary} We begin this section by recalling some important results which we will use throughout the paper. 
\iffalse We first observe that the generating function for $\sigma_{z, \chi}(n)$ and $\Bar{\sigma}_{z,\chi}(n)$ for any character $\chi$ modulo $q$ is the following: \begin{align} \zeta(s)L(s-z, \chi)=\sum_{m=1}^{\infty} \frac{1}{m^s} \sum_{d=1}^{\infty} \frac{d^z \chi(d)}{d^s} = \sum_{n=1}^{\infty} \frac{\sigma_{z, \chi}(n)}{n^s}, \label{Lfz1} \\ \zeta(s-z)L(s,\chi)=\sum_{n=1}^\infty \frac{\Bar{\sigma}_{z,\chi}(n)}{n^s},\label{Lfz2}\\ L(s-z,\chi)L(s,\chi^\prime)=\sum_{n=1}^\infty \frac{ \sigma_{z,\chi,\chi^\prime}(n)}{n^s},\label{Lfz3} \end{align} for $\Re(s)>\Re(z)+1$, where $\zeta(s)$ denotes the Riemann zeta function and $L(s,\chi)$ denotes the Dirichlet $L$-function associated with $\chi$ for $\Re(s)>1$. We recall that \fi The functional equation of $\zeta(s)$ is given by \begin{align}\label{1st_use} \Gamma(s) \zeta(s) & = \frac{\pi^{s} \zeta(1-s)}{2^{1-s} \cos\left(\frac{\pi s}{2}\right) }. \end{align} Next, we state the functional equation for $L(s, \chi)$: \begin{align}\label{ll(s)} L(s, \chi) & =i^{-\kappa} \frac{\tau(\chi)}{ \pi}\left(\frac{(2\pi)}{q}\right)^{s} \Gamma(1-s) \sin \frac{\pi (s+\kappa)}{2} L(1-s, \bar{\chi}), \end{align} where \begin{align*} \kappa=\kappa(\chi)=\begin{cases} & 0 \ \mbox{if} \ \chi(-1)=1,\\ & 1 \ \mbox{if} \ \chi(-1)=-1.\\ \end{cases} \end{align*} Now replacing $s$ by $s-z$ in \eqref{ll(s)}, we obtain \begin{align*} L(s-z, \chi) & =i^{-\kappa} \frac{\tau(\chi)}{\pi}\left(\frac{(2\pi)}{q}\right)^{s-z} \Gamma(1+z-s) \sin \frac{\pi (s+\kappa-z) }{2} L(1+z-s, \bar{\chi}). \end{align*} So, we can rewrite the above equation as \begin{align}\label{exact l} \Gamma(1+z-s)L(1+z-s, \bar{\chi})= i^{\kappa}\frac{\pi}{\tau(\chi)}\left( \frac{q}{2\pi}\right)^{s-z}\frac{L(s-z, \chi)}{\sin{\pi(\frac{s+\kappa-z}{2})}}. \end{align} Next, we write the $L$-function in terms of the Hurwitz zeta function, i.e., \begin{align}\label{Hurwitz} L(s, \chi) =\frac{1}{q^s}\sum_{r=1}^{q-1} \zeta\left(s,\frac{r}{q}\right)\chi(r), \end{align} where the Hurwitz zeta function $\zeta(s,\alpha)= \sum_{n=0}^\infty\frac{ 1}{(n+\alpha)^s}$ is defined for $\Re(s)>1$ and $0<\alpha<1$. We also note that \cite[p.~69, p.~71]{MR1790423} \begin{align}\label{both} \tau(\chi)\tau( \bar{\chi})= \begin{cases} -q&\quad \text{for odd primitive }\chi \ mod \ q , \\ q&\quad \text{for even non-principal primitive }\chi \ mod \ q, \end{cases} \end{align} and the fact \begin{align}\label{prop} \sum_{\substack{\chi \ mod \ q\\ \chi \ odd}}\chi(a) \bar{\chi}(h)=\begin{cases} & \pm \frac{\phi(q)}{2} \ \ \mbox{if} \ h \equiv \pm a \ (mod \ q)\\ &0 \ \ \mbox{otherwise;} \end{cases}\\ \sum_{\substack{\chi \ mod \ q\\ \chi \ even}}\chi(a) \bar{\chi}(h)=\begin{cases} & \frac{\phi(q)}{2} \ \ \mbox{if} \ h \equiv \pm a \ (mod \ q)\\ &0 \ \ \mbox{otherwise.} \end{cases} \label{prop1} \end{align} Here we would like to mention another identity \cite[Lemma 2.5]{MR3181548}, namely \begin{align} \begin{cases} & \sin \left(\frac{2\pi h d}{q}\right)=\frac{1}{i \phi(q)} \sum_{\substack{\chi \ mod \ q\\ \chi \ odd}} \chi(d) \tau(\bar{\chi}) \chi(h), \\ & \cos \left(\frac{2\pi h d}{q}\right)=\frac{1}{ \phi(q)} \sum_{\substack{\chi \ mod \ q\\ \chi \ even}} \chi(d) \tau(\bar{\chi}) \chi(h), \end{cases} \label{sin}\end{align} whenever $(d,q)=(h,q)=1$. Next, we recall the factorization theorem for Gauss sums \cite[p.~65]{MR1790423}: \begin{align}\label{gauss} \chi(n)\tau(\Bar{\chi})=\sum_{h=1}^{q-1}\Bar{\chi}(h)e^{2\pi i nh/q}, \end{align} for any character $\chi$ modulo $q$.
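To illustrate how \eqref{both} and \eqref{sin} interact, we record a short sanity check; this example is ours, is included only for orientation, and is not needed in the sequel. Take $q=3$, so that $\phi(3)=2$ and the only odd character modulo $3$ is the quadratic character $\chi$ with $\chi(1)=1$ and $\chi(2)=-1$, for which $\tau(\bar{\chi})=\tau(\chi)=e^{2\pi i/3}-e^{4\pi i/3}=i\sqrt{3}$ (consistently with \eqref{both}, $\tau(\chi)\tau(\bar{\chi})=(i\sqrt{3})^2=-3=-q$). Hence, for $(d,3)=(h,3)=1$, the right-hand side of the first identity in \eqref{sin} equals
\begin{align*}
\frac{1}{i \phi(3)} \sum_{\substack{\chi \ mod \ 3\\ \chi \ odd}} \chi(d) \tau(\bar{\chi}) \chi(h) = \frac{1}{2i}\, \chi(dh)\, i\sqrt{3} = \frac{\sqrt{3}}{2}\, \chi(dh),
\end{align*}
which is $\sin\left(\frac{2\pi}{3}\right)=\frac{\sqrt{3}}{2}$ when $dh \equiv 1 \ (mod \ 3)$ and $\sin\left(\frac{4\pi}{3}\right)=-\frac{\sqrt{3}}{2}$ when $dh \equiv 2 \ (mod \ 3)$, in agreement with the left-hand side $\sin\left(\frac{2\pi h d}{3}\right)$.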
\iffalse Now we state some results \cite{devika2023} : \subsection{ Results for the case $z=k \in\mathbb{Z}$ } In this subsection we assume $X=\frac{4}{a^2x}$ , where $a,x>0$. For $ \sigma_{k,\chi}(n)=\sum_{d/n}d^k \chi(d) $, where $\chi$ is the Dirichlet character, we have the following identities. \begin{theorem}\label{odd1} Let $k\geq 0$ be an even integer, and $\chi$ be an odd primitive Dirichlet character modulo $q$. Then for any $\Re{(\nu)}>0$ we have\begin{align} \label{thm1} \sum_{n=1}^\infty\sigma_{k,\chi}(n)n^{\frac{\upsilon} {2}}K_\upsilon(a\sqrt{nx})=&\frac{(-1)^{\frac{k}{2}}ik!q^k}{2(2\pi)^{k+1}} X^\frac{\upsilon}{2}\tau(\chi)\Gamma(\upsilon)L(1+k,\Bar{\chi})+\delta_{k}\frac{\Gamma(1+\nu) }{4} X^{\frac{\upsilon}{2}+1}L(1,\chi) \notag \\ &-\frac{(-1)^{\frac{k}{2}}i}{2q}\tau(\chi)X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\sum_{n=1}^\infty \Bar{\sigma}_{k,\Bar{\chi}}(n) \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } . \end{align} where \begin{align*} \delta_{k}=\begin{cases} 1 &\quad \ \ \text{if } k=0 ,\\ 0&\quad \ \ \text{if } k>0. \end{cases} \end{align*} \end{theorem} \iffalse {\color{red} The result corresponding to $\nu=0$ is as follows:} \begin{theorem}\label{0thm1} Let $\chi$ be an odd primitive Dirichlet character modulo $q$. Then we have \begin{align}\label{thm2} \left( \frac{a^2x}{4}\right)^{\frac{\nu}{2}+1} \sum_{n=1}^\infty\sigma_{0,\chi}(n)n^{\frac{\nu}{2}}K_\nu(a\sqrt{nx})=\frac{ia^2}{16\ \pi }\Gamma(\nu)\tau(\chi)L(1,\Bar{\chi})x+\frac{\Gamma(1+\nu)L(1,\chi)}{4}-\frac{i \pi}{q}\tau(\chi)\sum_{n=1}^\infty \Bar{\sigma}_{0,\Bar{\chi}}(n) \frac{\Gamma(\nu+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\nu+1} }. \end{align} \end{theorem} \fi \begin{theorem}\label{odd2} Let $k\geq 2$ be an even integer, and $\chi$ be an odd primitive Dirichlet character modulo $q$. Then we have\begin{align}\label{thm2} \sum_{n=1}^\infty\Bar{\sigma}_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})=&\frac{k!}{2}\Gamma(\upsilon+k+1)L(1+k,\chi)X^{k+1+\frac{\upsilon}{2}}\notag\\ &- \frac{(-1)^{\frac{k}{2}}i}{2}\tau(\chi)X^{\frac{\upsilon}{2}}\left(\frac{ 2\pi X}{q}\right)^{k+1}\sum_{n=1}^\infty\sigma_{k,\Bar{\chi}}(n)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }. \end{align} \end{theorem} Substituting $\chi_1=\chi_2=\chi$ in the Theorem \ref{botheven_odd}, we obtain the following interesting identities. \begin{corollary}\label{last1 cor} Let $k\geq 1$ be an odd integer and $\chi$ be a non-principal primitive character modulo $q$. Then for $\Re{(\nu)}>0$ we have \begin{align*} \sum_{n=1}^\infty \sigma_{k}(n)\chi(n)n^{\frac{\nu}{2}}K_\nu(a\sqrt{nx})=&\frac{(-1)^{\frac{k+1}{2}}}{2q} \left(\frac{2\pi X}{q}\right)^{k+1} \tau^2(\chi)X^{\frac{\nu}{2}} \sum_{n=1}^\infty \sigma_{k}(n) \ \Bar{\chi}(n) \ \frac{\Gamma(\nu+k+1)}{\left(\frac{16\pi^2}{a^2q^2}\frac{n}{x}+1\right)^{\nu+k+1} }. 
\end{align*} \end{corollary} \fi \iffalse \begin{theorem}\label{baroddcohen} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Let $\chi$ be an odd primitive character modulo $ q.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\nu/2} \sum_{n=1}^{\infty} \bar{\sigma}_{-\nu, \bar{\chi}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) = \frac{2\Gamma(\nu) \zeta(\nu) L(0, \bar{\chi})}{(2\pi)^{\nu-1 }} + {2(2\pi)^{\nu-1}} \Gamma(1-\nu) L(1-\nu, \bar{\chi}) x^{\nu-1} \ \delta_{(0,1)}^{(\nu)} \nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{ i q }{ \tau(\chi)} \left\{ \frac{\pi L(1+\nu,\chi)}{ \sin\left(\frac{\pi \nu}{2}\right)}(qx)^{ \nu} +\frac{2}{ \cos\left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N-1} {\zeta(2j+1-\nu)L(2j+1 ,\chi)(qx)^{2j}} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{ L(\nu,\chi)}{ \cos \left(\frac{\pi \nu}{2}\right)} (qx)^{\nu-1} + \frac{2 }{ \cos\left(\frac{\pi \nu}{2}\right) }(qx)^{2N}\sum_{n=1}^{\infty} {\sigma}_{-\nu, \chi}(n) \left(\frac{n^{\nu-2N+1}-(qx)^{\nu-2N+1}}{n^2-(qx)^2} \right) \right\}, \end{align*} where $\delta_{(0,1)}^{(\nu)} $ is defined in \eqref{deltasymbol}. \end{theorem} \begin{theorem}\label{evencohen} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0 .$ Let $\chi$ be a non-principal even primitive character modulo $ q.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\nu/2} \sum_{n=1}^{\infty} \sigma_{-\nu, \bar{\chi}}(n) n^{\nu/2}K_{\nu}(4\pi\sqrt{nx}) =-\frac{ \Gamma(\nu) L(\nu, \bar{\chi})}{(2\pi)^{\nu-1} } + \frac{2\Gamma(1+\nu) L(1+\nu, \bar{\chi})}{(2\pi)^{\nu+1} }x^{ -1}\\ & \ \ \ \ \ \ + \frac{2q^{1-\nu} }{\tau(\chi) \sin \left(\frac{\pi \nu}{2}\right)} \left\{\sum_{j=1}^{N} \zeta(2j)\ L(2j-\nu, \chi)(qx)^{2j-1} + (qx)^{2N+1}\sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \chi}(n) \frac{\left( n^{\nu-2N}-(qx)^{\nu-2N}\right)}{ (n^2-(qx)^2)} \right\}. \end{align*} \end{theorem} \begin{theorem}\label{barevencohen} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0 .$ Let $\chi$ be a non-principal even primitive character modulo $ q.$ Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\nu/2} \sum_{n=1}^{\infty} \bar{\sigma}_{-\nu, \bar{\chi}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) \notag\\ = & \frac{2 \Gamma(1-\nu)}{(2\pi)^{1-\nu}} L(1-\nu, \bar{\chi}) x^{\nu-1} \ \delta_{(0,1)}^{(\nu)} +\frac{ q }{ \tau(\chi)} \left\{ - \frac{\pi L(1+\nu,\chi)}{ \cos \left(\frac{\pi \nu}{2}\right)}(qx)^{\nu} +\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^N {\zeta(2j-\nu)L(2j ,\chi)(qx)^{2j-1}} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{ L(\nu,\chi)}{ \sin \left(\frac{\pi \nu}{2}\right)} (qx)^{\nu-1} + \frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}(qx)^{2N+1}\sum_{n=1}^{\infty} {\sigma}_{-\nu, \chi}(n) \left(\frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{n^2-(qx)^2}\right) \right\}, \end{align*} where $\delta_{(0,1)}^{(\nu)} $ is defined in \eqref{deltasymbol}. \end{theorem} \begin{theorem}\label{cohenee} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0 .$ Both $\chi_1$ and $\chi_2$ are non-principal even primitive characters modulo $ p$ and $ q$, resp. 
Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} & 8\pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi_1},\bar{\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) =\frac{ 2 p^{1-\nu}q }{\tau(\chi_1)\tau(\chi_2) \sin\left(\frac{\pi \nu}{2}\right)}\left\{\sum_{j=1}^{N} L(2j,\chi_2)\ L(2j-\nu, \chi_1)(pqx)^{2j-1} \right.\notag\\&\left. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +{(pqx)^{2N+1} }\sum_{n=1}^\infty{\sigma}_{-\nu, \chi_2,\chi_1}(n) { }\left( \frac{n^{\nu-2N}-(pqx)^{\nu-2N}}{n^2-(pqx)^2} \right) \right\}. \end{align*} \end{theorem} Substituting $\chi_1=\chi_2=\chi$ in the above theorem, we get the following: \begin{corollary}\label{coheneecor} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0 .$ Let $\chi$ be non-principal even primitive character modulo $q$. Then, for any integer $N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty} \sigma_{-\nu }(n) \Bar{\chi}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) =\frac{ q^{2-\nu} }{\tau^2(\chi) }\left\{\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N} L(2j,\chi)\ L(2j-\nu, \chi)(q^2x)^{2j-1} \right.\notag\\&\left. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{2}{ \sin\left(\frac{\pi \nu}{2}\right)}{(q^2x)^{2N+1} }\sum_{n=1}^\infty\sigma_{-\nu }(n) {\chi}(n) \left( \frac{n^{\nu-2N}-(q^2x)^{\nu-2N}}{n^2-(q^2x)^2} \right) \right\}. \end{align*} \end{corollary} \begin{theorem}\label{cohenoo} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0$ and both $\chi_1$ and $\chi_2$ be odd primitive characters modulo $p$ and $ q$ resp. Then, for any integer $N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8 \pi x^{\nu/2}\sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi_1},\bar{\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) = \Gamma(\nu)L(\nu, \bar{\chi_1})L(0, \bar{\chi_2}) \frac{2 }{(2\pi)^{\nu-1}} \\ & -\frac{ p^{1-\nu}q}{ \tau(\chi_1)\tau(\chi_2)} \left\{ -\frac{2L(\nu+1,\chi_2)L(1,\chi_1)(pqx)^{\nu}}{ \sin(\pi\frac{\nu}{2})} +\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N-1} L(2j+1,\chi_2)\ L(2j+1-\nu, \chi_1)(pqx)^{2j} \right.\notag\\&\left. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{2}{ \sin\left(\frac{\pi \nu}{2}\right)}(pqx)^{2N} \sum_{n=1}^{\infty}\frac{{\sigma}_{-\nu, \chi_2,\chi_1}(n)}{n} \left( \frac{n^{\nu-2N+2}-(pqx)^{\nu-2N+2}}{n^2-(pqx)^2)} \right) \right\}. \end{align*} \end{theorem} Substituting $\chi_1=\chi_2=\chi$ in the above theorem, we get the following: \begin{corollary}\label{cohenoocor} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0 .$ Let $\chi$ be an odd primitive character modulo $p$. 
Then, for any integer $N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty} \sigma_{-\nu }(n) \Bar{\chi}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) = \Gamma(\nu)L(\nu, \bar{\chi})L(0, \bar{\chi}) \frac{2 }{(2\pi)^{\nu-1}} \notag \\ &-\frac{ p^{2-\nu}}{ \tau^2(\chi) } \left\{ -\frac{2L(\nu+1,\chi_2)L(1,\chi_1)(p^2x)^{\nu}}{ \sin(\pi\frac{\nu}{2})} +\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N-1} L(2j+1,\chi)\ L(2j+1-\nu, \chi)(p^2x)^{2j} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{2}{ \sin\left(\frac{\pi \nu}{2}\right)}(p^2x)^{2N} \sum_{n=1}^{\infty}\frac{\sigma_{-\nu }(n) {\chi}(n)}{n} \left( \frac{n^{\nu-2N+2}-(p^2x)^{\nu-2N+2}}{n^2-(p^2x)^2)} \right) \right\}. \end{align*} \end{corollary} \begin{theorem}\label{coheneo} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Let $\chi_1$ be a non-principal even primitive character modulo $ p$ and $\chi_2$ be an odd primitive character modulo $ q$. Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8 \pi x^{\nu/2} \sum_{n=1}^{\infty} \sigma_{-\nu, \bar{\chi_1},\bar{\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) =\frac{ 2 }{(2\pi)^{\nu-1}} \Gamma(\nu)L(\nu, \bar{\chi_1})L(0, \bar{\chi_2}) +\frac{i p^{1-\nu}q}{\tau(\chi_1)\tau(\chi_2)} \left\{ \frac{2}{ \cos \left(\frac{\pi \nu}{2}\right)} \ \right.\notag\\&\left.\ \ \times \sum_{j=1}^{N-1} L(2j+1,\chi_2) L(2j+1-\nu, \chi_1)(pqx)^{2j} + \frac{2 }{ \cos\left(\frac{\pi \nu}{2}\right) }(pqx)^{2N}\sum_{n=1}^{\infty} {\sigma}_{-\nu, \chi_2,\chi_1}(n) \left(\frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \right\}. \end{align*} \end{theorem} \begin{theorem}\label{cohenoe} Let $x>0$ and $\nu \notin \mathbb{Z}$, where $\Re{(\nu)}\geq0.$ Let $\chi_1$ be an odd primitive character modulo $ p$ and $\chi_2$ be a non-principal even primitive character modulo $ q$. Then, for any integer $ N$ such that $ N\geq \lfloor\frac{\Re{(\nu)}+1}{2}\rfloor$, we have \begin{align*} &8\pi x^{\nu/2}\sum_{n=1}^{\infty} \sigma_{-\nu, \bar{\chi_1},\bar{\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) =\frac{2i p^{1-\nu}q}{ \tau(\chi_1)\tau(\chi_2) \cos \left(\frac{\pi \nu}{2}\right)} \left\{ {L(\nu+1,\chi_2)L(1,\chi_1)(pqx)^{\nu}} \right.\notag\\&\left.\ \ - \sum_{j=1}^{N} L(2j,\chi_2)\ L(2j-\nu, \chi_1)(pqx)^{2j-1} - (pqx)^{2N+1}\sum_{n=1}^{\infty}\frac{{\sigma}_{-\nu, \chi_2,\chi_1}(n) }{n} \left( \frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \right\}. \end{align*} \end{theorem} \subsection{ Results for the case $z=-\nu \in\mathbb{C}$ } In this subsection, we state the identities Voronoi summation formula: \begin{theorem}\label{voro1} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi$ is an odd primitive character modulo $q$. 
For $0 < \Re{\nu}<\frac{1}{2}$, we have \begin{align*} & \frac{ q^{1-\frac{\nu}{2}} } {\tau(\chi )} \sum_{\alpha<j <\beta} \frac{\bar{\sigma}_{-\nu, \chi }(j)}{j} f(j) = -\frac{ q^{1-\frac{\nu}{2}}}{\tau(\chi)} L(1-\nu,\chi) \int_\alpha^\beta \frac{f(t) }{t^{\nu+1}} \mathrm{d}t-2\pi i \sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}-1} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) + J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align*} \end{theorem} \begin{theorem}\label{voro2} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi$ is an odd primitive character modulo $q$. For $0< \Re{\nu}<\frac{1}{2}$, we have \begin{align*} &\frac{ q^{1+\frac{\nu}{2}} } {\tau(\chi )} \sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) = \frac{ q^{1+\frac{\nu}{2}}}{2\tau(\chi)} L(1+\nu,\chi) \int_\alpha^\beta {f(t) } \mathrm{d}t + 2 \pi i \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_\alpha^\beta {f(t) } t^{-\frac{\nu}{2}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) + Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align*} \end{theorem} \begin{theorem}\label{vore1} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi$ is a non-principal, even primitive character modulo $ q$. For $0< \Re{\nu}<\frac{1}{2}$, we have \begin{align*} &\frac{ q^{1-\frac{\nu}{2}} } {\tau(\chi )} \sum_{\alpha<j <\beta} {\bar{\sigma}_{-\nu, \chi }(j)} f(j) = \frac{ q^{1-\frac{\nu}{2}}}{\tau(\chi)} L(1-\nu,\chi) \int_\alpha^\beta \frac{f(t) }{t^{\nu}} \mathrm{d}t+2\pi \sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align*} \end{theorem} \begin{theorem}\label{vore2} Let $0< \alpha< \beta$ and $\alpha, \beta \notin \mathbb{Z}$. Let $f$ denote a function analytic inside a closed contour strictly containing $[\alpha, \beta ]$. Assume that $\chi$ is a non-principal, even primitive character modulo $ q$. 
For $0< \Re{\nu}<\frac{1}{2}$, we have \begin{align*} &\frac{ q^{1+\frac{\nu}{2}} } {\tau(\chi )} \sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) = \frac{ q^{1+\frac{\nu}{2}}}{2\tau(\chi)} L(1+\nu,\chi) \int_\alpha^\beta {f(t) } \mathrm{d}t + 2 \pi \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_\alpha^\beta {f(t) } t^{-\frac{\nu}{2}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align*} \end{theorem} \fi \section{Proof of Main Results when $z \in \mathbb{Z}$ }\label{proof of integer nu} Throughout this section, we consider $X=4/(a^2x)$, where $a$ and $x$ are positive real numbers. \begin{proof}[Theorem \rm{\ref{odd1_based}} and Theorem \ref{M1} ][](Theorem \ref{M1} $\Rightarrow$ Theorem \ref{odd1_based}) First, we notice that the double series on the right-hand side of identity \eqref{p7} converges absolutely and uniformly on any compact interval for $\theta\in (0,1)$. Therefore, it is sufficient to prove the theorem for $\theta=h/q$, where $q$ is prime and $0<h<q$. Now we multiply the identity \eqref{M11} in Theorem \ref{M1} by $ \frac{1}{i\phi(q)} \chi(h) \tau(\bar{\chi})$ and then sum over odd primitive characters $\chi$ modulo $q$. Then from the left-hand side of the identity in \eqref{M11}, we have \begin{align} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{n=1}^\infty\sigma_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) &= \frac{1}{i\phi(q)}\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \sum_{d|n}d^k \sum_{\chi \ odd } \chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left(\frac{2\pi d h}{q}\right),\label{1.1} \end{align} where in the last step, we have used \eqref{sin}. The sum in the first term on the right-hand side of \eqref{M11} becomes \begin{align} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h)\tau(\chi) \tau(\bar{\chi}) L(1+k, \bar{\chi}) =- \frac{q^{-k}}{2i}\left(\zeta(1+k, h/q) - \zeta(1+k, 1-h/q)\right),\label{1.2} \end{align} where we have used \eqref{Hurwitz}, \eqref{both} and \eqref{prop}. Similarly, the sum in the second term of \eqref{M11} becomes \begin{align} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) L(1, \chi) = -\frac{\pi }{q \phi(q)} \sum_{\chi \ odd } \chi(h)\tau(\chi) \tau(\bar{\chi}) L(0, \bar{\chi}) = \frac{\pi }{2}\left(\zeta(0, h/q) - \zeta(0, 1-h/q)\right). \label{1.22} \end{align} The infinite sum in the last term on the right-hand side of \eqref{M11} transforms into the following.
\begin{align} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h)\tau(\chi) \tau(\bar{\chi}) \sum_{n=1}^\infty \Bar{\sigma}_{k,\Bar{\chi}}(n) \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } = - \frac{q}{i\phi(q)}\sum_{n=1}^\infty \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }\sum_{d|n}d^k \sum_{\chi \ odd } \chi(h) \Bar{ \chi}(n/d) \notag\\ &= - \frac{q}{i\phi(q)} \sum_{d=1}^\infty d^k \sum_{r=1}^\infty \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi \ odd } \chi(h) \Bar{ \chi}(r) \notag\\ &= - \frac{q}{2i } \sum_{d=1}^\infty d^k \left( \sum_{\substack{r=1 \\ r \equiv h( q)}}^\infty \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } -\sum_{\substack{r=1 \\ r \equiv -h( q)}}^\infty \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \right) \notag\\ &= - \frac{q}{2i } \sum_{d=1}^{\infty}d^k \sum_{ m=0}^{\infty}\left( \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2(mq+h)d}{qa^2x})^{1+\nu+k} } - \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2(mq+q-h)d}{qa^2x})^{1+\nu+k} } \right) \notag\\ &= - \frac{q}{2i } \sum_{d=1}^{\infty}d^k \sum_{ m=0}^{\infty}\left( \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2d}{a^2x}(m+h/q))^{1+\nu+k} } - \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2d}{a^2x}(m+1-h/q))^{1+\nu+k} } \right). \label{1.3} \end{align} Employing \eqref{1.1}, \eqref{1.2}, \eqref{1.22}, and \eqref{1.3} in \eqref{M11}, we get the desired result. \iffalse \begin{align} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{n=1}^\infty\sigma_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})= -\frac{(-1)^{\frac{k+1}{2}}k!}{4(2\pi)^{k+1}i} X^\frac{\upsilon}{2} \Gamma(\upsilon) \left(\zeta(1+k, h/q) - \zeta(1+k, 1-h/q)\right)\notag\\ &+\frac{(-1)^{\frac{k+1}{2}}}{4i} X^\frac{\upsilon}{2}(2 \pi X)^{k+1} \sum_{d=1}^{\infty}d^k \sum_{ m=0}^{\infty}\left( \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2d}{a^2x}(m+h/q))^{1+\nu+k} } - \frac{ \Gamma(\upsilon+k+1) }{(1+\frac{16\pi^2d}{a^2x}(m+1-h/q))^{1+\nu+k} } \right) \notag\\ & +\delta_k \frac{\pi \Gamma(1+\nu)}{8} X^{1+\frac{\nu}{2}}\left(\zeta(0, h/q) - \zeta(0, 1-h/q)\right) . \label{1.4} \end{align} where $\delta_k$ is defined in \eqref{del_k0}. Combining \eqref{1.1} with \eqref{1.4}, we get the result. \fi (Theorem \ref{odd1_based} $\Rightarrow$ Theorem \ref{M1}) Let $\theta=h/q$, and let $\chi$ be an odd primitive character modulo $q$. Multiplying the identity \eqref{p7} in Theorem \ref{odd1_based} by $\bar{\chi}(h)/\tau(\bar{\chi})$ and then summing over $h$, $0<h<q$, we obtain from the left-hand side of the identity \eqref{p7} \begin{align}\label{p1} & \frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( 2\pi d h/q \right)\notag\\ &= \frac{1}{2 i \tau(\bar{\chi})} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sum_{h=1}^{q-1}\bar{\chi}(h) \left( e^{2\pi i d h/q} - e^{-2\pi i d h/q}\right)\notag\\ &= \frac{1}{2 i } \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \left( \chi(d)-\chi(-d)\right)\notag\\ &=i^{-1}\sum_{n=1}^\infty\sigma_{k,\chi}(n)n^{\frac{\upsilon} {2}}K_\upsilon(a\sqrt{nx}), \end{align} where in the penultimate step, we have used \eqref{gauss}. It remains to evaluate the right-hand side of \eqref{p7}.
To do this we first observe by \eqref{Hurwitz} and \eqref{both} \begin{align}\label{p2} \frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\left(\zeta(1+k,h/q) - \zeta(1+k, 1- h/q)\right) =-2 q^{k}\tau({\chi})L(1+k,\Bar{\chi}). \end{align} Similarly we can obtain \begin{align}\label{p3} \frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\left(\zeta(0,h/q) - \zeta(0, 1- h/q)\right) &=-\frac{2\tau({\chi})}{q} L(0,\Bar{\chi}) =i^{-1} L(1,{\chi}), \end{align} in the last step, we have used the functional equation for $L$-function \eqref{ll(s)}. Next, we examine the last expression on the right-hand side of \eqref{p7}. \begin{align}\label{p14} &\frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{d=1}^{\infty}d^k \sum_{ m=0}^{\infty}\left( \frac{ 1 }{(1+\frac{16\pi^2d}{a^2x}(m+h/q))^{1+\nu+k} } - \frac{ 1 }{(1+\frac{16\pi^2d}{a^2x}(m+1-h/q))^{1+\nu+k} } \right) \notag\\ &=\frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{d=1}^{\infty}d^k \sum_{\substack{r=1 \\ r \equiv h(q)}}^\infty \frac{ 1 }{(1+\frac{16\pi^2dr}{a^2xq} )^{1+\nu+k} } -\frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{d=1}^{\infty}d^k \sum_{\substack{r=1 \\ r \equiv -h(q)}}^\infty \frac{ 1 }{(1+\frac{16\pi^2dr}{a^2xq} )^{1+\nu+k} } \notag\\ &=\frac{2}{\tau(\bar{\chi})} \sum_{d=1}^{\infty} \sum_{ r=1 }^\infty \frac{ d^k \bar{\chi}(r) }{(1+\frac{16\pi^2dr}{a^2xq} )^{1+\nu+k} } =-\frac{2\tau({\chi}) }{q} \sum_{ n=1 }^\infty \frac{ \Bar{\sigma}_{k,\Bar{\chi} }(n)}{(1+\frac{16\pi^2dr}{a^2xq} )^{1+\nu+k} }. \end{align} Inserting \eqref{p1}, \eqref{p2}, \eqref{p3} and \eqref{p14} into \eqref{p7} we get the result. \end{proof} \begin{proof}[Theorem \rm{\ref{odd2_based}} and Theorem \rm{\ref{M2}}][] (Theorem \ref{M2} $\Rightarrow$ Theorem \ref{odd2_based}) The proof of this theorem is exactly similar to the Theorem \rm{\ref{odd1_based}} except the fact we will use the identity \eqref{ll1} of Theorem \ref{M2}. To avoid repetitions, we skip the detail of the proof. \iffalse We will proceed exactly in a similar way as in the proof of the previous theorem. We will multiply the identity in Theorem \ref{odd2} with $ \frac{1}{i\phi(q)} \chi(h) \tau(\bar{\chi})$ and then take the sum on odd primitive character $\chi$ modulo $q$. The left-hand side of the identity in Theorem \ref{odd1} becomes \begin{align} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{n=1}^\infty\Bar{\sigma}_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) = \frac{1}{i\phi(q)}\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \sum_{d/n}d^k \sum_{\chi \ odd } \chi(n/d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left(\frac{2\pi n h}{dq}\right),\label{3.1} \end{align} Now we examine the right-hand side of identity in Theorem \eqref{odd2}. To do this, we first observe \begin{align} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) L(1+k, {\chi}) = \frac{(-1)^\frac{k}{2}2^k\pi^{k+1}}{2k!}\left(\zeta(-k, h/q) - \zeta(-k, 1-h/q)\right),\label{3.2} \end{align} where we have used Functional equation of $L-$function \eqref{ll(s)}, then \eqref{Hurwitz} and \eqref{prop}. 
Lastly, we consider \begin{align} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h)\tau(\chi) \tau(\bar{\chi}) \sum_{n=1}^\infty {\sigma}_{k,\Bar{\chi}}(n) \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } = - \frac{q}{i\phi(q)}\sum_{n=1}^\infty \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }\sum_{d/n}d^k \sum_{\chi \ odd } \chi(h) \Bar{ \chi}(d) \notag\\ &= - \frac{q}{i\phi(q)} \sum_{r=1}^\infty \sum_{d=1}^\infty d^k \frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi \ odd } \chi(h) \Bar{ \chi}(d) \notag\\ &= - \frac{q}{2i } \Gamma(\upsilon+k+1) \sum_{r=1}^\infty \left( \sum_{\substack{d=1\{\delta}\equiv h(mod q)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } -\sum_{\substack{d=1\{\delta}\equiv -h(mod q)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \right) \notag\\ &= - \frac{q}{2i } \Gamma(\upsilon+k+1) \sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left( \frac{ (mq+h)^k }{(1+\frac{16\pi^2(mq+h)r}{qa^2x})^{1+\nu+k} } - \frac{ (mq+q-h)^k }{(1+\frac{16\pi^2(mq+q-h)r}{qa^2x})^{1+\nu+k} } \right) \notag\\ &= - \frac{q^{k+1}}{2i } \Gamma(\upsilon+k+1) \sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left( \frac{ (m+h/q)^k }{(1+ (m+h/q)\frac{ 16\pi^2r}{a^2x})^{1+\nu+k} } - \frac{ (m+1-h/q)^k }{(1+ (m+1-h/q)\frac{16\pi^2r}{a^2x})^{1+\nu+k} } \right). \notag\\ \label{3.3} \end{align} Using \eqref{3.2}, \eqref{3.3} and Theorem \eqref{odd2}, we obtain \begin{align} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{n=1}^\infty\Bar{\sigma}_{k,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) =\frac{(-1)^\frac{k}{2}2^k\pi^{k+1}}{4}\Gamma(\upsilon+k+1) X^{k+1+\frac{\upsilon}{2}}\left(\zeta(-k, h/q) - \zeta(-k, 1-h/q)\right)\notag \\&+\frac{(-1)^{\frac{k+1}{2}}}{4i} X^{\frac{\upsilon}{2}}\left( 2\pi X\right)^{k+1} \Gamma(\upsilon+k+1) \sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left( \frac{ (m+h/q)^k }{(1+ (m+h/q)\frac{ 16\pi^2r}{a^2x})^{1+\nu+k} } - \frac{ (m+1-h/q)^k }{(1+ (m+1-h/q)\frac{16\pi^2r}{a^2x})^{1+\nu+k} } \right) .\label{3.4} \end{align}Combining \eqref{3.1} and \eqref{3.4}, we get the result. \fi (Theorem \ref{odd2_based} $\Rightarrow$ Theorem \ref{M2}) Multiplying the identities in Theorem \ref{odd2_based} by $\bar{\chi}(h)/\tau(\bar{\chi})$, and then summing on $h$, $ 0<h<q$, we can show that Theorem \ref{odd2_based} implies Theorem \ref{M2}. \end{proof} \begin{proof}[Corollary \rm{\ref{cor1r}}][] Multiplying \eqref{p7} of Theorem \ref{odd1_based} by $-4$ and \eqref{r7} of Theorem \ref{odd2_based} by $16$, and then adding both the expressions, putting $k=2$ yield \begin{align}\label{m1} &\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^2 \left\{ 16\sin \left(\frac{2\pi n \theta}{d}\right) -4\sin \left( 2\pi d \theta \right)\right\} \notag\\ &=- {16} \pi^3\Gamma(\nu+3) X^{\frac{\nu}{2}+3 }( \zeta(-2,\theta) - \zeta(-2,1-\theta) ) -\frac{1}{4\pi^3}X^{\frac{\nu}{2} } \Gamma(\nu) ( \zeta(3,\theta) - \zeta(3,1-\theta) )\notag\\ &+X^{\frac{\nu}{2} }(2 \pi X)^3\Gamma(\nu+3)\sum_{n=1}^\infty\sum_{m=0}^\infty \left\{ \frac{ n^2-4(m+\theta)^2 }{(1+\frac{16\pi^2r}{a^2x}(m+\theta))^{\nu+3} } - \frac{ n^2-4(m+1-\theta)^2 }{(1+\frac{16\pi^2r}{a^2x}(m+1-\theta))^{\nu+3} } \right\}. \end{align} By using the well known identity \cite{MR0434929}\begin{align*} \zeta(-n,\theta)=-\frac{B_{n+1}(\theta)}{n+1} \end{align*} where $B_{n}(\theta)$ is the nth Bernoulli polynomial. 
So we have \begin{align}\label{m2} \zeta(-2,\theta)- \zeta(-2,1-\theta)=-\frac{1}{3}({B_{3}(\theta)}-B_{3}(1-\theta))=-\frac{1}{3}(\theta-3\theta^2+2\theta^3 ). \end{align} By using partial fraction expansion for $\cot(\pi \theta)$, we can easily get \begin{align*} \zeta(n,1-\theta)+(-1)^n \zeta(n,\theta)=-\frac{\pi }{(n-1)!}\frac{d^{n-1}}{dx^{n-1}}\cot(\pi\theta). \end{align*} Hence by above, we get \begin{align}\label{m3} ( \zeta(3,\theta) - \zeta(3,1-\theta) )=\pi^3(\cot(\pi \theta)+\cot^3(\pi \theta)). \end{align} Substituting \eqref{m2}, \eqref{m3} in \eqref{m1}, we get the result. \end{proof} \begin{proof}[Corollary \rm{\ref{cor2r}}][] Proof directly follows from \eqref{berndt} and Corollary \rm{\ref{cor1r}}. \end{proof} \begin{proof}[Corollary \rm{\ref{cor3r}}][] Substituting $\nu=1/2,a=4\pi $ in the Corollary \rm{\ref{cor2r}} and using \eqref{property}, we get the result. \end{proof} \begin{proof}[Theorem \rm{\ref{even1_based}} and Theorem \rm{\ref{thmeven1}}][] (Theorem \ref{thmeven1} and Proposition \ref{first paper} $\Rightarrow$ Theorem \ref{even1_based}) We begin our proof by considering the expression on the left-hand side of the identity in Theorem \ref{even1_based}. Employing \eqref{sin}, we have \begin{align} &\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left(\frac{ 2\pi dh }{q}\right) =\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \left(\sum_{\substack{d|n\\ q|d}} d^k +\sum_{\substack{d|n\\ q\nmid d}}d^k \cos \left(\frac{ 2\pi dh }{q}\right)\right)\notag\\ &=\sum_{m=1}^{\infty} (qm)^{\nu/2} K_{\nu}(a\sqrt{qmx}) \sum_{d|m}(qd)^k+ \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{\substack{d|n\\ q\nmid d}} \frac{d^k}{\phi(q)} \sum_{\chi \ even } \chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=q^{\frac{\nu}{2}+k}\sum_{m=1}^{\infty} m^{\nu/2} K_{\nu}(a\sqrt{qmx}) \sum_{d|m}d^k- \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{\substack{d|n\\ q\nmid d}} \frac{d^k}{\phi(q)} \chi_0(d) \notag\\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{\substack{d|n\\ q\nmid d}} \frac{d^k}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}}\chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=q^{\frac{\nu}{2}+k}\sum_{m=1}^{\infty} m^{\nu/2} K_{\nu}(a\sqrt{qmx}) \sum_{d|m}d^k- \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx})\frac{1}{ {\phi(q)}}\left( \sum_{d|n}d^k- \sum_{\substack{d|n\\ q| d}} d^k \right) \notag\\& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{ d|n } d^k\chi(d) \notag\\ &=\frac{q^{\frac{\nu}{2}+k+1}}{\phi(q)}\sum_{m=1}^{\infty} \sigma_k(m)m^{\nu/2} K_{\nu}(a\sqrt{qmx}) - \frac{1}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_k(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \sigma_{k,\chi}(n)n^{\nu/2} K_{\nu}(a\sqrt{nx} ). \label{even1.1} \end{align} Now, we first evaluate the first two sums on the right-hand side of \eqref{even1.1}. 
By Proposition \eqref{first paper}, we have \begin{align} &\frac{q^{\frac{\nu}{2}+k+1}}{\phi(q)}\sum_{m=1}^{\infty} \sigma_k(m)m^{\nu/2} K_{\nu}(a\sqrt{qmx}) - \frac{1}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_k(n) n^{\nu/2} K_{\nu}(a\sqrt{nx})=-\frac{\Gamma(\nu) \zeta(-k)}{ 4\phi(q)} X^{\frac{\nu}{2}} \left( q^{k+1}-1 \right) \notag\\ &-\frac{\Gamma(1+\nu) }{4}X^{1+\frac{\nu}{2}} \delta_{k,1} +\frac{(-1)^{\frac{k+1}{2}}}{2\phi(q)} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1)\left( 2\pi X\right)^{k+1}\left(\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }-\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2}\frac{n}{x}+1\right)^{\upsilon+k+1} }\right), \label{even1.2} \end{align} where $\delta_{k,1}$ is defined in \eqref{del_k}. Now, we examine the last sum on the right-hand side of \eqref{even1.1}. By \eqref{thm3} of Theorem \eqref{thmeven1}, we have \begin{align} &\frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \sigma_{k,\chi}(n)n^{\nu/2} K_{\nu}(a\sqrt{nx}) =\frac{(-1)^{\frac{k-1}{2}}k!q^{k+1}}{2(2\pi)^{k+1}\phi(q)} X^\frac{\upsilon}{2} \Gamma(\upsilon) \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)L(1+k,\Bar{\chi})\notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{(-1)^{\frac{k+1}{2}}}{2\phi(q)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\Gamma(\upsilon+k+1)\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)\sum_{n=1}^\infty \frac{ \Bar{\sigma}_{k,\Bar{\chi}}(n)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }.\label{even1.3} \end{align} We consider \begin{align} \sum_{\substack{\chi \neq \chi_0\\ \chi\ even}} \chi(h)L(1+k,\Bar{\chi})&= \sum_{\chi \ even}\chi(h)L(1+k,\Bar{\chi})-L(1+k, {\chi_0})\notag\\ &= \sum_{\chi\ even}\chi(h) \frac{1}{q^{k+1}}\sum_{r=1}^{q-1}\bar{\chi}(r) \zeta(k+1,r/q)- \left(\sum_{n=1}^\infty\frac{1}{n^{k+1}} -\sum_{n=1}^\infty\frac{1}{(nq)^{k+1}}\right)\notag\\ &= \frac{1}{q^{k+1}}\sum_{r=1}^{q-1} \zeta(k+1,r/q) \sum_{\chi\ even}\chi(h)\bar{\chi}(r)- \left( 1-\frac{1}{q^{k+1}} \right)\zeta(k+1) \notag\\ &= \frac{\phi(q)}{2q^{k+1}} \left\{ \zeta(k+1,h/q)+\zeta(k+1,1-h/q) \right\} - \left( 1-\frac{1}{q^{k+1}} \right)\zeta(k+1), \label{even1.4} \end{align} in the last line, we used \eqref{Hurwitz} and \eqref{prop1}. 
Now, we examine the last expression in \eqref{even1.3}, and we obtain \begin{align} & \frac{1}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)\sum_{n=1}^\infty \frac{\Bar{\sigma}_{k,\Bar{\chi}}(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } = \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{\sum_{d/n}d^k}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \bar{\chi}(n/d) \notag\\ &= \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{\sum_{d/n}d^k}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \left\{ \sum_{ \chi even} \chi(h) \bar{\chi}(n/d) - \chi_0(n/d)\right\} \notag\\ &= \frac{1}{2} \sum_{d=1}^\infty d^k \sum_{\substack{r=1 \\ r \equiv \pm h \ (mod \ q) }}^\infty \frac{1}{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } - \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{ \left(\sigma_k(n)-\sigma_k(n/q)\right)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ &=\frac{1}{2} \sum_{d=1}^\infty d^k \sum_{m=0 }^\infty \left\{ \frac{1}{\left(\frac{16\pi^2}{a^2q}\frac{d(mq+h)}{x}+1\right)^{\upsilon+k+1} } +\frac{1}{\left(\frac{16\pi^2}{a^2q}\frac{d(mq+q-h)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{ \sigma_k(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } +\frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{ \sigma_k(n/q) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }\notag\\ &=\frac{1}{2} \sum_{d=1}^\infty d^k \sum_{m=0 }^\infty \left\{ \frac{1}{\left(\frac{16\pi^2}{a^2}\frac{d(m+h/q)}{x}+1\right)^{\upsilon+k+1} } +\frac{1}{\left(\frac{16\pi^2}{a^2}\frac{d(m+1-h/q)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{ \sigma_k(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } +\frac{1}{\phi(q)} \sum_{r=1}^\infty \frac{ \sigma_k(r) }{\left(\frac{16\pi^2}{a^2}\frac{r}{x}+1\right)^{\upsilon+k+1} }. \label{even1.5} \end{align} Substituting \eqref{even1.4}, \eqref{even1.5} in \eqref{even1.3}, we obtain \begin{align} &\frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \sigma_{k,\chi}(n)n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\ &=\frac{(-1)^{\frac{k-1}{2}}k! }{2(2\pi)^{k+1} } X^\frac{\upsilon}{2} \Gamma(\upsilon)\left\{ \frac{ 1}{2 } \left( \zeta(k+1,h/q)+\zeta(k+1,1-h/q) \right) - \frac{ q^{k+1}- 1 }{\phi(q)} \zeta(k+1) \right\}\notag\\ & +\frac{(-1)^{\frac{k+1}{2}}}{4} X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\Gamma(\upsilon+k+1)\sum_{d=1}^\infty d^k \sum_{m=0 }^\infty \left\{ \frac{1}{\left(\frac{16\pi^2}{a^2}\frac{d(m+h/q)}{x}+1\right)^{\upsilon+k+1} } +\frac{1}{\left(\frac{16\pi^2}{a^2}\frac{d(m+1-h/q)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{(-1)^{\frac{k+1}{2}}}{2\phi(q)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\Gamma(\upsilon+k+1) \sum_{n=1}^\infty \frac{ \sigma_k(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{(-1)^{\frac{k+1}{2}}}{2\phi(q)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\Gamma(\upsilon+k+1)\sum_{r=1}^\infty \frac{ \sigma_k(r) }{\left(\frac{16\pi^2}{a^2}\frac{r}{x}+1\right)^{\upsilon+k+1} }.
\label{even1.6}\end{align} Inserting \eqref{even1.2} and \eqref{even1.6} into \eqref{even1.1}, we get the result. (Theorem \ref{even1_based} $\Rightarrow$ Theorem \ref{thmeven1}) Let $\theta=h/q$, and $\chi$ be an even primitive non-principal character modulo $q$. Multiplying the identity \eqref{1290} in Theorem \ref{even1_based} by $\bar{\chi}(h)/\tau(\bar{\chi})$, and then summing on $h$, $ 0<h<q$. The remaining steps are similar to proof of Theorem \ref{M1}. The proof is easy and similar to the case when Theorem \end{proof} \begin{proof}[Theorem \rm{\ref{even2_based}} and Theorem \rm{\ref{even2}}][] The proof is similar to the previous one. To avoid repetition we skip the detail. \end{proof} \iffalse We begin our proof by considering the expression on the left-hand side of the identity in Theorem \ref{even2_based}. Employing \eqref{sin} \begin{align} &\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left(\frac{ 2\pi nh }{dq}\right) =\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \left(\sum_{\substack{d|n\\ q/d}} \left(\frac{n}{d}\right)^k +\sum_{\substack{d|n\\ q\nmid d}}\left(\frac{n}{d}\right)^k \cos \left(\frac{ 2\pi dh }{q}\right)\right)\notag\\ &=\sum_{m=1}^{\infty} (qm)^{\nu/2} K_{\nu}(a\sqrt{qmx}) \sum_{d|m} \left(\frac{n}{d}\right)^k +\frac{1}{\phi(q)} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{\substack{d|n\\ q\nmid d}} \left(\frac{n}{d}\right)^k \sum_{\chi \ even } \chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=q^{\frac{\nu}{2}}\sum_{m=1}^{\infty} \sigma_k(m)m^{\nu/2} K_{\nu}(a\sqrt{qmx}) - \frac{1}{\phi(q)}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{\substack{d|n\\ q\nmid d}} \left(\frac{n}{d}\right)^k \chi_0(d) \notag\\& \ \ \ \ \ \ \ \ \ +\frac{1}{\phi(q)}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{\substack{d|n\\ q\nmid d}} \left(\frac{n}{d}\right)^k\sum_{\substack{\chi \neq \chi_0\\\chi even}}\chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=q^{\frac{\nu}{2}}\sum_{m=1}^{\infty}\sigma_k(m) m^{\nu/2} K_{\nu}(a\sqrt{qmx}) - \frac{1}{ {\phi(q)}} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \left( \sum_{d|n}\left(\frac{n}{d}\right)^k- \sum_{\substack{d|n\\ q/ d}} \left(\frac{n}{d}\right)^k \right) \notag\\& \ \ \ \ \ \ \ \ \ +\frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{ d|n } \left(\frac{n}{d}\right)^k\chi(d) \notag\\ &=\frac{q^{\frac{\nu}{2}+1}}{\phi(q)}\sum_{m=1}^{\infty} \sigma_k(m)m^{\nu/2} K_{\nu}(a\sqrt{qmx}) - \frac{1}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_k(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\& \ \ \ \ \ \ \ \ \ + \frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \bar{\sigma}_{k,\chi}(n)n^{\nu/2} K_{\nu}(a\sqrt{nx} ). \label{even2.1} \end{align} We first evaluate the first two sums on the right-hand side of \eqref{even2.1}. 
By Proposition \eqref{first paper}, we have \begin{align} &\frac{q^{\frac{\nu}{2}+1}}{\phi(q)}\sum_{m=1}^{\infty} \sigma_k(m)m^{\nu/2} K_{\nu}(a\sqrt{qmx}) - \frac{1}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_k(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\ & =\frac{ \Gamma( k+1) }{2\phi(q)} \Gamma(\upsilon+k+1) \zeta(k+1)X^{\frac{\upsilon}{2}+k+1}\left( \frac{1}{q^k}-1\right) -\frac{\Gamma(\nu) \zeta(-k)}{4 } X^{\frac{\nu}{2}} \notag\\ &+\frac{(-1)^{\frac{k+1}{2}}}{2\phi(q)} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1)\left( 2\pi X\right)^{k+1}\left(\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{q^k\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }-\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2}\frac{n}{x}+1\right)^{\upsilon+k+1} }\right). \label{even2.2} \end{align} Now, we examine the last sum on the right-hand side of \eqref{even2.1}. By Theorem \eqref{even2}, we have \begin{align} &\frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \Bar{\sigma}_{k,\chi}(n)n^{\nu/2} K_{\nu}(a\sqrt{nx}) =\frac{k!}{2\phi(q)}\Gamma(\upsilon+k+1) X^{k+1+\frac{\upsilon}{2}} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)\tau(\bar{\chi})L(1+k,\Bar{\chi}) \notag\\&+ \frac{(-1)^{\frac{k+1}{2}}(2 \pi X)^{k+1}}{2q^k\phi(q)} X^\frac{\upsilon}{2} \Gamma(\upsilon+k+1) \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)\sum_{n=1}^\infty \frac{ \sigma_{k,\Bar{\chi}}(n)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }.\label{even2.3} \end{align} We consider \begin{align} &\frac{1}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\ \chi\ even}} \chi(h)\tau(\bar{\chi})L(1+k,\Bar{\chi})= \frac{(-1)^{\frac{k+1}{2}}\left( 2\pi \right)^{k+1}}{2k!q^k\phi(q)} \sum_{\substack{\chi \neq \chi_0\\ \chi\ even}} \chi(h)L(-k,\Bar{\chi}) \notag\\ &=\frac{(-1)^{\frac{k+1}{2}}\left( 2\pi \right)^{k+1}}{2k!\phi(q)} \sum_{\substack{\chi \neq \chi_0\\ \chi\ even}} \chi(h)\sum_{r=1}^{q-1}\bar{\chi}(r) \zeta(-k,r/q) = \frac{(-1)^{\frac{k+1}{2}}\left( 2\pi \right)^{k+1}}{2k!\phi(q)} \sum_{r=1}^{q-1} \zeta(-k,r/q) \left\{\sum_{ \chi\ even} \chi(h)\bar{\chi}(r) -1\right\}\notag\\ &= \frac{(-1)^{\frac{k+1}{2}}\left( 2\pi \right)^{k+1}}{4k! } \left\{ \zeta(-k,h/q)+\zeta(-k,1-h/q) + \frac{2}{\phi(q)}\left( 1-\frac{1}{q^{k}} \right)\zeta(-k) \right\}, \label{even2.4} \end{align} in the last line, we used \eqref{Hurwitz} and \eqref{prop1}. 
Now we examine the last expression in \eqref{even2.3}, we obtain \begin{align} & \frac{1}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)\sum_{n=1}^\infty \frac{ \sigma_{k,\Bar{\chi}}(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } = \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{\sum_{d/n}d^k}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \left\{ \sum_{ \chi even} \chi(h) \bar{\chi}(d) - \chi_0(d)\right\} \notag\\ &= \frac{1}{2} \sum_{r=1}^\infty \sum_{\substack{d=1\{\delta}\equiv \pm h(mod q) }}^\infty \frac{ d^k }{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } - \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{ \left(\sigma_k(n)-q^k\sigma_k(n/q)\right)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ &=\frac{q^k}{2} \sum_{r=1}^\infty \sum_{m=0 }^\infty \left\{ \frac{(m+h/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{r(m+h/q)}{x}+1\right)^{\upsilon+k+1} } +\frac{(m+1-h/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{r(m+1-h/q)}{x}+1\right)^{\upsilon+k+1} } \right\} - \frac{1}{\phi(q)} \sum_{n=1}^\infty \frac{ \sigma_k(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} }\notag\\ & +\frac{q^k}{\phi(q)} \sum_{r=1}^\infty \frac{ \sigma_k(r) }{\left(\frac{16\pi^2}{a^2}\frac{r}{x}+1\right)^{\upsilon+k+1} } \label{even2.5} \end{align} Substituting \eqref{even2.4}, \eqref{even2.5} in \eqref{even2.3}, we obtain \begin{align} &\frac{1}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \bar{\sigma}_{k,\chi}(n)n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\ &= \frac{(-1)^{\frac{k+1}{2}}\left( 2\pi \right)^{k+1}X^{\frac{\nu}{2}+k+1}}{8 }\Gamma(\upsilon+k+1) \left\{ \zeta(-k,h/q)+\zeta(-k,1-h/q) + \frac{2}{\phi(q)}\left( 1-\frac{1}{q^{k}} \right)\zeta(-k) \right\} \notag\\ &+\frac{(-1)^{\frac{k+1}{2}}(2 \pi X)^{k+1}}{4 } X^\frac{\upsilon}{2} \Gamma(\upsilon+k+1) \sum_{r=1}^\infty \sum_{m=0 }^\infty \left\{ \frac{(m+h/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{r(m+h/q)}{x}+1\right)^{\upsilon+k+1} } +\frac{(m+1-h/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{r(m+1-h/q)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\ & - \frac{(-1)^{\frac{k+1}{2}}(2 \pi X)^{k+1}}{2q^k\phi(q)} X^\frac{\upsilon}{2} \Gamma(\upsilon+k+1) \sum_{n=1}^\infty \frac{ \sigma_k(n) }{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ & + \frac{(-1)^{\frac{k+1}{2}}(2 \pi X)^{k+1}}{2\phi(q)} X^\frac{\upsilon}{2} \Gamma(\upsilon+k+1) \sum_{r=1}^\infty \frac{ \sigma_k(r) }{\left(\frac{16\pi^2}{a^2}\frac{r}{x}+1\right)^{\upsilon+k+1} }. \notag\\ \label{even2.6}\end{align} Inserting \eqref{even2.2} and \eqref{even2.6} into \eqref{even2.1}, we get the result. (Theorem \ref{even2_based} $\Rightarrow$ Theorem \ref{even2}) The proof is similar to that of Theorem \ref{M2}. \fi \begin{proof}[Theorem \rm{\ref{botheven_odd1_based}} and Theorem \rm{\ref{botheven_odd2_based}} and Theorem \rm{\ref{botheven_odd}}][] (Theorem \ref{botheven_odd} $\Rightarrow$ Theorem \ref{botheven_odd1_based}) It is sufficient to prove the theorem for rationals $\theta=h_1/p$ and $\psi=h_2/q$ where $p$ and $q$ are primes, $0<h_1<p$ and $0<h_2<q$. Now we multiply both sides of the identity \eqref{THM5} in Theorem \ref{botheven_odd} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$, then sum on odd primitive character $\chi_1$ modulo $p$ and $\chi_2$ modulo $q$. 
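For the reader's convenience, we record the elementary evaluation of the twisted character sums that drives the next step (stated for a prime modulus $q$ and $(m,q)=1$; both sides vanish when $q\mid m$):
\begin{align*}
\frac{1}{\phi(q)}\sum_{\chi\ odd}\chi(m)\tau(\bar{\chi})
=\frac{1}{\phi(q)}\sum_{a=1}^{q-1}e^{\frac{2\pi i a}{q}}\sum_{\chi\ odd}\chi(m)\bar{\chi}(a)
=\frac{1}{2}\left(e^{\frac{2\pi i m}{q}}-e^{-\frac{2\pi i m}{q}}\right)
=i\sin\left(\frac{2\pi m}{q}\right).
\end{align*}
Applying this modulo $p$ with $m=dh_1$ and modulo $q$ with $m=(n/d)h_2$, and using $i^2=-1$, produces the product of sines in \eqref{5.1} below; this is how \eqref{sin} enters the computation.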
From the left-hand side of the identity \eqref{THM5}, we have \begin{align} &\frac{1}{\phi(p)\phi(q)}\sum_{\chi_2\ odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1\ odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \notag\\ &= \frac{1}{\phi(p)\phi(q)} \sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \sum_{d|n}d^k \sum_{\chi_2\ odd}\chi_{2}(h_2)\chi_{2}(n/d) \tau(\Bar{\chi_{2}}) \sum_{\chi_1 \ odd}\chi_{1}(h_1)\chi_{1}(d) \tau(\Bar{\chi_{1}}) \notag\\ &=-\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( \frac{2\pi d h_1}{p}\right)\sin \left( \frac{2\pi n h_2}{dq}\right), \label{5.1} \end{align}in the last step we have used the \eqref{sin}. Now we examine the right-hand side of identity \eqref{THM5}. So, we consider \begin{align} &\frac{1}{\phi(p)\phi(q)}\sum_{\chi_2 \ odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1\ odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \tau(\chi_1)\tau(\chi_2) \sum_{n=1}^\infty \sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ &=\frac{1}{\phi(p)\phi(q)}\sum_{\chi_2\ odd}\chi_{2}(h_2)\tau(\chi_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1\ odd}\chi_{1}(h_1)\tau(\chi_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sum_{d|n}d^k \Bar{ \chi_2}(d) \Bar{ \chi_1}(n/d)\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ &=\frac{pq}{\phi(p)\phi(q)}\sum_{n=1}^\infty\sum_{d|n}d^k\frac{\Gamma(\upsilon+k+1)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\Bar{ \chi_2}(d)\sum_{\chi_1\ odd}\chi_{1}(h_1) \Bar{ \chi_1}(n/d) \notag\\ &= \frac{pq\Gamma(\upsilon+k+1)}{\phi(p)\phi(q)}\sum_{d;r\geq 1} \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\Bar{ \chi_2}(d)\sum_{\chi_1\ odd}\chi_{1}(h_1) \Bar{ \chi_1}(r) \notag\\ &= \frac{pq}{4}\Gamma(\upsilon+k+1)\sum_{m;n\geq 0} \left( \frac{ (nq+h_2)^k}{\left(\frac{16\pi^2}{a^2pq}\frac{(nq+h_2)(mp+h_1)}{x}+ 1 \right)^{\upsilon+k+1}} -\frac{ (nq+q-h_2)^k}{\left(\frac{16\pi^2}{a^2pq}\frac{(nq+q-h_2)(mp+h_1)}{x}+ 1 \right)^{\upsilon+k+1}} \right.\notag\\&\left.\ \ - \frac{ (nq+h_2)^k}{\left(\frac{16\pi^2}{a^2pq}\frac{(nq+h_2)(mp+p-h_1)}{x}+ 1 \right)^{\upsilon+k+1}} + \frac{ (nq+q-h_2)^k}{\left(\frac{16\pi^2}{a^2pq}\frac{(nq+h_2)(mp+p-h_1)}{x}+ 1 \right)^{\upsilon+k+1}} \right) \notag\\ &= \frac{pq^{k+1}}{4}\Gamma(\upsilon+k+1)\sum_{m;n\geq 0} \left( \frac{ (n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} -\frac{ (n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-h_2/q)(m+h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} \right.\notag\\&\left.\ \ - \frac{ (n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+1-h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} + \frac{ (n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+1-h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} \right). \label{5.2} \end{align} Substituting \eqref{5.1} and \eqref{5.2} in \eqref{THM5}, we get the result. 
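For clarity, we also record the two standard facts used in passing to \eqref{5.2}. For a primitive character $\chi$ modulo a prime $q$ one has $\tau(\chi)\tau(\bar{\chi})=\chi(-1)q$, so that for odd $\chi_1$, $\chi_2$ the product $\tau(\chi_1)\tau(\bar{\chi_1})\tau(\chi_2)\tau(\bar{\chi_2})$ contributes the factor $pq$; moreover, for $(d,q)=1$,
\begin{align*}
\sum_{\chi\ odd}\chi(h)\bar{\chi}(d)=
\begin{cases}
\ \ \frac{\phi(q)}{2}, & d\equiv h\ (\mathrm{mod}\ q),\\
-\frac{\phi(q)}{2}, & d\equiv -h\ (\mathrm{mod}\ q),\\
\ \ 0, & \text{otherwise},
\end{cases}
\end{align*}
which converts the character sums over $\chi_1$ and $\chi_2$ into the residue conditions, with the signs appearing in \eqref{5.2}.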
\iffalse Using \eqref{5.2} and Theorem \ref{botheven_odd}, we obtain \begin{align} &\frac{1}{\phi(p)\phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1 odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \notag\\ &=\frac{(-1)^{\frac{k+1}{2}}}{8 \left( 2\pi \right)^{-k-1}} \sum_{m;n\geq 0} \left( \frac{ (n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} -\frac{ (n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} \right.\notag\\&\left.\ \ - \frac{ (n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+1-h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} + \frac{ (n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+1-h_1/p)}{x}+ 1 \right)^{\upsilon+k+1}} \right) \Gamma(\upsilon+k+1) X^{\frac{\upsilon}{2}+k+1} . \notag\\ \label{5.3} \end{align} Combining \eqref{5.1} and \eqref{5.3}, we get the result. \fi (Theorem \ref{botheven_odd}, Theorem \ref{even1_based} and Theorem \ref{even2_based} and Proposition \ref{first paper} $\Rightarrow$ Theorem \ref{botheven_odd2_based}) We multiply both sides of Theorem \ref{botheven_odd} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$ , then sum on non-principal primitive even character $\chi_1$ modulo $p$ and $\chi_2$ modulo $q$. We examine the left-hand side of \eqref{THM5} of Theorem \ref{botheven_odd}. Using \eqref{sin}, we have \begin{align}\label{big1} &\frac{1}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{k, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\ &=\frac{1}{\phi(p)\phi(q)}\sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{d|n}d^k \left\{ \sum_{\substack{\chi_2 \neq \chi_0\\\chi_2\ even}}\chi_{2}(h_2) { \chi_2}(n/d)\tau(\Bar{\chi_{2}}) \right\} \left \{ \sum_{\substack{\chi_1 \neq \chi_0\\\chi_1\ even}}\chi_{1}(h_1) { \chi_1}(d)\tau(\Bar{\chi_{1}}) \right \} \notag\\ &= \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{d|n}d^k \left\{ \frac{1}{\phi(q) }\sum_{ \chi_2\ even}\chi_{2}(h_2) { \chi_2}(n/d)\tau(\Bar{\chi_{2}})+ \frac{\chi_0(n/d)}{\phi(q)} \right\} \notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\left \{\frac{1}{\phi(p) } \sum_{ \chi_1\ even}\chi_{1}(h_1) { \chi_1}(d)\tau(\Bar{\chi_{1}}) + \frac{\chi_0(d)}{\phi(p)} \right \} \notag\\ &= \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{\substack{d|n\\ (p,d)=(q,n/d)=1}}d^k \left\{ \cos \left( \frac{2\pi n h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right)+\frac{1}{\phi(p) } \cos \left( \frac{2\pi nh_2 }{dq}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{\phi(q) } \cos \left( \frac{2\pi dh_1 }{p}\right) +\frac{1}{\phi(p) \phi(q) } \right\} \notag\\ &= \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d|n }d^k \left\{ \cos \left( \frac{2\pi n h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right)+\frac{1}{\phi(p) } \cos \left( \frac{2\pi nh_2 }{dq}\right)+\frac{1}{\phi(q) } \cos \left( \frac{2\pi dh_1 }{p}\right) \right.\notag\\&\left.\ \ \ \ \ +\frac{1}{\phi(p) \phi(q) } \right\} -\frac{p}{\phi(p) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{\substack{d|n\\ p|d}}d^k\left\{ \cos \left( \frac{2\pi nh_2 }{dq}\right)+\frac{1}{\phi(q) } \right\} 
\notag\\ & \ \ \ \ \ \ -\frac{q}{\phi(q) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{\substack{d|n\\ q| \frac{n}{d}}}d^k\left\{ \cos \left( \frac{2\pi dh_1 }{p}\right)+\frac{1}{\phi(p) } \right\} +\frac{pq}{\phi(p) \phi(q)} \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{\substack{d|n\\ p|d, \ q| \frac{n}{d}}}d^k \notag\\ &= \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d|n }d^k \cos \left( \frac{2\pi n h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right) \notag\\ &+\left\{ \frac{1}{\phi(p) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d|n }d^k \cos \left( \frac{2\pi nh_2 }{dq}\right)- \frac{p^{\frac{\nu}{2}+k+1}}{\phi(p) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d|m }d^k \cos \left( \frac{2\pi mh_2 }{dq}\right) \right\} \notag\\ &+\left\{\frac{1}{\phi(q) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d|n }d^k \cos \left( \frac{2\pi dh_1 }{p}\right) - \frac{q^{\frac{\nu}{2}+1}}{\phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d|m }d^k \cos \left( \frac{2\pi dh_1 }{p}\right) \right\} \notag\\ &+\left\{\frac{1}{\phi(p) \phi(q)} \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{ d|n }d^k-\frac{p^{\frac{\nu}{2}+k+1}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d|m }d^k \right.\notag\\&\left.\ \ \ \ \ - \frac{q^{\frac{\nu}{2}+1}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d|m }d^k + \frac{p^{\frac{\nu}{2}+k+1}q^{\frac{\nu}{2}+1}}{\phi(p) \phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpqx})\sum_{ d|m }d^k \right\}. \end{align} Employing Theorem \ref{even2_based} with $\theta=h_2/q$, we evaluate the second and third terms on the right-hand side of \eqref{big1} as follows: \begin{align}\label{small1} & \frac{1}{\phi(p) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^k \cos \left( \frac{2\pi nh_2 }{dq}\right)- \frac{p^{\frac{\nu}{2}+k+1}}{\phi(p) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d|m }d^k \cos \left( \frac{2\pi mh_2 }{dq}\right) \notag\\ =&\frac{(-1)^{\frac{k+1}{2}}(2 \pi X)^{k+1}}{4q^k \phi(p) } X^\frac{\upsilon}{2} \Gamma(\upsilon+k+1) \left\{ \sum_{r=1}^\infty \sum_{\substack{d=1\\ d\equiv \pm h_2(q) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2q}\frac{rd}{x}+1\right)^{\upsilon+k+1} } - \sum_{r=1}^\infty \sum_{\substack{d=1\\ d\equiv\pm h_2(q) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{rd}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\&+\frac{\Gamma(\nu) \zeta(-k)}{4\phi(p) } X^{\frac{\nu}{2}}\left( p^{k+1}-1 \right). \end{align} Using Theorem \ref{even1_based} with $\theta=h_1/p$, we evaluate the fourth and fifth terms on the right-hand side of \eqref{big1} as follows: \begin{align}\label{small2} &\frac{1}{\phi(q) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d|n }d^k \cos \left( \frac{2\pi dh_1 }{p}\right) - \frac{q^{\frac{\nu}{2}+1}}{\phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d|m }d^k \cos \left( \frac{2\pi dh_1 }{p}\right)\notag\\ =&\frac{(-1)^{\frac{k+1}{2}}k! 
}{4(2\pi)^{k+1} } X^\frac{\upsilon}{2} \Gamma(\upsilon)\left\{ \zeta(k+1,h_1/p)+\zeta(k+1,1-h_1/p ) \right\} \notag\\
+& \frac{(-1)^{\frac{k+1}{2}}}{4\phi(q) } X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\Gamma(\upsilon+k+1) \left\{ \sum_{d=1}^\infty \sum_{\substack{r=1\\ r\equiv\pm h_1(p) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2p}\frac{rd}{x}+1\right)^{\upsilon+k+1} } - \frac{1}{q^k}\sum_{d=1}^\infty \sum_{\substack{r=1\\ r\equiv\pm h_1(p) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{rd}{x}+1\right)^{\upsilon+k+1} } \right\}. \end{align}
Using Proposition \ref{first paper}, we evaluate the last four terms on the right-hand side of \eqref{big1} as follows:
\begin{align}\label{small3}
&\frac{1}{\phi(p) \phi(q)} \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{ d|n }d^k-\frac{p^{\frac{\nu}{2}+k+1}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d|m }d^k \notag\\
&- \frac{q^{\frac{\nu}{2}+1}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d|m }d^k + \frac{p^{\frac{\nu}{2}+k+1}q^{\frac{\nu}{2}+1}}{\phi(p) \phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpqx})\sum_{ d|m }d^k \notag\\
& \ \ \ \ =\frac{(-1)^{\frac{k+1}{2}}}{2\phi(p)\phi(q)} X^{\frac{\nu}{2}}\Gamma(\nu+k+1) \left( 2\pi X\right)^{k+1} \left\{ \sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2}\frac{n}{x}+1\right)^{\nu+k+1} }-\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2p}\frac{n}{x}+1\right)^{\nu+k+1} } \right.\notag\\
&\left.\ \ \ \ \ -\frac{1}{q^k}\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\nu+k+1} }+\frac{1}{q^k}\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\nu+k+1} } \right\} -\frac{\Gamma(\nu)\zeta(-k)}{4\phi(p)}X^\frac {\nu}{2}(p^{k+1}-1). \end{align}
Substituting \eqref{small1}, \eqref{small2} and \eqref{small3} into the right-hand side of \eqref{big1}, we deduce
\begin{align}\label{mainp1}
&\frac{1}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{k, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\
&=\sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d|n }d^k \cos \left( \frac{2\pi n h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right) \notag\\
&+\frac{(-1)^{\frac{k+1}{2}}(2 \pi X)^{k+1}}{4q^k \phi(p) } X^\frac{\upsilon}{2} \Gamma(\upsilon+k+1) \left\{ \sum_{r=1}^\infty \sum_{\substack{d=1\\ d\equiv\pm h_2(q) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2q}\frac{rd}{x}+1\right)^{\upsilon+k+1} } - \sum_{r=1}^\infty \sum_{\substack{d=1\\ d\equiv \pm h_2(q) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{rd}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\
&+ \frac{(-1)^{\frac{k+1}{2}}k!
}{4(2\pi)^{k+1} } X^\frac{\upsilon}{2} \Gamma(\upsilon)\left\{ \zeta(k+1,h_1/p)+\zeta(k+1,1-h_1/p ) \right\} \notag\\ &+ \frac{(-1)^{\frac{k+1}{2}}}{4\phi(q) } X^\frac{\upsilon}{2}(2 \pi X)^{k+1}\Gamma(\upsilon+k+1) \left\{ \sum_{d=1}^\infty \sum_{\substack{r=1\\ r\equiv \pm h_1(p) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2p}\frac{rd}{x}+1\right)^{\upsilon+k+1} } - \frac{1}{q^k}\sum_{d=1}^\infty \sum_{\substack{r=1\\ r\equiv \pm h_1(p) } }^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{rd}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\ &+ \frac{(-1)^{\frac{k+1}{2}}}{2\phi(p)\phi(q)} X^{\frac{\nu}{2}}\Gamma(\nu+k+1) \left( 2\pi X\right)^{k+1} \left\{ \sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2}\frac{n}{x}+1\right)^{\nu+k+1} }-\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2p}\frac{n}{x}+1\right)^{\nu+k+1} } \right.\notag\\&\left.\ \ \ \ \ -\frac{1}{q^k}\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2q}\frac{n}{x}+1\right)^{\nu+k+1} }+\frac{1}{q^k}\sum_{n=1}^\infty \frac{ \sigma_{k }(n)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\nu+k+1} } \right\}. \end{align} Next, we examine the right-hand side of \eqref{THM5} of Theorem \ref{botheven_odd}. \begin{align}\label{mainp2} &\frac{1}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \tau(\chi_1)\tau(\chi_2) \sum_{n=1}^\infty \sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)\frac{\sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \notag\\ &=\frac{pq}{\phi(p)\phi(q)}\sum_{n=1}^\infty\sum_{d|n} \frac{d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \left\{\sum_{\chi_2\ even}\chi_{2}(h_2)\Bar{ \chi_2}(d)-\chi_0(d)\right\} \left\{\sum_{\chi_1\ even}\chi_{1}(h_1) \Bar{ \chi_1}(n/d)-\chi_0(n/d) \right\} \notag\\ &=\frac{pq}{\phi(p)\phi(q)}\sum_{d,r\geq 1}^\infty \frac{d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \left\{\sum_{\chi_2\ even}\chi_{2}(h_2)\Bar{ \chi_2}(d)-\chi_0(d)\right\} \left\{\sum_{\chi_1\ even}\chi_{1}(h_1) \Bar{ \chi_1}(r)-\chi_0(r) \right\} \notag\\ &= \frac{pq }{4}\sum_{\substack{d=1\\ d \equiv \pm h_2(q)}}^\infty \sum_{\substack{r=1\\ r \equiv \pm h_1(p)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } -\frac{pq }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{\substack{r=1\\ p \nmid r}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \notag\\ & - \frac{pq }{2\phi(q)}\sum_{\substack{d=1\\ q\nmid d}}^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } + \frac{pq\Gamma(\upsilon+k+1)}{\phi(p)\phi(q)}\sum_{\substack{d=1\\ q\nmid d}}^\infty \sum_{\substack{r=1\\ p \nmid r}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \notag\\ &= \frac{pq }{4}\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } -\frac{pq }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{ \substack{r=1\\ p \nmid r}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \notag\\ & - \frac{pq }{2\phi(q)}\sum_{\substack{d=1\\ q \nmid d}}^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty 
\frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } + \frac{pq }{\phi(p)\phi(q)}\sum_{\substack{d=1\\ q \nmid d}}^\infty \sum_{\substack{r=1\\ p \nmid r}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \notag\\
&= \frac{pq }{4}\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \notag\\
& +\left\{ -\frac{pq }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{ r=1 }^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } + \frac{pq }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{ r=1 }^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \right\} \notag\\
& - \frac{pq }{2\phi(q)}\sum_{ d=1 }^\infty \sum_{\substack{r=1\\ r \equiv \pm h_1(p)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } + \frac{pq^{k+1} }{2\phi(q)}\sum_{ d=1 }^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2p}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \notag\\
&+ \left\{ \frac{pq }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } - \frac{pq^{k+1} }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2p}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\
&\left.\ - \frac{pq }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } + \frac{pq^{k+1} }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty \frac{d^k }{\left(\frac{16\pi^2}{a^2}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \right\}. \end{align}
Using \eqref{mainp1}, \eqref{THM5} and \eqref{mainp2}, we get the result.\\
(Theorem \ref{botheven_odd1_based} $\Rightarrow$ Theorem \ref{botheven_odd}) Let $\theta=h_1/p$, $\psi=h_2/q$, and let $\chi_1$ and $\chi_2$ be odd primitive characters modulo $p$ and $q$, respectively. Multiplying the identity \eqref{rr3} in Theorem \ref{botheven_odd1_based} by $\bar{\chi_1}(h_1)\bar{\chi_2}(h_2)/\tau(\bar{\chi_1})\tau(\bar{\chi_2})$, and then summing on $h_1$ and $h_2$, $0<h_1<p$, $0<h_2<q$, one can prove that Theorem \ref{botheven_odd1_based} implies Theorem \ref{botheven_odd}.
(Theorem \ref{botheven_odd2_based} $\Rightarrow$ Theorem \ref{botheven_odd}) Here we take $\chi_1$ and $\chi_2$ to be non-principal even primitive characters modulo $p$ and $q$, respectively. Arguing as above, one can prove this result. \end{proof}
\begin{proof}[Theorem \rm{\ref{even-odd1_based}} and Theorem \rm{\ref{even-odd2_based}} and Theorem \rm{\ref{even-odd}}][] (Theorem \ref{even-odd} and Theorem \ref{odd2_based} $\Rightarrow$ Theorem \ref{even-odd1_based}) Let us multiply the identity \eqref{THM6} of Theorem \ref{even-odd} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /i\phi(q)$, then take sum on non-principal even primitive characters $\chi_1$ modulo $p$ and odd primitive characters $\chi_2$ modulo $q$.
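In the computation of the left-hand side below, the sum over the non-principal even characters $\chi_1$ is completed to a sum over all even characters modulo $p$, which is then evaluated by the even (cosine) analogue of \eqref{sin}: for $(m,p)=1$,
\begin{align*}
\frac{1}{\phi(p)}\sum_{\chi\ even}\chi(m)\tau(\bar{\chi})=\cos\left(\frac{2\pi m}{p}\right),
\end{align*}
while the left-hand side vanishes when $p\mid m$. This is the source of the factor $\cos\left(\frac{2\pi d h_1}{p}\right)+\frac{1}{\phi(p)}$ and of the restriction $p\nmid d$ in \eqref{evenodd4.1}.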
So, from the left-hand side, we get \begin{align} &\frac{1}{i\phi(p)\phi(q)}\sum_{\chi_2\ odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) \tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\notag\\ =&\frac{1}{i\phi(p)\phi(q)}\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{d/n}d^k\sum_{\chi_2\ odd}\chi_{2}(h_2)\chi_{2}(n/d)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1)\chi_{1}(d)\tau(\Bar{\chi_{1}}) \notag\\ =&\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{d|n}d^k \sin \left( \frac{2\pi nh_2}{dq}\right) \left\{ \frac{1}{\phi(p)}\sum_{ \chi_1 \ even}\chi_{1}(h_1)\chi_{1}(d)\tau(\Bar{\chi_{1}})+\frac{\chi_0(d)}{\phi(p)} \right\} \notag\\ =&\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{\substack{d|n\\p\nmid d}}d^k \sin \left( \frac{2\pi nh_2}{dq}\right) \left\{ \cos \left( \frac{2\pi dh_1 }{p} \right)+\frac{1}{\phi(p)} \right\} \notag\\ =&\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{d|n}d^k \sin \left( \frac{2\pi nh_2}{dq}\right) \left\{ \cos \left( \frac{2\pi dh_1 }{p} \right)+\frac{1}{\phi(p)} \right\} \notag\\ &-\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{\substack{d|n\\p|d}}d^k \sin \left( \frac{2\pi nh_2}{dq}\right) \left\{ 1+\frac{1}{\phi(p)} \right\} \notag\\ =&\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left( \frac{2\pi dh_1 }{p} \right)\sin \left( \frac{2\pi nh_2}{dq}\right) + \frac{1}{\phi(p)} \sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( \frac{2\pi nh_2}{dq}\right) \notag\\ &-\frac{p^{\frac{\nu}{2}+k+1}}{\phi(p)} \sum_{m=1}^\infty m^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{pmx}) \sum_{d|m}d^k \sin \left( \frac{2\pi mh_2}{dq}\right), \label{evenodd4.1} \end{align} we used the \eqref{sin} in the second step. Applying the Theorem \ref{odd2_based} for the last two terms of the right-hand side of \eqref{evenodd4.1}. 
So, we rewrite the right-hand side of \eqref{evenodd4.1} as follows: \begin{align} &\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \cos \left( \frac{2\pi dh_1 }{p} \right)\sin \left( \frac{2\pi nh_2}{dq}\right) \notag \\& +\frac{(-1)^{\frac{k}{2}}}{4\phi(p)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1} \Gamma(\upsilon+k+1)\sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left( \frac{ (m+h_2/q)^k }{(1+\frac{16\pi^2r}{a^2x}(m+h_2/q))^{1+\nu+k} } - \frac{ (m+h_2/q)^k }{(1+\frac{16\pi^2r}{a^2x}(m+1-h_2/q))^{1+\nu+k} } \right)\notag \\&- \frac{(-1)^{\frac{k}{2}}}{4\phi(p)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1} \Gamma(\upsilon+k+1)\sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left( \frac{ (m+h_2/q)^k }{(1+\frac{16\pi^2r}{a^2px}(m+h_2/q))^{1+\nu+k} } -\frac{ (m+h_2/q)^k }{(1+\frac{16\pi^2r}{a^2px}(m+1-h_2/q))^{1+\nu+k} } \right).\label{evenodd4.2} \end{align} Using \eqref{both}, \eqref{prop} and \eqref{prop1} in the right-hand side of \eqref{THM6} of Theorem \ref{even-odd}, we have \begin{align} &-\frac{(-1)^{\frac{k}{2}} X^{\frac{\upsilon}{2}}}{2p\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} \Gamma(\upsilon+k+1) \sum_{n=1}^\infty \frac{\sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2) \tau(\chi_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) \tau(\chi_1) \tau(\Bar{\chi_{1}}) \notag \\ &=\frac{(-1)^{\frac{k}{2}}pq }{2p\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{n=1}^\infty \frac{ \sum_{d/n}d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\bar{\chi}_{2}(d) \notag\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\{ \sum_{ \chi_1 \ even}\chi_{1}(h_1) \bar{\chi}_{1}(n/d)- \bar{\chi}_0(n/d)\right\} \notag \\ &=\frac{(-1)^{\frac{k}{2}}q}{2\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{d=1}^\infty \sum_{r=1}^\infty \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\bar{\chi}_{2}(d) \notag\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\{ \sum_{ \chi_1 \ even}\chi_{1}(h_1) \bar{\chi}_{1}(r)- \bar{\chi}_0(r)\right\} \notag \\ &=\frac{(-1)^{\frac{k}{2}}\left( 2\pi X\right)^{k+1}}{8 } X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{m,n\geq 0}^\infty \left\{\frac{(n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+h_1/p)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{(n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-h_2/q)(m+h_1/p)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ -\frac{(n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-h_2/q)(m+1-h_1/p)}{x}+1\right)^{\upsilon+k+1} } +\frac{(n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+1-h_1/p)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag \\ &-\frac{(-1)^{\frac{k}{2}}q}{2\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{\substack{d,r\geq 1\\p\nmid r}} \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\bar{\chi}_{2}(d). 
\label{evenodd4.3} \end{align}
By \eqref{prop}, for the last term on the right-hand side of \eqref{evenodd4.3} we have
\begin{align}
&\frac{1}{\phi(q)}\sum_{\substack{d,r\geq 1\\p\nmid r}} \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\bar{\chi}_{2}(d) \notag \\
= &\frac{1}{\phi(q)}\sum_{ d,r\geq 1 } \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\bar{\chi}_{2}(d) -\frac{1}{\phi(q)}\sum_{ d,r\geq 1 } \frac{ d^k}{\left(\frac{16\pi^2}{a^2q}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_2\ odd}\chi_{2}(h_2)\bar{\chi}_{2}(d) \notag \\
=&\frac{q^k}{2}\sum_{r=1}^\infty \sum_{m=0}^\infty \left\{\frac{ (m+h_2/q)^k}{\left(\frac{16\pi^2}{a^2p}\frac{(m+h_2/q)r}{x}+1\right)^{\upsilon+k+1} } - \frac{ (m+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2p}\frac{(m+1-h_2/q)r}{x}+1\right)^{\upsilon+k+1} } \right\}\notag \\
&-\frac{q^k}{2}\sum_{r=1}^\infty \sum_{m=0}^\infty \left\{\frac{ (m+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(m+h_2/q)r}{x}+1\right)^{\upsilon+k+1} } - \frac{ (m+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(m+1-h_2/q)r}{x}+1\right)^{\upsilon+k+1} } \right\}. \label{evenodd4.4}\end{align}
Equating \eqref{evenodd4.2} and \eqref{evenodd4.3}, and using \eqref{evenodd4.4}, we get the result.
(Theorem \ref{even-odd} and Theorem \ref{odd1_based} $\Rightarrow$ Theorem \ref{even-odd2_based}) The proof of this theorem is similar to the proof of the previous one. The only difference is that we first multiply the identity \eqref{THM6} of Theorem \ref{even-odd} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /i\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$, and then sum on odd primitive characters $\chi_1$ modulo $p$ and non-principal even primitive characters $\chi_2$ modulo $q$. We leave the details of the proof to the reader.
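We remark that the reduction of the even-character sums to the residue classes $r\equiv\pm h_1\ (\mathrm{mod}\ p)$ in \eqref{evenodd4.3}, with equal signs in contrast to the odd case, rests only on the orthogonality relation for even characters: for $(r,p)=1$,
\begin{align*}
\sum_{\chi\ even}\chi(h_1)\bar{\chi}(r)=
\begin{cases}
\frac{\phi(p)}{2}, & r\equiv \pm h_1\ (\mathrm{mod}\ p),\\
0, & \text{otherwise},
\end{cases}
\end{align*}
the same relation already used in \eqref{even1.5} and \eqref{mainp2}.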
\iffalse \begin{align} &\frac{1}{i\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{ \chi_1\ odd }\chi_{1}(h_1) \tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\notag\\ =&\frac{1}{i\phi(p)\phi(q)}\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{d/n}d^k\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}} \chi_{2}(h_2)\chi_{2}(n/d)\tau(\Bar{\chi_{2}})\sum_{\chi_1\ odd}\chi_{1}(h_1)\chi_{1}(d)\tau(\Bar{\chi_{1}}) \notag\\ =&\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{d/n}d^k \sin \left( \frac{2\pi dh_1}{p}\right) \left\{ \frac{1}{\phi(q)}\sum_{ \chi_2 \ even}\chi_{2}(h_2)\chi_{2}(n/d)\tau(\Bar{\chi_{2}})+\frac{\chi_0(n/d)}{\phi(q)} \right\} \notag\\ =&\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{\substack{d/n\\q\nmid \frac{n}{d}}}d^k \sin \left( \frac{2\pi dh_1}{p}\right) \left\{ \cos \left( \frac{2\pi nh_2 }{dq} \right)+\frac{1}{\phi(q)} \right\} \notag\\ =&\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{d/n}d^k \sin \left( \frac{2\pi dh_1}{p}\right) \left\{\cos \left( \frac{2\pi nh_2 }{dq} \right)+\frac{1}{\phi(q)} \right\}\notag\\& -\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\sum_{\substack{d/n\\q/\frac{n}{d}}}d^k \sin \left( \frac{2\pi dh_1}{p}\right) \left\{ 1+\frac{1}{\phi(q)} \right\} \notag\\ =&\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( \frac{2\pi dh_1 }{p} \right)\cos \left( \frac{2\pi nh_2}{dq}\right) + \frac{1}{\phi(q)} \sum_{n=1}^\infty m^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( \frac{2\pi dh_1}{p}\right) \notag\\ &-\frac{q^{\frac{\nu}{2}+1}}{\phi(q)} \sum_{m=1}^\infty m^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{qmx}) \sum_{d|m}d^k \sin \left( \frac{2\pi dh_1}{p}\right), \label{evenodd5.1} \end{align} we used the \eqref{sin} in the second step. Applying the Theorem \ref{odd1_based} with $\theta= h_1/p $ for the last two terms of the right-hand side of \eqref{evenodd5.1}. So, we rewrite the right-hand side of \eqref{evenodd5.1} as follows: \begin{align} &\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^k \sin \left( \frac{2\pi dh_1 }{p} \right)\cos \left( \frac{2\pi nh_2}{dq}\right)+\frac{(-1)^{\frac{k+1}{2}}}{4i\phi(q)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1} \notag \\& \Gamma(\upsilon+k+1)\sum_{d=1}^{\infty} d^k \sum_{ m=0}^{\infty}\left( \frac{ 1 }{(1+\frac{16\pi^2d}{a^2x}(m+h_1/p))^{1+\nu+k} } - \frac{ 1 }{(1+\frac{16\pi^2d}{a^2x}(m+1-h_1/p))^{1+\nu+k} } \right)\notag \\&- \frac{(-1)^{\frac{k+1}{2}}}{4iq^k\phi(q)} X^\frac{\upsilon}{2}(2 \pi X)^{k+1} \Gamma(\upsilon+k+1)\sum_{r=1}^{\infty} \sum_{ m=0}^{\infty}\left( \frac{ (m+h_1/p)^k }{(1+\frac{16\pi^2d}{a^2qx}(m+h_1/p))^{1+\nu+k} } -\frac{ (m+h_1/p)^k }{(1+\frac{16\pi^2d}{a^2qx}(m+1-h_1/p))^{1+\nu+k} } \right)\notag \\ &+\frac{(-1)^{\frac{k+1}{2}}k!}{4(2\pi)^{k+1}i} X^\frac{\upsilon}{2} \Gamma(\upsilon) \left(\zeta(1+k,h_1/p) - \zeta(1+k, 1-h_1/p)\right). \label{evenodd5.2} \end{align} Now we evaluate the right-hand side of Theorem \ref{even-odd}. 
\begin{align} &\frac{1}{i\phi(p)\phi(q)}\sum_{\chi_1\ odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}})\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2) \tau(\Bar{\chi_{2}}) \sum_{n=1}^\infty \sigma_{k,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx})\notag\\ = &-\frac{(-1)^{\frac{k+1}{2}}}{2ip\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{n=1}^\infty \frac{\sigma_{k,\Bar{\chi_2},\Bar{\chi_1}}(n)}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_1\ odd}\chi_{1}(h_1) \tau(\chi_1)\tau(\Bar{\chi_{1}})\notag \\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2) \tau(\chi_2) \tau(\Bar{\chi_{2}}) \notag \\ =&\frac{(-1)^{\frac{k+1}{2}}pq}{2ip\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{n=1}^\infty \frac{ \sum_{d/n}d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{n}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_1\ odd}\chi_{1}(h_1) \bar{\chi}_{1}(n/d) \notag \\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\{ \sum_{ \chi_2 \ even}\chi_{2}(h_2) \bar{\chi}_{2}(d)- \bar{\chi}_0(d)\right\} \notag \\ =&\frac{(-1)^{\frac{k+1}{2}}q}{2i\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{d=1}^\infty \sum_{r=1}^\infty \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_1\ odd}\chi_{1}(h_1) \bar{\chi}_{1}(r) \notag \\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\{ \sum_{ \chi_2 \ even}\chi_{2}(h_2) \bar{\chi}_{2}(d)- \bar{\chi}_0(d)\right\} \notag \\ =&\frac{(-1)^{\frac{k+1}{2}}\left( 2\pi X\right)^{k+1}}{8i } X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{m,n\geq 0}^\infty \left\{\frac{(n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+h_1/p)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ \ \ \ \ \ \ +\frac{(n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-h_2/q)(m+h_1/p)}{x}+1\right)^{\upsilon+k+1} } \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ -\ \frac{(n+1-h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+1-h_2/q)(m+1-h_1/p)}{x}+1\right)^{\upsilon+k+1} } -\frac{(n+h_2/q)^k}{\left(\frac{16\pi^2}{a^2}\frac{(n+h_2/q)(m+1-h_1/p)}{x}+1\right)^{\upsilon+k+1} } \right\} \notag \\ &-\frac{(-1)^{\frac{k+1}{2}}q}{2i\phi(p)\phi(q)}\left(\frac{2\pi X}{q}\right)^{k+1} X^{\frac{\upsilon}{2}}\Gamma(\upsilon+k+1) \sum_{\substack{d,r\geq 1\\q\nmid d}} \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_1\ odd}\chi_{1}(h_1)\bar{\chi}_{1}(r).\label{evenodd5.3} \end{align} By \eqref{prop}, we see that the last term of right-hand side of \eqref{evenodd5.3}, we have \begin{align} &\frac{1}{\phi(p)}\sum_{\substack{d,r\geq 1\\q\nmid d}} \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_1\ odd}\chi_{1}(h_1)\bar{\chi}_{1}(r) \notag \\ = &\frac{1}{\phi(p)}\sum_{ d,r\geq 1 } \frac{ d^k}{\left(\frac{16\pi^2}{a^2pq}\frac{dr}{x}+1\right)^{\upsilon+k+1} }\sum_{\chi_1\ odd}\chi_{1}(h_1)\bar{\chi}_{1}(r) -\frac{q^k}{\phi(p)}\sum_{ d,r\geq 1 } \frac{ d^k}{\left(\frac{16\pi^2}{a^2p}\frac{dr}{x}+1\right)^{\upsilon+k+1} } \sum_{\chi_1\ 
odd}\chi_{1}(h_1)\bar{\chi}_{1}(r) \notag \\ =&\frac{1}{2}\sum_{d=1}^\infty \sum_{m=0}^\infty \left\{\frac{ d^k}{\left(\frac{16\pi^2}{a^2q}\frac{(m+h_1/p)d}{x}+1\right)^{\upsilon+k+1} } - \frac{ d^k}{\left(\frac{16\pi^2}{a^2q}\frac{(m+1-h_1/p)d}{x}+1\right)^{\upsilon+k+1} } \right\}\notag \\ &-\frac{q^k}{2}\sum_{d=1}^\infty \sum_{m=0}^\infty \left\{\frac{ d^k}{\left(\frac{16\pi^2}{a^2}\frac{(m+h_1/p)d}{x}+1\right)^{\upsilon+k+1} } - \frac{ d^k}{\left(\frac{16\pi^2}{a^2}\frac{(m+1-h_1/p)d}{x}+1\right)^{\upsilon+k+1} } \right\}. \label{evenodd5.4}\end{align} Equating \eqref{evenodd5.2} with \eqref{evenodd5.3}, and using \eqref{evenodd5.4} we get the result. \fi
(Theorem \ref{even-odd1_based} $\Rightarrow$ Theorem \ref{even-odd}) Let $\theta=h_1/p$, $\psi=h_2/q$, and let $\chi_1$ be an even primitive character modulo $p$ and $\chi_2$ be an odd primitive character modulo $q$. Multiplying the identity in Theorem \ref{even-odd1_based} by $\bar{\chi_1}(h_1)\bar{\chi_2}(h_2)/\tau(\bar{\chi_1})\tau(\bar{\chi_2})$, and then summing on $h_1$ and $h_2$, $0<h_1<p$, $0<h_2<q$, one can prove that Theorem \ref{even-odd1_based} implies Theorem \ref{even-odd}.
(Theorem \ref{even-odd2_based} $\Rightarrow$ Theorem \ref{even-odd}) Here we consider $\chi_1$ to be an odd primitive character modulo $p$ and $\chi_2$ to be an even primitive character modulo $q$. Arguing as in the previous case, one can show this result. To avoid repetition, we skip the details of the proof. \end{proof}
\section{Proof of Cohen-type identities}\label{proof of cohen identities...}
\begin{proof}[Theorem \rm{\ref{oddcohen based}} and Theorem \rm{\ref{oddcohen}} ][] (Theorem \ref{oddcohen} $\Rightarrow$ Theorem \ref{oddcohen based}) The double series on the right-hand side of the identity in Theorem \ref{oddcohen based} converges absolutely and uniformly on any compact interval for $\theta\in (0,1)$. Therefore, it is sufficient to prove the theorem for $\theta=h/q$, where $q$ is prime and $0<h<q$. Now we multiply the identity \eqref{THM7} of Theorem \ref{oddcohen} by $ \chi(h) \tau(\bar{\chi}) /i\phi(q)$, then sum on odd primitive characters $\chi$ modulo $q$. The left-hand side of the identity \eqref{THM7} becomes
\begin{align}
\frac{8 \pi x^{\nu/2}}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{n=1}^\infty\sigma_{-\nu,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4\pi\sqrt{nx}) = &\frac{8 \pi x^{\nu/2}}{i\phi(q)}\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(4\pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \sum_{\chi \ odd } \chi(d) \chi(h) \tau(\bar{\chi}) \notag\\
=&8 \pi x^{\nu/2} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left(\frac{2\pi d h}{q}\right),\label{ZX1}
\end{align}
where we have used the identity \eqref{sin}. Now we examine the right-hand side of the identity \eqref{THM7}.
Simplifying the expression by using the functional equation \eqref{ll(s)}, we arrive at \begin{align} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \ 8 \pi x^{\nu/2} \sum_{n=1}^\infty\sigma_{-\nu,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4\pi\sqrt{nx})\notag\\ &=- \frac{\pi}{ q^{\nu-1}\sin(\frac{\pi\nu}{2})} \frac{1}{\phi(q)} \sum_{\chi \ odd } \chi(h) L(1-\nu, \bar{\chi}) + \frac{1}{ x\ q^{\nu}\cos(\frac{\pi\nu}{2})} \frac{1}{\phi(q)} \sum_{\chi \ odd } \chi(h) L(-\nu, \bar{\chi}) \notag\\ &+ \frac{q^{1-\nu} }{ \phi(q)}\sum_{\chi \ odd } \chi(h) \left\{ \frac{2\zeta(\nu+1)L(1,{\bar{\chi}})(qx)^{\nu}} {\cos(\frac{\pi\nu}{2}) } - \frac{2}{ \cos \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N} \zeta(2j)\ L(2j-\nu, \bar{\chi})(qx)^{2j-1} \right.\notag\\&\left.\ \ -\frac{2(qx)^{2N+1} }{ \cos\left(\frac{\pi \nu}{2}\right) }\sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi}}(n) \frac{ \left( n^{\nu+1-2N}-(qx)^{\nu+1-2N}\right)}{ n\ (n^2-(qx)^2)} \right\}. \label{X1} \end{align} Next, we observe using \eqref{Hurwitz} and \eqref{prop}, we get \begin{align} \frac{1}{\phi(q)} \sum_{\chi \ odd } \chi(h) L(s, \bar{\chi}) = \frac{1}{2q^{s}}\left(\zeta(s, h/q) - \zeta(s, 1-h/q)\right).\label{X2} \end{align} Lastly, we consider \begin{align} & \frac{1}{\phi(q)} \sum_{\chi \ odd } \chi(h) \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi}}(n) \frac{ n^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ n\ (n^2-(qx)^2)}\notag\\ &= \frac{1}{\phi(q)} \sum_{n=1}^{\infty} \frac{ n^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ n\ (n^2-(qx)^2)} \sum_{d/n} d^{-\nu} \sum_{\chi \ odd } \chi(h) \bar{\chi}\left(\frac{n}{d}\right) \notag\\ &=\frac{1}{2}\sum_{d=1}^{\infty} d^{-\nu} \left\{\sum_{\substack{ m=1 \\ m\equiv h(q)} }^\infty \frac{ (dm)^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ dm\ (d^2m^2-(qx)^2)}-\sum_{\substack{ m=1 \\ m\equiv - h(q)} }^\infty \frac{ (dm)^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ dm\ (d^2m^2-(qx)^2)} \right\}\notag\\ &=\frac{1}{2}\sum_{d=1}^{\infty} d^{-\nu} \sum_{ r=0}^\infty \left\{ \frac{ (d(rq+h))^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ d(rq+h)\ (d^2(rq+h)^2-(qx)^2)} - \frac{ (d(rq+q-h))^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ d(rq+q-h)\ (d^2(rq+q-h)^2-(qx)^2)} \right\} \notag\\ &=\frac{q^{\nu-2-2N}}{2}\sum_{d=1}^{\infty} d^{-\nu-1} \sum_{ r=0}^\infty \left\{ \frac{ (d(r+h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ (r+h/q)\ (d^2(r+h/q)^2-x^2)} - \frac{ (d(r+1-h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ (r+1-h/q)\ (d^2(r+1-h/q)^2-x^2)} \right\}. \label{X3} \end{align} Employing \eqref{X2}, \eqref{X3} in \eqref{X1}, we obtain \begin{align}\label{ZX2} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \ 8 \pi x^{\nu/2} \sum_{n=1}^\infty\sigma_{-\nu,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4\pi\sqrt{nx}) = \frac{1 }{\cos\left(\frac{\pi \nu}{2}\right)} \zeta(\nu+1) \left(\zeta(1,h/q) - \zeta(1, 1-h/q)\right)x^\nu \notag\\ & -\frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,\frac{h}{q}) - \zeta(1-\nu, 1-\frac{h}{q})\right) +\frac{1 }{ 2 x\cos\left(\frac{\pi \nu}{2}\right)} \left(\zeta(-\nu,\frac{h}{q}) - \zeta(-\nu, 1-\frac{h}{q})\right) \notag\\ & -\frac{1 }{ \cos\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j) \left(\zeta(2j-\nu,\frac{h}{q}) - \zeta(2j-\nu, 1-\frac{h}{q})\right)x^{2j-1} \notag\\ &-\frac{ x^{2N+1}}{\cos(\frac{\pi\nu}{2}) }\sum_{d=1}^{\infty} d^{-\nu-1} \sum_{ r=0}^\infty \left\{ \frac{ (d(r+h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ (r+h/q)\ (d^2(r+h/q)^2-x^2)} - \frac{ (d(r+1-h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ (r+1-h/q)\ (d^2(r+1-h/q)^2-x^2)} \right\}. \end{align} Equating \eqref{ZX1} and \eqref{ZX2}, we get the result. 
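For completeness, we spell out the expansion behind \eqref{X2}: for any Dirichlet character $\chi$ modulo $q$ and $\operatorname{Re}(s)>1$ (and then for all $s$ by analytic continuation),
\begin{align*}
L(s,\bar{\chi})=\sum_{r=1}^{q-1}\bar{\chi}(r)\sum_{m=0}^{\infty}\frac{1}{(mq+r)^{s}}=\frac{1}{q^{s}}\sum_{r=1}^{q-1}\bar{\chi}(r)\,\zeta\left(s,\frac{r}{q}\right);
\end{align*}
summing this against $\chi(h)$ over the odd characters modulo $q$ and applying orthogonality isolates the classes $r\equiv\pm h\ (\mathrm{mod}\ q)$ with opposite signs, which gives \eqref{X2}.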
(Theorem \ref{oddcohen based} $\Rightarrow$ Theorem \ref{oddcohen} ) Analogous to proof of Theorem \ref{M1}, one can easily prove Theorem \ref{oddcohen}. \end{proof} \iffalse \begin{proof}[Theorem \rm{\ref{oddcohen2 based}}][] First, we consider \begin{align} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{n=1}^\infty\Bar{\sigma}_{-\nu,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) = &\frac{1}{i\phi(q)}\sum_{n=1}^\infty n^{\frac{\upsilon}{2}}K_\upsilon(a\sqrt{nx}) \sum_{d/n}d^{-\nu} \sum_{\chi \ odd } \chi(n/d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{d|n}d^{-\nu}\sin \left(\frac{2\pi n h}{dq}\right),\label{4.1} \end{align} where we have used the identity \eqref{sin}. Now we multiply both sides of Theorem \eqref{baroddcohen} by $ \chi(h) \tau(\bar{\chi}) /i\phi(q)$, then sum on odd primitive character $\chi$ modulo $q$ and simplifying by making use of the functional equation \eqref{ll(s)}, we arrive at \begin{align}\label{zx3} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \ 8 \pi x^{\nu/2} \sum_{n=1}^\infty \Bar{\sigma}_{-\nu,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4\pi\sqrt{nx})\notag\\ &= \frac{4q}{(2\pi)^\nu \phi(q)} \Gamma(\nu) \zeta(\nu)\sum_{\chi \ odd } \chi(h) L(1, \bar{\chi}) + \frac{q^{\nu}}{\phi(q) \cos(\frac{\pi\nu}{2})} \sum_{\chi \ odd } \chi(h) L(\nu, \bar{\chi})x^{\nu-1}\delta_{(0,1)}^{(\nu)} \notag\\ &+ \frac{q }{ \phi(q)}\sum_{\chi \ odd } \chi(h)\left\{ \frac{\pi L(1+\nu,\Bar{\chi})}{ \sin\left(\frac{\pi \nu}{2}\right)}(qx)^{ \nu} +\frac{2}{ \cos\left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N-1} {\zeta(2j+1-\nu)L(2j+1 ,\Bar{\chi})(qx)^{2j}} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{ L(\nu,\Bar{\chi})}{ \cos \left(\frac{\pi \nu}{2}\right)} (qx)^{\nu-1} + \frac{2 }{ \cos\left(\frac{\pi \nu}{2}\right) }(qx)^{2N}\sum_{n=1}^{\infty} {\sigma}_{-\nu, \Bar{\chi}}(n) \left(\frac{n^{\nu-2N+1}-(qx)^{\nu-2N+1}}{n^2-(qx)^2} \right) \right\}. \end{align} Next, we consider \begin{align}\label{zx4} &\frac{1 }{ \phi(q)}\sum_{\chi \ odd } \chi(h)\sum_{n=1}^{\infty} {\sigma}_{-\nu, \Bar{\chi}}(n) \left(\frac{n^{\nu-2N+1}-(qx)^{\nu-2N+1}}{n^2-(qx)^2} \right)\notag\\ &= \frac{1}{\phi(q)} \sum_{n=1}^{\infty} \frac{ n^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ n^2-(qx)^2} \sum_{d/n} d^{-\nu} \sum_{\chi \ odd } \chi(h) \bar{\chi}\left(d\right) \notag\\ &=\frac{1}{2}\sum_{r=1}^{\infty} \sum_{\substack{ d=1 \\ d\equiv h(q)} }^\infty d^{-\nu} \frac{ (dr)^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ d^2r^2-(qx)^2} -\frac{1}{2}\sum_{r=1}^{\infty}\sum_{\substack{ d=1 \\ d\equiv - h(q)} }^\infty d^{-\nu} \frac{ (dr)^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ d^2r^2-(qx)^2} \notag\\ &=\frac{1}{2}\sum_{r=1}^{\infty} \sum_{ m=0}^\infty (mq+h)^{-\nu} \left\{ \frac{ (r(mq+h))^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ r^2(mq+h)^2-(qx)^2} - \frac{ (r(mq+q-h))^{\nu+1-2N}-(qx)^{\nu+1-2N} }{ r^2(mq+q-h)^2-(qx)^2} \right\} \notag\\ &=\frac{q^{-1-2N}}{2}\sum_{r=1}^{\infty} (m+h/q)^{-\nu} \sum_{ m=0}^\infty \left\{ \frac{ (r(m+h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+h/q)^2-x^2} - \frac{ (r(m+1-h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+1-h/q)^2-x^2} \right\}. 
\notag\\ \end{align} Employing \eqref{X2}, \eqref{zx4} in \eqref{zx3}, we obtain \begin{align}\label{zx5} & \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \ 8 \pi x^{\nu/2} \sum_{n=1}^\infty\sigma_{-\nu,\chi}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4\pi\sqrt{nx})= \frac{2}{(2\pi)^\nu } \Gamma(\nu) \zeta(\nu) \left(\zeta(1, h/q) - \zeta(1, 1-h/q)\right)\notag\\ &+ \frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} x^\nu \left(\zeta(1+\nu,\frac{h}{q}) - \zeta(1+\nu, 1-\frac{h}{q})\right) +\frac{x^{\nu-1} }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \left(\delta_{(0,1)}^{(\nu)} +1 \right) \left(\zeta(\nu,\frac{h}{q}) - \zeta(\nu, 1-\frac{h}{q})\right)\notag\\ & +\frac{1 }{ \cos\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j+1-\nu) \left(\zeta(2j+1 ,\frac{h}{q}) - \zeta(2j+1 , 1-\frac{h}{q})\right)x^{2j} \notag\\ & + \frac{x^{2N} }{ \cos\left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} (m+h/q)^{-\nu} \sum_{ m=0}^\infty \left\{ \frac{ (r(m+h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+h/q)^2-x^2} - \frac{ (r(m+1-h/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+1-h/q)^2-x^2} \right\}. \notag\\ \end{align} Equating \eqref{zx5} and \eqref{4.1}, we get the result. \end{proof} \begin{proof}[Theorem \rm{\ref{evencohen based}}][] Multiply the equation \eqref{even1.1} by $8\pi x^{\frac{\nu}{2}}$ on both sides, then substitute $k=-\nu,a=4\pi$, we obtain \begin{align} &8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left(\frac{ 2\pi dh }{q}\right)\notag\\ &=\frac{q^{1 -\nu }}{\phi(q)} 8 \pi (qx)^{\nu/2}\sum_{m=1}^{\infty} \sigma_{-\nu}(m)\ m^{\nu/2} K_{\nu}(4 \pi\sqrt{qmx}) - \frac{8 \pi x^{\nu/2}}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \notag\\& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{8 \pi x^{\nu/2}}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \sigma_{{-\nu},\chi}(n)n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx} ). \label{V1} \end{align} Now, we first evaluate the first two sums on the right-hand side of \eqref{V1}. 
By using Proposition \ref{Cohen-type}, we have \begin{align} &\frac{q^{1 -\nu }}{\phi(q)} 8 \pi (qx)^{\nu/2}\sum_{m=1}^{\infty} \sigma_{-\nu}(m)\ m^{\nu/2} K_{\nu}(4 \pi\sqrt{qmx}) - \frac{8 \pi x^{\nu/2}}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \notag\\ &= -(2\pi)^{1-\nu}\Gamma(\nu)\zeta(\nu)\frac{(q^{1-\nu}-1)}{\phi(q)}+2(2\pi)^{-1-\nu}\Gamma(\nu+1)\zeta(\nu+1)x^{-1}\frac{(q^{-\nu}-1)}{\phi(q)}-\frac{\pi}{\cos(\frac{\pi\nu}{2})}\zeta(\nu+1)x^\nu \notag\\ & \ \ +\ \frac{2}{\sin(\frac{\pi \nu}{2})} \left\{ \sum_{j=1}^\infty \zeta(2j) \zeta(2j-\nu)x^{2j-\nu}\frac{(q^{2j-\nu}-1)}{\phi(q)} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \ \frac{x^{2N+1}}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \left(q^{2N+2-\nu} \ \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{n^2-(qx)^2}- \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right) \right\}\notag\\ &= - \frac{\pi}{\cos(\frac{\pi\nu}{2})}\zeta(1-\nu)\frac{(q^{1-\nu}-1)}{\phi(q)} -\frac{1 }{\sin(\frac{\pi\nu}{2})}\zeta(-\nu) x^{-1}\frac{(q^{-\nu}-1)}{\phi(q)}-\frac{\pi}{\cos(\frac{\pi\nu}{2})}\zeta(\nu+1)x^\nu \notag\\ & \ \ +\ \frac{2}{\sin(\frac{\pi \nu}{2})} \left\{ \sum_{j=1}^\infty \zeta(2j) \zeta(2j-\nu)x^{2j-\nu}\frac{(q^{2j-\nu}-1)}{\phi(q)} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \ \frac{x^{2N+1}}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \left(q^{2N+2-\nu} \ \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{n^2-(qx)^2}- \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right) \right\}, \label{V2} \end{align} in the last step, we used the functional equation of the zeta function. Now, we examine the last sum on the right-hand side of \eqref{V1}. By Theorem \eqref{evencohen}, we have \begin{align} &\frac{8 \pi x^{\nu/2}}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \sigma_{{-\nu},\chi}(n)n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx} )\notag\\ &\ = -\frac{\pi}{q^{\nu-1}\cos(\frac{\pi \nu}{2})}\frac{1}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) L(1-\nu, \bar{\chi}) -\frac{1}{xq^{\nu}\sin(\frac{\pi \nu}{2})}\frac{1}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) L(-\nu, \bar{\chi})\notag\\ &\ +\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\ q^{1-\nu} \left\{ \sum_{j=1}^{N} \zeta(2j)\ \frac{ 1 }{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) L(2j-\nu, \bar{\chi})(qx)^{2j-1} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + (qx)^{2N+1} \frac{ 1 }{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi}}(n) \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \right\}. \label{V3} \end{align} We consider \begin{align} \sum_{\substack{\chi \neq \chi_0\\ \chi\ even}} \chi(h)L(s,\Bar{\chi})&= \sum_{\chi \ even}\chi(h)L(s,\Bar{\chi})-L(s, {\chi_0})\notag\\ &= \frac{\phi(q)}{2q^{s}} \left\{ \zeta(s,h/q)+\zeta(s,1-h/q) \right\} - \left( 1-\frac{1}{q^{s}} \right)\zeta(s). 
\label{V4} \end{align} Now, we examine the last expression in \eqref{V3}, and we obtain \begin{align} & \frac{ 1 }{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi}}(n) \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{ n^2-(qx)^2} \notag\\ &=\frac{ 1 }{\phi(q)}\sum_{n=1}^{\infty} \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \sum_{d/n} d^{-\nu}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \bar{\chi}\left(\frac{n}{d}\right) \notag\\ &=\frac{ 1 }{\phi(q)}\sum_{n=1}^{\infty} \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \sum_{d/n} d^{-\nu} \left\{ \sum_{\chi even} \chi(h) \bar{\chi}\left(\frac{n}{d}\right) -\chi_0\left(\frac{n}{d}\right) \right\} \notag\\ &=\frac{1}{2}\sum_{d=1}^{\infty} d^{-\nu} \sum_{\substack{ m=1 \\ m\equiv \pm h(q)} }^\infty \frac{ (dm)^{\nu-2N}-(qx)^{\nu-2N} }{ d^2m^2-(qx)^2} -\frac{ 1 }{\phi(q)} \sum_{n=1}^{\infty} \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \left( \sigma_{-\nu} (n)-\sigma_{-\nu} \left(\frac{n}{q}\right)\right) \notag\\ &=\frac{1}{2}\sum_{d=1}^{\infty} d^{-\nu} \sum_{ r=0}^\infty \left\{ \frac{ (d(rq+h))^{\nu-2N}-(qx)^{\nu-2N} }{ d^2(rq+h)^2-(qx)^2} + \frac{ (d(rq+q-h))^{\nu-2N}-(qx)^{\nu-2N} }{ d^2(rq+q-h)^2-(qx)^2} \right\} \notag\\ &-\frac{ 1 }{\phi(q)}\sum_{n=1}^{\infty} \sigma_{-\nu} (n) \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} +\frac{ 1 }{\phi(q)} \sum_{r=1}^{\infty} \sigma_{-\nu} \left(r\right) \frac{ (qr)^{\nu-2N}-(qx)^{\nu-2N} }{(qr)^2-(qx)^2} \notag\\ &=\frac{q^{\nu-2-2N}}{2}\sum_{d=1}^{\infty} d^{-\nu} \sum_{ r=0}^\infty \left( \frac{ (d(r+h/q))^{\nu-2N}-x^{\nu-2N} }{ d^2(r+h/q)^2-x^2} + \frac{ (d(r+1-h/q))^{\nu-2N}-x^{\nu-2N} }{ d^2(r+1-h/q)^2-x^2} \right) \notag\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{1}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \left( \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{n^2-(qx)^2}- q^{\nu-2N-2}\ \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right). \label{V5} \end{align} Employing \eqref{V4}, \eqref{V5} in \eqref{V3}, we get \begin{align} &\frac{8 \pi x^{\nu/2}}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \sigma_{{-\nu},\chi}(n)n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx} )\notag\\ & = -\frac{\pi}{2\cos\left(\frac{\pi \nu}{2} \right)} \left( \zeta(1-\nu, h/q )+ \zeta(1-\nu, 1- h/q ) -2\frac{(q^{1-\nu}-1)}{\phi(q)} \zeta(1-\nu) \right) \notag\\ & \ \ -\frac{1}{2x\sin\left(\frac{\pi \nu}{2} \right)} \left( \zeta(-\nu, h/q )+ \zeta(-\nu, 1- h/q ) -2\frac{(q^{-\nu}-1)}{\phi(q)} \zeta(-\nu) \right) \notag\\ &\ \ +\frac{1}{\sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j) \left( \zeta(2j-\nu, h/q )+ \zeta(2j-\nu, 1- h/q ) -2\frac{(q^{2j-\nu}-1)}{\phi(q)} \zeta(2j-\nu) \right)x^{2j-1} \notag\\ &\ \ +\frac{2}{\sin\left(\frac{\pi \nu}{2} \right)}x^{2N+1} \left\{\frac{1}{2}\sum_{d=1}^{\infty} d^{-\nu} \sum_{ r=0}^\infty \left( \frac{ (d(r+h/q))^{\nu-2N}-x^{\nu-2N} }{ d^2(r+h/q)^2-x^2} + \frac{ (d(r+1-h/q))^{\nu-2N}-x^{\nu-2N} }{ d^2(r+1-h/q)^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{1}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \left( q^{2+2N-\nu}\ \frac{n^{\nu-2N}-(q x)^{\nu-2N}}{n^2-(q x)^2}- \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right) \right \}. 
\notag\\ \label{V6} \end{align} Combining \eqref{V1}, \eqref{V2} and\eqref{V6} \begin{align*} &8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left(\frac{ 2\pi dh }{q}\right)= -\ \frac{\pi}{\cos(\frac{\pi\nu}{2})}\zeta(\nu+1)x^\nu \notag\\ &-\frac{\pi}{2\cos\left(\frac{\pi \nu}{2} \right)} \left( \zeta(1-\nu, h/q )+ \zeta(1-\nu, 1- h/q ) \right) -\frac{1}{2x\sin\left(\frac{\pi \nu}{2} \right)} \left( \zeta(-\nu, h/q )+ \zeta(-\nu, 1- h/q ) \right)\notag\\ &\ \ +\frac{1}{\sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j) \left( \zeta(2j-\nu, h/q )+ \zeta(2j-\nu, 1- h/q ) \right)x^{2j-1} \notag\\ &\ \ +\frac{1}{\sin\left(\frac{\pi \nu}{2} \right)}x^{2N+1} \sum_{d=1}^{\infty} d^{-\nu} \sum_{ r=0}^\infty \left( \frac{ (d(r+h/q))^{\nu-2N}-x^{\nu-2N} }{ d^2(r+h/q)^2-x^2} + \frac{ (d(r+1-h/q))^{\nu-2N}-x^{\nu-2N} }{ d^2(r+1-h/q)^2-x^2} \right). \notag\\ & \end{align*} \end{proof} \begin{proof}[Theorem \rm{\ref{evencohen2 based}}][] Multiply the equation \eqref{even2.1} by $8\pi x^{\frac{\nu}{2}}$ on both sides, then substitute $k=-\nu,a=4\pi$, we obtain \begin{align}\label{zx6} &8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left(\frac{ 2\pi nh }{dq}\right) =\frac{q }{\phi(q)}8 \pi (qx)^{\nu/2}\sum_{m=1}^{\infty} \sigma_{-\nu}(m)m^{\nu/2} K_{\nu}(4 \pi \sqrt{qmx}) \notag\\& \ \ \ \ \ \ \ \ \ - \frac{8 \pi x^{\nu/2}}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx}) + \frac{8 \pi x^{\nu/2}}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \bar{\sigma}_{{-\nu},\chi}(n)n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx} ). \end{align} We first evaluate the first two sums on the right-hand side of \eqref{zx6}. By using Proposition \ref{Cohen-type}, we have \begin{align}\label{zx7} &\frac{q }{\phi(q)}8 \pi (qx)^{\nu/2}\sum_{m=1}^{\infty} \sigma_{-\nu}(m)m^{\nu/2} K_{\nu}(4 \pi \sqrt{qmx})- \frac{8 \pi x^{\nu/2}}{ {\phi(q)}}\sum_{n=1}^{\infty}\sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx}) \notag\\ &= -\frac{ \Gamma(\nu) \zeta(\nu )}{(2\pi)^{\nu-1} } -\frac{\pi \zeta(\nu+1)}{\cos(\frac{\pi\nu}{2})} \frac{(q^{\nu+1}-1)}{\phi(q)}x^\nu + \frac{2}{\sin(\frac{\pi \nu}{2})} \left\{ \frac{\zeta(\nu)x^{\nu-1}}{2}\frac{(q^\nu -1)}{\phi(q)} +\sum_{j=1}^\infty \zeta(2j) \zeta(2j-\nu)x^{2j-1}\frac{(q^{2j}-1)}{\phi(q)} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \ \frac{q^{2N+2} \ x^{2N+1}}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \ \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{n^2-(qx)^2}- \frac{x^{2N+1}}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right\}.\notag\\ \end{align} Now, we examine the last sum on the right-hand side of \eqref{zx6}. 
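Before doing so, we record, for the reader's convenience and in the form in which it is used below, the orthogonality relation for even Dirichlet characters; here $q$ is prime, $(h,q)=1$, and $n$ is any integer (this is the standard orthogonality fact, not a new identity):
\begin{align*}
\sum_{\substack{\chi \bmod q \\ \chi\ even}} \chi(h)\bar{\chi}(n)
=\frac{1}{2}\sum_{\chi \bmod q}\left(1+\chi(-1)\right)\chi(h)\bar{\chi}(n)
=\begin{cases}
\dfrac{\phi(q)}{2}, & n\equiv \pm h \pmod{q},\\
0, & \text{otherwise}.
\end{cases}
\end{align*}
Subtracting the contribution of the principal character is what produces the congruence conditions $d\equiv \pm h \pmod{q}$ together with the $\sigma_{-\nu}$-correction terms in \eqref{zx9} below.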
By Theorem \eqref{barevencohen}, we have \begin{align}\label{z1x8} &\frac{8 \pi x^{\nu/2}}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \bar{\sigma}_{{-\nu},\chi}(n)n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx} ) \notag\\ = & \frac{q^\nu}{ \phi(q)\sin \left(\frac{\pi \nu}{2}\right)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) L(\nu, \bar{\chi}) x^{\nu-1} \ \delta_{(0,1)}^{(\nu)} +\frac{ q }{ \phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \left\{ - \frac{\pi L(1+\nu,\bar{\chi})}{ \cos \left(\frac{\pi \nu}{2}\right)}(qx)^{\nu}+ \frac{ L(\nu,\bar{\chi})}{ \sin \left(\frac{\pi \nu}{2}\right)} (qx)^{\nu-1} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ +\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^N {\zeta(2j-\nu)L(2j ,\bar{\chi})(qx)^{2j-1}} + \frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}(qx)^{2N+1}\sum_{n=1}^{\infty} {\sigma}_{-\nu, \bar{\chi}}(n) \left(\frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{n^2-(qx)^2}\right) \right\}. \end{align} Now, we examine the last expression in \eqref{z1x8}, and we obtain \begin{align}\label{zx9} &\frac{ 1 }{ \phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h)\sum_{n=1}^{\infty} {\sigma}_{-\nu, \bar{\chi}}(n) \left(\frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{n^2-(qx)^2}\right)\notag\\ &=\frac{ 1 }{\phi(q)}\sum_{n=1}^{\infty} \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \sum_{d/n} d^{-\nu}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \bar{\chi}\left( {d}\right) \notag\\ &=\frac{ 1 }{\phi(q)}\sum_{n=1}^{\infty} \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \sum_{d/n} d^{-\nu} \left\{ \sum_{\chi even} \chi(h) \bar{\chi}\left(d \right) -\chi_0\left(d \right) \right\} \notag\\ &=\frac{1}{2}\sum_{r=1}^{\infty} \sum_{\substack{ d=1 \\ d\equiv \pm h(q)} }^\infty d^{-\nu} \frac{ (dr)^{\nu-2N}-(qx)^{\nu-2N} }{ d^2r^2-(qx)^2} -\frac{ 1 }{\phi(q)} \sum_{n=1}^{\infty} \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} \left( \sigma_{-\nu} (n)-q^{-\nu}\sigma_{-\nu} \left(\frac{n}{q}\right)\right) \notag\\ &=\frac{1}{2}\sum_{r=1}^{\infty} \sum_{ m=0}^\infty (mq+h)^{-\nu} \left\{ \frac{ (r(mq+h))^{\nu-2N}-(qx)^{\nu-2N} }{ r^2(mq+h)^2-(qx)^2} + \frac{ (r(mq+q-h))^{\nu-2N}-(qx)^{\nu-2N} }{ r^2(mq+q-h)^2-(qx)^2} \right\} \notag\\ &-\frac{ 1 }{\phi(q)}\sum_{n=1}^{\infty} \sigma_{-\nu} (n) \frac{ n^{\nu-2N}-(qx)^{\nu-2N} }{ n^2-(qx)^2} +\frac{ q^{-\nu} }{\phi(q)} \sum_{r=1}^{\infty} \sigma_{-\nu} \left(r\right) \frac{ (qr)^{\nu-2N}-(qx)^{\nu-2N} }{(qr)^2-(qx)^2} \notag\\ &=\frac{q^{-2-2N}}{2}\sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h/q)^{-\nu} \left( \frac{ (r(m+h/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+h/q)^2-x^2} + \frac{ (r(m+1-h/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+1-h/q)^2-x^2} \right) \notag\\ & \ \ \ \ -\frac{1}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \left( \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{n^2-(qx)^2} \right)+ \frac{q^{-2N-2}}{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \left( \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right). 
\notag\\ \end{align} Employing \eqref{V4}, \eqref{zx9} in \eqref{z1x8}, we get \begin{align}\label{zx10} &\frac{8 \pi x^{\nu/2}}{\phi(q)}\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi}) \sum_{n=1}^{\infty} \bar{\sigma}_{{-\nu},\chi}(n)n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx} ) \notag\\ = &\frac{x^{\nu-1}}{2\sin \left(\frac{\pi \nu}{2} \right)} \left(\delta_{(0,1)}^{(\nu)} +1 \right)\left\{\left( \zeta(\nu, h/q )+ \zeta(\nu, 1- h/q ) \right)-\frac{2}{\phi(q)} (q^\nu-1) \zeta(\nu) \right\}\notag\\ &\ -\frac{\pi \ x^\nu}{2\cos \left(\frac{\pi \nu}{2} \right)} \left\{ \left( \zeta(1+\nu, h/q )+ \zeta(1+\nu, 1- h/q ) \right)-\frac{2}{\phi(q)} (q^{\nu+1}-1) \zeta(\nu+1) \right\}\notag\\ &\ \ +\frac{1}{\sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j-\nu) x^{2j-1} \left\{ \left( \zeta(2j, h/q )+ \zeta(2j , 1- h/q ) \right) - \frac{2}{\phi(q)} (q^ {2j}-1) \zeta(2j) \right\}\notag\\ &+\frac{ x^{2N+1} }{ \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h/q)^{-\nu} \left( \frac{ (r(m+h/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+h/q)^2-x^2} + \frac{ (r(m+1-h/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+1-h/q)^2-x^2} \right) \notag\\ - & \frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)} \left\{ \frac{q(qx)^{2N+1} }{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \frac{n^{\nu-2N}-(qx)^{\nu-2N}}{n^2-(qx)^2}-\frac{x^{2N+1} }{\phi(q)} \sum_{n=1}^\infty\sigma_{-\nu}(n) \frac{n^{\nu-2N}-x^{\nu-2N}}{n^2-x^2} \right\}. \notag\\ \end{align} Combining \eqref{zx6}, \eqref{zx7} and \eqref{zx10}, we get \begin{align*} &8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left(\frac{ 2\pi nh }{dq}\right)= -\frac{ \Gamma(\nu) \zeta(\nu )}{(2\pi)^{\nu-1} } \notag\\ &+\frac{x^{\nu-1}}{2\sin \left(\frac{\pi \nu}{2} \right)} \left(\delta_{(0,1)}^{(\nu)} +1 \right)\left\{ \zeta(\nu, h/q )+ \zeta(\nu, 1- h/q ) \right\} - \frac{x^{\nu-1}}{\phi(q) \sin \left(\frac{\pi \nu}{2} \right)} (q^\nu-1) \zeta(\nu) \delta_{(0,1)}^{(\nu)} \notag\\ & -\frac{\pi \ x^\nu}{2\cos \left(\frac{\pi \nu}{2} \right)} \left\{ \zeta(1+\nu, h/q )+ \zeta(1+\nu, 1- h/q ) \right\} +\frac{1}{\sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j-\nu) x^{2j-1} \left\{ \zeta(2j, h/q )+ \zeta(2j , 1- h/q ) \right\}\notag\\ &+\frac{ x^{2N+1} }{ \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h/q)^{-\nu} \left( \frac{ (r(m+h/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+h/q)^2-x^2} + \frac{ (r(m+1-h/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+1-h/q)^2-x^2} \right). \notag\\ \end{align*} \end{proof} \begin{proof}[Theorem \rm{\ref{cohen2 even-odd1based}}][] Plugging $k=-\nu,a=4\pi$ in the equation \eqref{5.1}, we arrive at \begin{align}\label{101} &\frac{8 \pi x^{\nu/2}}{\phi(p)\phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1 odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4 \pi \sqrt{nx}) \notag\\ &=-8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4 \pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi d h_1}{p}\right)\sin \left( \frac{2\pi n h_2}{dq}\right). \end{align} Now we multiply both sides of Theorem \ref{cohenoo} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$, then sum on odd primitive characters $\chi_1$ modulo $p$ and $\chi_2$ modulo $q$. 
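In simplifying the double character sums that arise from this averaging, we also use the odd-character counterpart of the orthogonality relation recorded above; again this is the standard fact, stated for a prime modulus $p$, $(h,p)=1$ and any integer $r$:
\begin{align*}
\sum_{\substack{\chi \bmod p \\ \chi\ odd}} \chi(h)\bar{\chi}(r)
=\frac{1}{2}\sum_{\chi \bmod p}\left(1-\chi(-1)\right)\chi(h)\bar{\chi}(r)
=\begin{cases}
\dfrac{\phi(p)}{2}, & r\equiv h \pmod{p},\\
-\dfrac{\phi(p)}{2}, & r\equiv -h \pmod{p},\\
0, & \text{otherwise}.
\end{cases}
\end{align*}
This is how the shifted sums over $m+h_1/p$ and $m+1-h_1/p$ (and likewise over $n+h_2/q$ and $n+1-h_2/q$), with alternating signs, appear in \eqref{1111} below.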
So, we observe \begin{align}\label{1112} &\frac{ 8 \pi x^{\nu/2}}{\phi(p)\phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1 odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^{\infty}\sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx})\notag\\ =&-\frac{2p^{1-\nu}q }{\phi(p)\phi(q)\sin \left(\frac{\pi \nu}{2}\right)} \sum_{\chi_1 odd}\chi_{1}(h_1) L(1-\nu, \bar{\chi_1}) \sum_{\chi_2 odd}\chi_{2}(h_1) L(1, \bar{\chi_2}) \notag\\ & -\frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\frac{ p^{1-\nu}q}{ \phi(p)\phi(q)} \sum_{\chi_2 odd}\chi_{2}(h_2) \sum_{\chi_1 odd}\chi_{1}(h_1) \left\{ -{L(\nu+1, \bar{\chi_2})L(1, \bar{\chi_1})(pqx)^{\nu}} \right.\notag\\&\left. \ + \sum_{j=1}^{N-1} L(2j+1, \bar{\chi_2})\ L(2j+1-\nu, \bar{\chi_1})(pqx)^{2j} + (pqx)^{2N} \sum_{n=1}^{\infty}\frac{{\sigma}_{-\nu, \bar{\chi_2}, \bar{\chi_1}}(n)}{n} \left( \frac{n^{\nu-2N+2}-(pqx)^{\nu-2N+2}}{n^2-(pqx)^2)} \right) \right\}. \end{align} Next, we consider \begin{align}\label{1111} &\frac{1}{\phi(p)\phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2) \sum_{\chi_1 odd}\chi_{1}(h_1) \sum_{n=1}^{\infty}\frac{{\sigma}_{-\nu, \bar{\chi_2}, \bar{\chi_1}}(n)}{n} \left( \frac{n^{\nu-2N+2}-(pqx)^{\nu-2N+2}}{n^2-(pqx)^2)} \right) \notag\\ &= \sum_{n=1}^{\infty}\sum_{d|n} d^{-\nu}\frac{1}{n} \left( \frac{n^{\nu-2N+2}-(pqx)^{\nu-2N+2}}{n^2-(pqx)^2)} \right) \frac{1}{ \phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2) \Bar{\chi}_{2}(d) \frac{1}{\phi(p) } \sum_{\chi_1 odd}\chi_{1}(h_1) \Bar{\chi}_{1}(n/d)\notag\\ &= \sum_{d,r\geq 1}^{\infty}\frac{d^{-\nu-1}}{r} \left( \frac{(dr)^{\nu-2N+2}-(pqx)^{\nu-2N+2}}{(dr)^2-(pqx)^2)} \right) \frac{1}{ \phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2) \Bar{\chi}_{2}(d) \frac{1}{\phi(p) } \sum_{\chi_1 odd}\chi_{1}(h_1) \Bar{\chi}_{1}(r)\notag\\ &=\frac{p^{\nu-2N-1}q^{-2N-1} }{4} \sum_{m,n\geq 0}^{\infty} \left\{ \frac{(n+h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+h_2/q)(m+h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{(n+1-h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+1-h_2/q)(m+h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+1-h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{(n+h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+h_2/q)(m+1-h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+h_2/q)(m+1-h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{(n+1-h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+1-h_2/q)(m+1-h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+1-h_2/q)(m+1-h_1/p))^2-x^2 } \right) \right\}. 
\end{align} Employing \eqref{X2} and \eqref{1111} in \eqref{1112}, we obtain \begin{align}\label{1113} &\frac{ 8 \pi x^{\nu/2} }{\phi(p)\phi(q)}\sum_{\chi_2 odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\chi_1 odd}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^{\infty}\sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx})\notag\\ &=-\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu, h_1/p) - \zeta(1-\nu, 1-h_1/p)\right) \left(\zeta(1, h_2/q) - \zeta(1, 1-h_2/q)\right)\notag\\ &+ \frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)}x^{\nu} \left(\zeta(1, h_1/p) - \zeta(1 , 1-h_1/p)\right) \left(\zeta(\nu+1, h_2/q) - \zeta(\nu+1, 1-h_2/q)\right)\notag\\ &-\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N x^{2j}\left(\zeta(2j+1-\nu, h_1/p) - \zeta(2j+1-\nu, 1-h_1/p)\right) \notag\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left(\zeta(2j+1, h_2/q) - \zeta(2j+1, 1-h_2/q)\right)\notag\\ &-\frac{ 1}{2 \sin \left(\frac{\pi \nu}{2}\right)}x^{2N}\sum_{m,n\geq 0}^{\infty} \left\{ \frac{(n+h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+h_2/q)(m+h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+1-h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+1-h_2/q)(m+h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+1-h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+h_2/q)(m+1-h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+h_2/q)(m+1-h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ \frac{(n+1-h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+1-h_2/q)(m+1-h_1/p))^{\nu-2N+2}-x^{\nu-2N+2}}{ ((n+1-h_2/q)(m+1-h_1/p))^2-x^2 } \right) \right\}. \end{align} Equating \eqref{101}and \eqref{1113}, we get the result. 
\end{proof} \begin{proof}[Theorem \rm{\ref{cohen2 even-odd2based}}][] Multiply the equation \eqref{big1} by $8\pi x^{\frac{\nu}{2}}$ on both sides, then substitute $k=-\nu,a=4\pi$, we obtain \begin{align}\label{big2} &\frac{8\pi x^{\frac{\nu}{2}}}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(a\sqrt{nx}) \notag\\ &= 8\pi x^{\frac{\nu}{2}}\sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^{-\nu} \cos \left( \frac{2\pi n h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right) \notag\\ +&\left\{ \frac{8\pi x^{\frac{\nu}{2}}}{\phi(p) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^{-\nu}\cos \left( \frac{2\pi nh_2 }{dq}\right) - \frac{p^{- {\nu}+1} 8\pi (px)^{\frac{\nu}{2}}}{\phi(p) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d/m }d^{-\nu} \cos \left( \frac{2\pi mh_2 }{dq}\right) \right\} \notag\\ +&\left\{\frac{8\pi x^{\frac{\nu}{2}}}{\phi(q) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right) - \frac{q\ 8\pi (qx)^{\frac{\nu}{2}}}{\phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d/m }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right) \right\} \notag\\ &+\left\{\frac{8\pi x^{\frac{\nu}{2}}}{\phi(p) \phi(q)} \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{ d/n }d^{-\nu} -\frac{p^{- {\nu}+1}8\pi (px)^{\frac{\nu}{2}}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d/m }d^{-\nu} \right.\notag\\&\left.\ \ \ \ \ - \frac{q\ 8\pi (qx)^{\frac{\nu}{2}}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d/m }d^{-\nu} + \frac{p^{- {\nu}+1}q \ 8\pi (pqx)^{\frac{\nu}{2}}}{\phi(p) \phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpqx})\sum_{ d/m }d^{-\nu} \right\}. 
\notag\\ \end{align} Using Theorem \ref{evencohen2 based} with $\theta=h_2/q$, we evaluate the second and third terms on the right-hand side of \eqref{big1} as follows: \begin{align}\label{small11} & \frac{8\pi x^{\frac{\nu}{2}}}{\phi(p) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^{-\nu}\cos \left( \frac{2\pi nh_2 }{dq}\right) - \frac{p^{- {\nu}+1} 8\pi (px)^{\frac{\nu}{2}}}{\phi(p) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d/m }d^{-\nu} \cos \left( \frac{2\pi mh_2 }{dq}\right)\notag\\ &= -\frac{ \Gamma(\nu) \zeta(\nu ) }{(2\pi)^{\nu-1} } \frac{(1-p^{-\nu+1}) }{\phi(p)} +\frac{\pi \ x^\nu}{2\cos \left(\frac{\pi \nu}{2} \right)} \left\{ \zeta(1+\nu, h_2/q )+ \zeta(1+\nu, 1- h_2/q) \right\} \notag\\ &+\frac{1}{\phi(p) \sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j-\nu) x^{2j-1} \left\{ \zeta(2j, h_2/q )+ \zeta(2j , 1- h_2/q ) \right\} ( 1-p^{2j-\nu} ) \notag\\ &+\frac{ x^{2N+1} }{ \phi(p) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h_2/q)^{-\nu} \left( \frac{ (r(m+h_2/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+h_2/q)^2-x^2} + \frac{ (r(m+1-h_2/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+1-h_2/q)^2-x^2} \right) \notag\\ &-\frac{ p^{2N+2-\nu} \ x^{2N+1} }{ \phi(p) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h_2/q)^{-\nu} \left( \frac{ (r(m+h_2/q))^{\nu-2N}-(px)^{\nu-2N} }{ r^2(m+h_2/q)^2-(px)^2} + \frac{ (r(m+1-h_2/q))^{\nu-2N}-(px)^{\nu-2N} }{ r^2(m+1-h_2/q)^2-(px)^2} \right) \notag\\ \end{align} Using Theorem \ref{evencohen based} with $\theta=h_1/p$, we evaluate the fourth and fifth terms on the right-hand side of \eqref{big1} as follows: \begin{align}\label{small12} &\frac{8\pi x^{\frac{\nu}{2}}}{\phi(q) } \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right) - \frac{q\ 8\pi (qx)^{\frac{\nu}{2}}}{\phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d/m }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right)\notag\\ &= -\frac{\pi }{\cos\left(\frac{\pi \nu}{2}\right)} \zeta(\nu+1) x^\nu \frac{ 1-q^{\nu+1}}{\phi(q)} + \frac{\pi }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,h_1/p) + \zeta(1-\nu, 1-h_1/p)\right) \notag\\ &+\frac{1 }{ \sin\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j) \left(\zeta(2j-\nu,h_1/p) + \zeta(2j-\nu, 1-h_1/p)\right)x^{2j-1}\frac{ (1-q^{2j} ) }{\phi(q)} \notag\\ &+\frac{1 }{ \phi(q) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{d=1}^{\infty}d^{-\nu } \sum_{ r=0}^{\infty}\left( \frac{ \left( d(r+h_1/p) \right)^{\nu-2N}- \left( x \right)^{\nu-2N} }{ d^2(r+h_1/p)^2-x^2 } - \frac{ \left( d(r+1-h_1/p) \right)^{\nu-2N}- \left( x \right)^{\nu-2N} }{ d^2(r+1-h_1/p)^2-x^2 } \right) \notag\\ &-\frac{q^{2N+2} }{ \phi(q) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{d=1}^{\infty}d^{-\nu } \sum_{ r=0}^{\infty}\left( \frac{ \left( d(r+h_1/p) \right)^{\nu-2N}- \left( qx \right)^{\nu-2N} }{ d^2(r+h_1/p)^2-(qx)^2 } - \frac{ \left( d(r+1-h_1/p) \right)^{\nu-2N}- \left( qx \right)^{\nu-2N} }{ d^2(r+1-h_1/p)^2-(qx)^2 } \right). 
\notag\\ \end{align} Using Proposition \ref{Cohen-type}, we evaluate the last four terms on the right-hand side of \eqref{big1} as follows: \begin{align}\label{small13} &\frac{8\pi x^{\frac{\nu}{2}}}{\phi(p) \phi(q)} \sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx}) \sum_{ d/n }d^{-\nu} -\frac{p^{- {\nu}+1}8\pi (px)^{\frac{\nu}{2}}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpx})\sum_{ d/m }d^{-\nu}\notag \\ & - \frac{q\ 8\pi (qx)^{\frac{\nu}{2}}}{\phi(p) \phi(q)} \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mqx})\sum_{ d/m }d^{-\nu} + \frac{p^{- {\nu}+1}q \ 8\pi (pqx)^{\frac{\nu}{2}}}{\phi(p) \phi(q) } \sum_{m=1}^\infty m^{\nu/2} K_{\nu}(a\sqrt{mpqx})\sum_{ d/m }d^{-\nu} \notag \\ &=-\frac{ \Gamma(\nu) \zeta(\nu)}{(2\pi)^{\nu-1} } \frac{( p^{1-\nu}-1) }{\phi(p)} + \frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N}x^{2j-1} \zeta(2j)\ \zeta(2j-\nu) \frac{(p^{2j-\nu}-1) (q^{2j }-1 ) }{\phi(p) \phi(p) }\notag \\ &-\pi\frac{\zeta(\nu+1) x^{\nu}} {\cos(\frac{\pi\nu}{2}) } \frac{ (q^{\nu+1}-1) }{\phi(q)} +\left\{ \frac{2 }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)}x^{2N+1}\sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-x^{\nu-2N}} { n^2-x^2 }\right) \right.\notag\\&\left.\ \ - \frac{2 p^{2N+2-\nu} \ x^{2N+1} }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-(px)^{\nu-2N}} { n^2-(px)^2 }\right) - \frac{2 q^{2N+2} \ x^{2N+1} }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-(qx)^{\nu-2N}} { n^2-(qx)^2 }\right) \right.\notag\\&\left.\ \ + \frac{2 p^{2N+2-\nu} q^{2N+2} }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-(pqx)^{\nu-2N}} { n^2-(pqx)^2 }\right) \right\}. 
\end{align} Substitute \eqref{small11}, \eqref{small12} and \eqref{small13} into the right hand side of \eqref{big2}, we deduce \begin{align}\label{mainp11} &\frac{8\pi x^{\frac{\nu}{2}}}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \notag\\ &= 8\pi x^{\frac{\nu}{2}}\sum_{n=1}^\infty n^{\nu/2} K_{\nu}(a\sqrt{nx})\sum_{ d/n }d^{-\nu} \cos \left( \frac{2\pi n h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right) \notag\\ & +\frac{\pi \ x^\nu}{2\cos \left(\frac{\pi \nu}{2} \right)} \left\{ \zeta(1+\nu, h_2/q )+ \zeta(1+\nu, 1- h_2/q) \right\} \notag\\ &+\frac{1}{\phi(p) \sin\left(\frac{\pi \nu}{2} \right)}\sum_{j=1}^N \zeta(2j-\nu) x^{2j-1} \left\{ \zeta(2j, h_2/q)+ \zeta(2j , 1- h_2/q ) \right\} ( 1-p^{2j-\nu} ) \notag\\ &+\frac{ x^{2N+1} }{ \phi(p) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h_2/q)^{-\nu} \left( \frac{ (r(m+h_2/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+h_2/q)^2-x^2} + \frac{ (r(m+1-h_2/q))^{\nu-2N}-x^{\nu-2N} }{ r^2(m+1-h_2/q)^2-x^2} \right) \notag\\ &-\frac{ p^{2N+2-\nu} \ x^{2N+1} }{ \phi(p) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h_2/q)^{-\nu} \left( \frac{ (r(m+h_2/q))^{\nu-2N}-(px)^{\nu-2N} }{ r^2(m+h_2/q)^2-(px)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ + \frac{ (r(m+1-h_2/q))^{\nu-2N}-(px)^{\nu-2N} }{ r^2(m+1-h_2/q)^2-(px)^2} \right) + \frac{\pi }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,h_1/p) + \zeta(1-\nu, 1-h_1/p)\right) \notag\\ &+\frac{1 }{ \sin\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N \zeta(2j) \left(\zeta(2j-\nu,h_1/p) + \zeta(2j-\nu, 1-h_1/p)\right)x^{2j-1}\frac{ (1-q^{2j} ) }{\phi(q)} \notag\\ &+\frac{1 }{ \phi(q) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{d=1}^{\infty}d^{-\nu } \sum_{ r=0}^{\infty}\left( \frac{ \left( d(r+h_1/p) \right)^{\nu-2N}- \left( x \right)^{\nu-2N} }{ d^2(r+h_1/p)^2-x^2 } - \frac{ \left( d(r+1-h_1/p) \right)^{\nu-2N}- \left( x \right)^{\nu-2N} }{ d^2(r+1-h_1/p)^2-x^2 } \right) \notag\\ &-\frac{q^{2N+2} }{ \phi(q) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{d=1}^{\infty}d^{-\nu } \sum_{ r=0}^{\infty}\left( \frac{ \left( d(r+h_1/p) \right)^{\nu-2N}- \left( qx \right)^{\nu-2N} }{ d^2(r+h_1/p)^2-(qx)^2 } - \frac{ \left( d(r+1-h_1/p) \right)^{\nu-2N}- \left( qx \right)^{\nu-2N} }{ d^2(r+1-h_1/p)^2-(qx)^2 } \right) \notag\\ & + \frac{2}{ \sin \left(\frac{\pi \nu}{2}\right)}\sum_{j=1}^{N}x^{2j-1} \zeta(2j)\ \zeta(2j-\nu) \frac{(p^{2j-\nu}-1) (q^{2j }-1 ) }{\phi(p) \phi(p) }\notag \\ & +\left\{ \frac{2 }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)}x^{2N+1}\sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-x^{\nu-2N}} { n^2-x^2 }\right) \right.\notag\\&\left.\ \ - \frac{2 p^{2N+2-\nu} \ x^{2N+1} }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-(px)^{\nu-2N}} { n^2-(px)^2 }\right) - \frac{2 q^{2N+2} \ x^{2N+1} }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)} \sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-(qx)^{\nu-2N}} { n^2-(qx)^2 }\right) \right.\notag\\&\left.\ \ + \frac{2 p^{2N+2-\nu} q^{2N+2} }{ \phi(p)\phi(q) \sin \left(\frac{\pi \nu}{2}\right)} x^{2N+1}\sum_{n=1}^{\infty}{\sigma}_{-\nu}(n) \left(\frac{n^{\nu-2N}-(pqx)^{\nu-2N}} { n^2-(pqx)^2 }\right) \right\}. 
\end{align} Now we multiply both sides of Theorem \ref{cohenee} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$, then sum on non-principal even primitive characters $\chi_1$ modulo $p$ and $\chi_2$ modulo $q$. So, we observe \begin{align}\label{mainp12} &\frac{8\pi x^{\frac{\nu}{2}}}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \notag\\ &=\frac{ 2 p^{1-\nu}q }{\phi(p)\phi(q) \sin\left(\frac{\pi \nu}{2}\right) } \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1) \left\{\sum_{j=1}^{N} L(2j,\bar{\chi_2})\ L(2j-\nu, \Bar{\chi_1})(pqx)^{2j-1} \right.\notag\\&\left. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +{(pqx)^{2N+1} }\sum_{n=1}^\infty{\sigma}_{-\nu, \bar{\chi_2},\Bar{\chi_1}}(n) { }\left( \frac{n^{\nu-2N}-(pqx)^{\nu-2N}}{n^2-(pqx)^2} \right) \right\}. \end{align} Next, we consider \begin{align}\label{small14} &\frac{1}{\phi(p)\phi(q)} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1) \sum_{n=1}^\infty{\sigma}_{-\nu, \bar{\chi_2},\Bar{\chi_1}}(n) { }\left( \frac{n^{\nu-2N}-(pqx)^{\nu-2N}}{n^2-(pqx)^2} \right) \notag\\ &=\frac{1}{\phi(p)\phi(q)} \sum_{n=1}^\infty \sum_{d|n} d^{-\nu} \left( \frac{n^{\nu-2N}-(pqx)^{\nu-2N}}{n^2-(pqx)^2} \right) \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2) \Bar{ \chi}_{2}(d) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1) \Bar{ \chi}_{1}(n/d) \notag\\ &=\frac{1}{\phi(p)\phi(q)} \sum_{d,r\geq 1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) \left\{ \sum_{ \chi_2 \ even }\chi_{2}(h_2) \Bar{ \chi}_{2}(d)-\Bar{ \chi}_{0}(d) \right\} \left\{ \sum_{ \chi_1 \ even }\chi_{1}(h_1) \Bar{ \chi}_{1}(n/d) - \Bar{ \chi}_{0}(r)\right\} \notag\\ &=\frac{1}{4}\sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }}\sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) -\frac{1}{2\phi(p)} \sum_{\substack{ r=1\\ p \nmid r}} \sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) \notag\\ &-\frac{1}{2\phi(q)} \sum_{\substack{ d=1\\ q \nmid d }}\sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) +\frac{1}{\phi(p)\phi(q)} \sum_{\substack{ d=1\\ q \nmid d }}\sum_{\substack{ r=1\\ p \nmid r }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) \notag\\ &=\frac{1}{4}\sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }}\sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right)\notag\\ & + \left\{-\frac{1}{2\phi(p)} \sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }} \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) +\frac{p^{\nu-2N-2}}{2\phi(p)} \sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }} \sum_{r=1}^\infty d^{-\nu} \frac{(dr)^{\nu-2N}-(qx)^{\nu-2N}}{(dr)^2-(qx)^2} \right\}\notag\\ &+ \left\{-\frac{1}{2\phi(q)} \sum_{d=1}^\infty \sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( 
\frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) +\frac{q^{-2N-2}}{2\phi(q)} \sum_{d=1}^\infty \sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(px)^{\nu-2N}}{(dr)^2-(px)^2} \right) \right\}\notag\\ &+ \left\{ \frac{1}{\phi(p)\phi(q)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N} (pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) -\frac{p^{\nu-2N-2}}{\phi(p)\phi(q)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(qx)^{\nu-2N}}{(dr)^2-(qx)^2} \right) \right.\notag\\&\left. -\frac{q^{-2N-2}}{\phi(p)\phi(q)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(px)^{\nu-2N}}{(dr)^2-(px)^2} \right) + \frac{p^{\nu-2N-2}q^{-2N-2}}{\phi(p)\phi(q)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-x^{\nu-2N}}{(dr)^2-x^2} \right) \right\}. \notag\\ \end{align} Employing \eqref{V4} and \eqref{small14} in \eqref{mainp12}, we obtain \begin{align}\label{mainp13} &\frac{8\pi x^{\frac{\nu}{2}}}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \notag\\ &=\frac{1}{2 \sin\left(\frac{\pi \nu}{2}\right) }\sum_{j=1}^N x^{2j-1} \left\{\zeta(2j,h_2/q)+\zeta(2j,1-h_2/q) -\frac{2}{\phi(q) } (q^{2j }-1) \zeta(2j ) \right\} \notag\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\{\zeta(2j-\nu, h_1/p) + \zeta(2j-\nu, 1-h_1/p) -\frac{2}{\phi(p) } (p^{2j-\nu}-1) \zeta(2j-\nu) \right\} \notag\\ &+\frac{p^{ 2N+2-\nu}q^{2N+2}}{2 \sin\left(\frac{\pi \nu}{2}\right) } x^{2N+1} \sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }}\sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right)\notag\\ &+ \left\{-\frac{p^{ 2N+2-\nu}q^{2N+2}}{\phi(p) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1} \sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }} \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{ q^{2N+2}}{\phi(p) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1} \sum_{\substack{ d=1\\ d \equiv \pm h_2(q) }} \sum_{r=1}^\infty d^{-\nu} \frac{(dr)^{\nu-2N}-(qx)^{\nu-2N}}{(dr)^2-(qx)^2} \right\}\notag\\ &+ \left\{-\frac{p^{ 2N+2-\nu}q^{2N+2}}{\phi(q) \sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1} \sum_{d=1}^\infty \sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{p^{ 2N+2-\nu}}{\phi(q)\sin\left(\frac{\pi \nu}{2}\right)} x^{2N+1} \sum_{d=1}^\infty \sum_{\substack{ r=1\\ r \equiv \pm h_1(p) }} d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(px)^{\nu-2N}}{(dr)^2-(px)^2} \right) \right\}\notag\\ &+ \frac{2p^{ 2N+2-\nu}q^{2N+2}\ x^{2N+1} }{\phi(p)\phi(q)\sin\left(\frac{\pi \nu}{2}\right)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N} (pqx)^{\nu-2N}}{(dr)^2-(pqx)^2} \right) \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{2q^{2N+2}\ x^{2N+1} }{\phi(p)\phi(q)\sin\left(\frac{\pi \nu}{2}\right)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( 
\frac{(dr)^{\nu-2N}-(qx)^{\nu-2N}}{(dr)^2-(qx)^2} \right) \notag \\ -&\frac{2p^{ 2N+2-\nu}\ x^{2N+1} }{\phi(p)\phi(q)\sin\left(\frac{\pi \nu}{2}\right)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-(px)^{\nu-2N}}{(dr)^2-(px)^2} \right) + \frac{2\ x^{2N+1} }{\phi(p)\phi(q)\sin\left(\frac{\pi \nu}{2}\right)} \sum_{d=1}^\infty \sum_{r=1}^\infty d^{-\nu} \left( \frac{(dr)^{\nu-2N}-x^{\nu-2N}}{(dr)^2-x^2} \right). \end{align} Equating \eqref{mainp11} and \eqref{mainp13}, we get the result. \end{proof}
\begin{proof}[Theorem \rm{\ref{cohen2 even-odd3based}}][] Multiplying both sides of equation \eqref{evenodd4.1} by $8\pi x^{\frac{\nu}{2}}$ and then substituting $k=-\nu,a=4\pi$, we obtain
\begin{align}\label{115} &\frac{8\pi x^{\frac{\nu}{2}}}{i\phi(p)\phi(q)}\sum_{\chi_2\ odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) \tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu,\chi_1,\chi_2}(n)n^{\frac{\nu}{2}}K_{\nu}(4 \pi \sqrt{nx})\notag\\ =&8\pi x^{\frac{\nu}{2}}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p} \right)\sin \left( \frac{2\pi nh_2}{dq}\right) + \frac{8\pi x^{\frac{\nu}{2}}}{\phi(p)} \sum_{n=1}^\infty n^{\frac{\nu}{2}}K_{\nu}(4\pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi nh_2}{dq}\right) \notag\\ &-\frac{p^{- \nu+1}}{\phi(p)} 8\pi (px)^{\frac{\nu}{2}}\sum_{m=1}^\infty m^{\frac{\nu}{2}}K_{\nu}(4\pi \sqrt{pmx}) \sum_{d|m}d^{-\nu} \sin \left( \frac{2\pi mh_2}{dq}\right). \end{align}
Applying Theorem \ref{oddcohen2 based} with $\theta= h_2/q$, and then the functional equation \eqref{1st_use} of the Riemann zeta function, to the last two terms on the right-hand side of \eqref{115}, we obtain
\begin{align}\label{116} & \frac{8\pi x^{\frac{\nu}{2}}}{\phi(p)} \sum_{n=1}^\infty n^{\frac{\nu}{2}}K_{\nu}(4\pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi nh_2}{dq}\right) -\frac{p^{ -{\nu} +1}8\pi (px)^{\frac{\nu}{2}}}{\phi(p)} \sum_{m=1}^\infty m^{\frac{\nu}{2}}K_{\nu}(4\pi \sqrt{pmx}) \sum_{d|m}d^{-\nu} \sin \left( \frac{2\pi mh_2}{dq}\right) \notag\\ &=-\frac{1}{\cos\left(\frac{\pi \nu}{2}\right) } \zeta(1-\nu) \left(\zeta(1, h_2/q) - \zeta(1, 1-h_2/q)\right)\frac{(p^{1-\nu}-1)}{\phi(p)}- \frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} x^\nu \left(\zeta(1+\nu, h_2/q) - \zeta(1+\nu, 1-h_2/q)\right)\notag\\ &-\frac{1 }{ \phi(p) \cos\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N (p^{2j+1-\nu}-1)\zeta(2j+1-\nu) \left(\zeta(2j+1 ,h_2/q) - \zeta(2j+1 , 1-h_2/q)\right)x^{2j} \notag\\ &+ \frac{x^{2N} }{ \phi(p)\cos\left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h_2/q)^{-\nu} \left\{ \frac{ (r(m+h_2/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+h_2/q)^2-x^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{ (r(m+1-h_2/q))^{\nu+1-2N}-x^{\nu+1-2N} }{ r^2(m+1-h_2/q)^2-x^2} \right\} \notag\\ &-\frac{p^{2N+1-\nu} \ x^{2N} }{ \phi(p)\cos\left(\frac{\pi \nu}{2}\right)} \sum_{r=1}^{\infty} \sum_{ m=0}^\infty (m+h_2/q)^{-\nu} \left\{ \frac{ (r(m+h_2/q))^{\nu+1-2N}-(px)^{\nu+1-2N} }{ r^2(m+h_2/q)^2-(px)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{ (r(m+1-h_2/q))^{\nu+1-2N}-(px)^{\nu+1-2N} }{ r^2(m+1-h_2/q)^2-(px)^2} \right\}.
\notag\\ \end{align} Now we multiply both sides of Theorem \ref{coheneo} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /i\phi(q)$, then sum on non-principal even primitive characters $\chi_1$ modulo $p$ and odd primitive characters $\chi_2$ modulo $q$. So, we obtain \begin{align}\label{117} &\frac{8\pi x^{\frac{\nu}{2}}}{i\phi(p)\phi(q)}\sum_{\chi_2\ odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) \tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4 \pi \sqrt{nx})\notag\\ &=\frac{ 2qp^{1-\nu} }{ \phi(p)\phi(q)\cos\left(\frac{\pi \nu}{2}\right) } \sum_{\chi_2\ odd} \chi_{2}(h_2) L(1, \bar{\chi_2}) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) L(1-\nu, \bar{\chi_1})\notag\\ &+\frac{ p^{1-\nu}q}{\phi(p)\phi(q)} \sum_{\chi_2\ odd}\chi_{2}(h_2) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) \left\{ \frac{2}{ \cos \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^{N-1} L(2j+1,\bar{\chi_2}) L(2j+1-\nu, \bar{\chi_1})(pqx)^{2j} \ \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ + \frac{2 }{ \cos\left(\frac{\pi \nu}{2}\right) }(pqx)^{2N}\sum_{n=1}^{\infty} {\sigma}_{-\nu, \bar{\chi_2},\bar{\chi_1}}(n) \left(\frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \right\}. \end{align} Next, we observe \begin{align}\label{118} &\frac{1}{\phi(p)\phi(q)} \sum_{\chi_2\ odd}\chi_{2}(h_2) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1)\sum_{n=1}^{\infty} {\sigma}_{-\nu, \bar{\chi_2},\bar{\chi_1}}(n) \left(\frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \notag\\ &=\sum_{n=1}^{\infty} \left(\frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \sum_{d|n}d^{-\nu} \frac{1}{\phi(q)} \sum_{\chi_2\ odd}\chi_{2}(h_2)\Bar{\chi_2} (d)\frac{1}{\phi(p)}\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1)\Bar{\chi_1} (n/d) \notag\\ &=\sum_{d,r\geq 1}^{\infty} \left(\frac{(dr)^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{(dr)^2-(pqx)^2} \right) d^{-\nu} \frac{1}{\phi(q)} \sum_{\chi_2\ odd}\chi_{2}(h_2)\Bar{\chi_2} (d)\frac{1}{\phi(p)}\left\{\sum_{ \chi_1 \ even}\chi_{1}(h_1)\Bar{\chi_1} (r)-\Bar{\chi_0} (r)\right\}\notag\\ &= \frac{p^{\nu-2N-1}q^{-2N-1}}{4}\sum_{m,n\geq 0}^\infty \left\{ {(n+h_2/q)^{-\nu} } \left( \frac{((n+h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ {(n+1-h_2/q)^{-\nu} } \left( \frac{((n+1-h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ {(n+h_2/q)^{-\nu} } \left( \frac{((n+h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+1-h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ {(n+1-h_2/q)^{-\nu} } \left( \frac{((n+1-h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+1-h_1/p))^2-x^2 } \right) \right\}\notag\\ & -\frac{1}{2\phi(p)} \sum_{\substack{ r=1\\ p \nmid r } }^{\infty} \sum_{\substack{ d=1\\ d\equiv \pm h_2(q)} }^{\infty} d^{-\nu} \frac{(dr)^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{(dr)^2-(pqx)^2}\notag\\ \end{align} Let us evaluate the last term of \eqref{118} \begin{align}\label{1181} & -\frac{1}{2\phi(p)} \sum_{\substack{ r=1\\ p \nmid r } }^{\infty} \sum_{\substack{ d=1\\ d\equiv \pm h_2(q)} }^{\infty} d^{-\nu} \frac{(dr)^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{(dr)^2-(pqx)^2}\notag\\ &=-\frac{q^{-2N-1}}{2\phi(p)} \left\{ \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} 
(m+h_2/q)^{-\nu}\frac{(r(m+h_2/q))^{\nu-2N+1}-(px)^{\nu-2N+1}}{(r(m+h_2/q))^2-(px)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+1-h_2/q)^{-\nu}\frac{(r(m+1-h_2/q))^{\nu-2N+1}-(px)^{\nu-2N+1}}{(r(m+1-h_2/q))^2-(px)^2} \right\}\notag\\ &+\frac{p^{\nu-2N-1}q^{-2N-1}}{2\phi(p)} \left\{ \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+h_2/q)^{-\nu}\frac{(r(m+h_2/q))^{\nu-2N+1}-(x)^{\nu-2N+1}}{(r(m+h_2/q))^2-(x)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+1-h_2/q)^{-\nu}\frac{(r(m+1-h_2/q))^{\nu-2N+1}-(x)^{\nu-2N+1}}{(r(m+1-h_2/q))^2-(x)^2} \right\}.\notag\\ \end{align} Employing \eqref{X2}, \eqref{V4},\eqref{118} and \eqref{1181} in \eqref{117}, we obtain \begin{align}\label{119} &\frac{8\pi x^{\frac{\nu}{2}}}{i\phi(p)\phi(q)}\sum_{\chi_2\ odd}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even}}\chi_{1}(h_1) \tau(\Bar{\chi_{1}}) \sum_{n=1}^\infty \sigma_{-\nu,\chi_1,\chi_2}(n)n^{\frac{\upsilon}{2}}K_\upsilon(4 \pi \sqrt{nx})\notag\\ =&\frac{ 1}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \left(\zeta(1, h_2/q) - \zeta(1, 1-h_2/q)\right) \left\{ \zeta(1-\nu,h_1/p)+\zeta(1-\nu,1-h_1/p)-\frac{2}{\phi(p) } (p^{1-\nu}-1) \zeta(1-\nu) \right\} \notag\\ &+\frac{ 1}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N x^{2j} \left( \zeta(2j+1,h_2/q)-\zeta(2j+1,1-h_2/q) \right) \left\{\zeta(2j+1-\nu, h_1/p) + \zeta(2j+1-\nu, 1-h_1/p) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{2}{\phi(p) } (p^{2j+1-\nu}-1) \zeta(2j+1-\nu) \right\} \notag\\ & +\frac{ x^{2N} }{ 2 \cos\left(\frac{\pi \nu}{2}\right)} \sum_{m,n\geq 0}^\infty \left\{ {(n+h_2/q)^{-\nu} } \left( \frac{((n+h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ {(n+1-h_2/q)^{-\nu} } \left( \frac{((n+1-h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ +\ {(n+h_2/q)^{-\nu} } \left( \frac{((n+h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+1-h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ -\ {(n+1-h_2/q)^{-\nu} } \left( \frac{((n+1-h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+1-h_1/p))^2-x^2 } \right) \right\}\notag\\ & -\frac{p^{2N+1-\nu} \ x^{2N} }{ \phi(p)\cos\left(\frac{\pi \nu}{2}\right)}\left\{ \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+h_2/q)^{-\nu}\frac{(r(m+h_2/q))^{\nu-2N+1}-(px)^{\nu-2N+1}}{(r(m+h_2/q))^2-(px)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+1-h_2/q)^{-\nu}\frac{(r(m+1-h_2/q))^{\nu-2N+1}-(px)^{\nu-2N+1}}{(r(m+1-h_2/q))^2-(px)^2} \right\}\notag\\ &+ \frac{x^{2N} }{ \phi(p)\cos\left(\frac{\pi \nu}{2}\right)}\left\{ \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+h_2/q)^{-\nu}\frac{(r(m+h_2/q))^{\nu-2N+1}-(x)^{\nu-2N+1}}{(r(m+h_2/q))^2-(x)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{r=1 }^{\infty} \sum_{ m=0 }^{\infty} (m+1-h_2/q)^{-\nu}\frac{(r(m+1-h_2/q))^{\nu-2N+1}-(x)^{\nu-2N+1}}{(r(m+1-h_2/q))^2-(x)^2} 
\right\}.\notag\\ \end{align} Combining \eqref{115}, \eqref{116} and \eqref{119}, we get the result. \end{proof}
\begin{proof}[Theorem \rm{\ref{cohen2 even-odd4based}}][] Multiplying both sides of equation \eqref{evenodd5.1} by $8\pi x^{\frac{\nu}{2}}$ and then substituting $k=-\nu,a=4\pi$, we obtain
\begin{align}\label{120} &\frac{ 8 \pi x^{\nu/2}}{i \phi(p)\phi(q)} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{ \chi_1\ odd }\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^{\infty}\sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx})\notag \\ =& 8 \pi x^{\nu/2}\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi dh_1 }{p} \right)\cos \left( \frac{2\pi nh_2}{dq}\right) + \frac{ 8 \pi x^{\nu/2}}{\phi(q)} \sum_{n=1}^\infty n^{\nu/2}K_{\nu}(4\pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi dh_1}{p}\right) \notag\\ &-\frac{q \ 8 \pi (qx)^{\nu/2}}{\phi(q)} \sum_{m=1}^\infty m^{\nu/2}K_{\nu}(4 \pi \sqrt{qmx}) \sum_{d|m}d^{-\nu} \sin \left( \frac{2\pi dh_1}{p}\right). \end{align}
Applying Theorem \ref{oddcohen based} with $\theta= h_1/p$ to the last two terms on the right-hand side of \eqref{120}, we obtain
\begin{align}\label{121} & \frac{ 8 \pi x^{\nu/2}}{\phi(q)} \sum_{n=1}^\infty n^{\nu/2}K_{\nu}(4\pi\sqrt{nx}) \sum_{d|n}d^{-\nu} \sin \left( \frac{2\pi dh_1}{p}\right) -\frac{q \ 8 \pi (qx)^{\nu/2}}{\phi(q)} \sum_{m=1}^\infty m^{\nu/2}K_{\nu}(4 \pi \sqrt{qmx}) \sum_{d|m}d^{-\nu} \sin \left( \frac{2\pi dh_1}{p}\right) \notag\\ =& - \frac{(q^{\nu+1}-1)} {\phi(q)\cos\left(\frac{\pi \nu}{2}\right)} \zeta(\nu+1) \left(\zeta(1,\theta) - \zeta(1, 1-\theta)\right)x^\nu+\frac{\pi }{ 2 \sin\left(\frac{\pi \nu}{2}\right)} \left(\zeta(1-\nu,\theta) - \zeta(1-\nu, 1-\theta)\right)\notag\\ +& \frac{1 }{ \phi(q) \cos\left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N {(q^{2j}-1)} \zeta(2j) \left(\zeta(2j-\nu,\theta) - \zeta(2j-\nu, 1-\theta)\right)x^{2j-1}\notag\\ - & \frac{x^{2N+1} }{ \phi(q) \cos\left(\frac{\pi \nu}{2}\right)} \sum_{d=1}^{\infty}d^{-\nu-1 } \sum_{ m=0}^{\infty}\left( \frac{ \left( d(m+\theta) \right)^{\nu+1-2N}- x ^{\nu+1-2N} }{ (m+\theta)\left(d^2(m+\theta)^2-x^2 \right)} - \frac{ \left( d(m+1-\theta) \right)^{\nu+1-2N}- x ^{\nu+1-2N} }{ (m+1-\theta)\left(d^2(m+1-\theta)^2-x^2 \right)} \right)\notag\\ +&\frac{q^{2N+2}x^{2N+1} }{ \phi(q) \cos\left(\frac{\pi \nu}{2}\right)} \sum_{d=1}^{\infty}d^{-\nu-1 } \sum_{ m=0}^{\infty}\left( \frac{ \left( d(m+\theta) \right)^{\nu+1-2N}- (qx) ^{\nu+1-2N} }{ (m+\theta)\left(d^2(m+\theta)^2-(qx)^2 \right)} - \frac{ \left( d(m+1-\theta) \right)^{\nu+1-2N}- (qx) ^{\nu+1-2N} }{ (m+1-\theta)\left(d^2(m+1-\theta)^2-(qx)^2 \right)} \right). \end{align}
Now we multiply both sides of Theorem \ref{cohenoe} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /i\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$, then sum on odd primitive characters $\chi_1$ modulo $p$ and non-principal even primitive characters $\chi_2$ modulo $q$.
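When the resulting sums of $L$-functions are expressed through Hurwitz zeta functions in \eqref{124} below, we use, alongside \eqref{V4}, its odd-character counterpart: for a prime modulus $p$, $(h,p)=1$ and $\mathrm{Re}(s)>1$ (and then by analytic continuation),
\begin{align*}
\sum_{\substack{\chi \bmod p \\ \chi\ odd}} \chi(h)L(s,\bar{\chi})
=\frac{\phi(p)}{2}\left\{ \sum_{\substack{n\geq 1 \\ n\equiv h\,(p)}}\frac{1}{n^{s}}-\sum_{\substack{n\geq 1 \\ n\equiv -h\,(p)}}\frac{1}{n^{s}}\right\}
=\frac{\phi(p)}{2p^{s}}\left\{ \zeta\left(s,\frac{h}{p}\right)-\zeta\left(s,1-\frac{h}{p}\right)\right\},
\end{align*}
and no principal-character correction is needed here since $\chi_0$ is even; this is how the differences $\zeta(\cdot,h_1/p)-\zeta(\cdot,1-h_1/p)$ in \eqref{124} arise (cf. \eqref{X2}).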
So, we obtain \begin{align}\label{122} &\frac{ 8 \pi x^{\nu/2}}{i \phi(p)\phi(q)} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{ \chi_1\ odd }\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^{\infty} \sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx})\notag\\ =&\frac{2 p^{1-\nu}q}{ \phi(p)\phi(q) \cos \left(\frac{\pi \nu}{2}\right)} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2) \sum_{\chi_1 odd}\chi_{1}(h_1) \left\{ {L(\nu+1,\bar{\chi_2})L(1,\bar{\chi_1})(pqx)^{\nu}} \right.\notag\\&\left.\ \ - \sum_{j=1}^{N} L(2j,\bar{\chi_2})\ L(2j-\nu, \bar{\chi_1})(pqx)^{2j-1} - (pqx)^{2N+1}\sum_{n=1}^{\infty}\frac{{\sigma}_{-\nu, \bar{\chi_2},\bar{\chi_1}}(n) }{n} \left( \frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \right\}. \end{align} Next, we observe \begin{align}\label{123} &\frac{1}{\phi(p)\phi(q)} \sum_{\chi_1\ odd}\chi_{1}(h_1) \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}} \chi_{2}(h_2)\sum_{n=1}^{\infty} \frac{{\sigma}_{-\nu, \bar{\chi_2},\bar{\chi_1}}(n)}{n} \left(\frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \notag\\ &=\sum_{n=1}^{\infty} \frac{1}{n}\left(\frac{n^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{n^2-(pqx)^2} \right) \sum_{d|n}d^{-\nu} \frac{1}{\phi(q)} \sum_{\chi_1\ odd} \chi_{1}(h_1)\Bar{\chi_1} (n/d) \frac{1}{\phi(p)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}} \chi_{2}(h_2)\Bar{\chi_2} (d) \notag\\ &=\sum_{d,r\geq 1}^{\infty} \left(\frac{(dr)^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{(dr)^2-(pqx)^2} \right) \frac{d^{-\nu-1}}{r} \frac{1}{\phi(p)} \sum_{\chi_1\ odd}\chi_{1}(h_1)\Bar{\chi_1} (r)\frac{1}{\phi(q)}\left\{\sum_{ \chi_1 \ even}\chi_{2}(h_2)\Bar{\chi_2} (d)-\Bar{\chi_0} (d)\right\}\notag\\ &= \frac{p^{ \nu-2N-2}\ q^{-2N-2}}{4}\sum_{m,n\geq 0}^{\infty} \left\{ \frac{(n+h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\ \frac{(n+1-h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+1-h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+1-h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+1-h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+1-h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+1-h_1/p))^2-x^2 } \right) \right\} \notag\\ &-\frac{1}{2\phi(q)} \sum_{\substack{ d=1\\ q \nmid d } }^{\infty} \sum_{\substack{ r=1\\ r\equiv \pm h_1(p)} }^{\infty} \frac{d^{-\nu-1}}{r} \frac{(dr)^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{(dr)^2-(pqx)^2}\notag\\ \end{align} Let us evaluate the last term of the right-hand side of \eqref{123} \begin{align}\label{1231} & -\frac{1}{2\phi(q)} \sum_{\substack{ d=1\\ q \nmid d } }^{\infty} \sum_{\substack{ r=1\\ r\equiv \pm h_1(p)} }^{\infty} \frac{d^{-\nu-1}}{r} \frac{(dr)^{\nu-2N+1}-(pqx)^{\nu-2N+1}}{(dr)^2-(pqx)^2}\notag\\ &=-\frac{p^{\nu-2N-2}}{2\phi(q)} \left\{ \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+h_1/p)} \frac{(d(m+h_1/p))^{\nu-2N+1}-(qx)^{\nu-2N+1}}{(d(m+h_1/p))^2-(qx)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+1-h_1/p)} \frac{(d(m+1-h_1/p))^{\nu-2N+1}-(qx)^{\nu-2N+1}}{(d(m+1-h_1/p))^2-(qx)^2} \right\}\notag\\ 
&+\frac{p^{\nu-2N-2}q^{-2N-2}}{2\phi(q)} \left\{ \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+h_1/p)} \frac{(d(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{(d(m+h_1/p))^2-x^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+1-h_1/p)} \frac{(d(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{(d(m+1-h_1/p))^2-x^2} \right\}.\notag\\ \end{align} Employing \eqref{X2}, \eqref{V4}, \eqref{123} in \eqref{122}, we obtain \begin{align}\label{124} &\frac{ 8 \pi x^{\nu/2}}{i \phi(p)\phi(q)} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even}}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{ \chi_1\ odd }\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{n=1}^{\infty} \sigma_{-\nu, {\chi_1}, {\chi_2}}(n) n^{\nu/2} K_{\nu}(4\pi\sqrt{nx})\notag\\ =&\frac{ x^{\nu}}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \left(\zeta(1, h_1/p) - \zeta(1, 1-h_1/p)\right) \left\{ \zeta(1+\nu,h_2/q)+\zeta(1+\nu,1-h_2/q)-\frac{2}{\phi(q) } (q^{1+\nu}-1) \zeta(1+\nu) \right\} \notag\\ &-\frac{ 1}{ 2 \cos \left(\frac{\pi \nu}{2}\right)} \sum_{j=1}^N x^{2j-1} \left(\zeta(2j-\nu, h_1/p) - \zeta(2j-\nu, 1-h_1/p)\right) \left\{ \zeta(2j,h_2/q)+\zeta(2j,1-h_2/q) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{2}{\phi(q) } (q^{2j}-1) \zeta(2j) \right\} \notag\\ & - \frac{{x^{2N+1}} }{ 2\cos \left(\frac{\pi \nu}{2}\right)} \sum_{m,n\geq 0}^{\infty} \left\{ \frac{(n+h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\ \frac{(n+1-h_2/q)^{-\nu-1} }{(m+h_1/p)}\left( \frac{((n+1-h_2/q)(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+h_2/q)(m+1-h_1/p))^2-x^2} \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\ \frac{(n+1-h_2/q)^{-\nu-1} }{(m+1-h_1/p)}\left( \frac{((n+1-h_2/q)(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{ ((n+1-h_2/q)(m+1-h_1/p))^2-x^2 } \right) \right\} \notag\\ & + \frac{q^{2N+2}\ }{\phi(q)\cos \left(\frac{\pi \nu}{2}\right)}{x^{2N+1}} \left\{ \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+h_1/p)} \frac{(d(m+h_1/p))^{\nu-2N+1}-(qx)^{\nu-2N+1}}{(d(m+h_1/p))^2-(qx)^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+1-h_1/p)} \frac{(d(m+1-h_1/p))^{\nu-2N+1}-(qx)^{\nu-2N+1}}{(d(m+1-h_1/p))^2-(qx)^2} \right\}\notag\\ &-\frac{ 1 }{\phi(q)\cos \left(\frac{\pi \nu}{2}\right)}{x^{2N+1}}\left\{ \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+h_1/p)} \frac{(d(m+h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{(d(m+h_1/p))^2-x^2} \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \sum_{d=1 }^{\infty} \sum_{ m=0 }^{\infty} \frac{d^{-\nu-1}}{(m+1-h_1/p)} \frac{(d(m+1-h_1/p))^{\nu-2N+1}-x^{\nu-2N+1}}{(d(m+1-h_1/p))^2-x^2} \right\}.\notag\\ \end{align} Combining \eqref{120}, \eqref{121} and \eqref{124}, we get the result. 
\end{proof} \fi \begin{remark} The remaining proofs of this section are similar to the proofs given in the previous section; we omit them to avoid repetition. \end{remark} \section{Proof of Voronoi summation formulas}\label{proof of voronoi...} In this section, we prove Theorem \ref{vor1.1}, Theorem \ref{voro2}, Theorem \ref{vor1.3}, Theorem \ref{vore2}, Theorem \ref{vor1.6} and Theorem \ref{voree}. The proofs of the other theorems in Section \ref{voronoi identities...} are similar, so we leave them to the reader.
\begin{proof}[Theorem \rm{\ref{vor1.1}}][] (Theorem \ref{voro2} $\Rightarrow$ Theorem \ref{vor1.1}) It is sufficient to prove the theorem for $\theta=h/q$, where $q$ is prime and $0<h<q$. Now we multiply both sides of identity \eqref{vv1} in Theorem \ref{voro2} by $ \chi(h) \tau(\bar{\chi}) /i\phi(q)$, then sum on odd primitive characters $\chi$ modulo $q$; the left-hand side of \eqref{vv1} then becomes
\begin{align}\label{vor01.1} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{\alpha<j<\beta} \sigma_{-\nu,\chi}(j) f(j)&= \frac{1}{i\phi(q)}\sum_{\alpha<j<\beta} \sum_{d| j}d^{-\nu} \sum_{\chi \ odd } \chi(d) \chi(h) \tau(\bar{\chi})f(j) \notag\\ &=\sum_{\alpha<j<\beta} \sum_{d|j}d^{-\nu} \sin \left(\frac{2\pi d h}{q}\right)f(j) , \end{align}
where we have used the identity \eqref{sin}. The right-hand side of \eqref{vv1} becomes
\begin{align}\label{vor01.2} &\frac{ 1 }{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) = \frac{ 1}{i\phi(q)}\sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) L(1+\nu,\chi) \int_\alpha^\beta {f(t) } \mathrm{d}t \notag\\ &- \frac{2 \pi }{\phi(q)q^{\frac{\nu}{2}}}\sum_{\chi \ odd } \chi(h) \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_\alpha^\beta {f(t) } t^{-\frac{\nu}{2}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) + Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} \mathrm{d}t. \end{align}
Next, using \eqref{ll(s)}, \eqref{Hurwitz}, \eqref{both} and \eqref{prop}, we have
\begin{align}\label{vor01.3} \frac{ 1}{i\phi(q)}\sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) L(1+\nu,\chi) =- {(2\pi)^{\nu}} \Gamma(-\nu) \sin\left(\frac{\pi \nu}{2}\right)\left\{ \zeta(-\nu,\frac{h}{q})- \zeta(-\nu,1-\frac{h}{q})\right\}. \end{align}
For $y \in \mathbb{R}$, we define \begin{align}\label{qq} Z_{\nu}\left(y\ \right)= \left( \frac{2}{\pi} K_{\nu}\left(y \right) + Y_{\nu}\left(y\right) \right) \sin \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(y \right) \cos \left(\frac{\pi \nu}{2}\right).
\end{align} Next, we consider \begin{align}\label{vor01.4} &\frac{1}{\phi(q)} \sum_{\chi \ odd } \chi(h) \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} Z_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right)\notag\\ &= \frac{1}{\phi(q)} \sum_{n=1}^{\infty} n^{\nu/2} Z_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \sum_{d|n} d^{-\nu} \sum_{\chi \ odd } \chi(h) \bar{\chi}\left(\frac{n}{d}\right) \notag\\ &=\frac{1}{2}\sum_{d=1}^{\infty} d^{-\nu} \left\{\sum_{\substack{ r=1 \\ r\equiv h(q)} }^\infty (dr)^{\nu/2} Z_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) -\sum_{\substack{ r=1 \\ r\equiv -h(q)} }^\infty (dr)^{\nu/2} Z_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) \right\}\notag\\ &=\frac{q^{\frac{\nu}{2}}}{2}\sum_{d=1}^{\infty} d^{-\frac{\nu}{2} } \sum_{ m=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^\frac{\nu}{2} Z_{\nu}\left(4\pi \sqrt {d\left(m+\frac{h}{q}\right)t} \ \right)- \left(m+1-\frac{h}{q}\right)^\frac{\nu}{2} Z_{\nu}\left(4\pi \sqrt {d\left(m+1-\frac{h}{q}\right)t} \ \right) \right\}. \notag\\ \end{align} Employing \eqref{vor01.3}, \eqref{qq} \eqref{vor01.4}, we deduce the expression for \eqref{vor01.2}. \iffalse \begin{align}\label{vor01.5} &\frac{ 1 }{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) =- {(2\pi)^{\nu}} \Gamma(-\nu) \sin\left(\frac{\pi \nu}{2}\right)\left\{ \zeta(-\nu,\frac{h}{q})- \zeta(-\nu,1-\frac{h}{q})\right\} \int_\alpha^\beta {f(t) } \mathrm{d}t \notag\\ &\ \ -2\int_\alpha^\beta {f(t) } t^{-\frac{\nu}{2}} \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} } \sum_{ r=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^\frac{\nu}{2} K_{\nu}\left(4\pi \sqrt {d\left(m+\frac{h}{q}\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \left(m+1-\frac{h}{q}\right)^\frac{\nu}{2} K_{\nu}\left(4\pi \sqrt {d\left(m+1-\frac{h}{q}\right)t} \ \right) \right\} \sin \left(\frac{\pi \nu}{2}\right)\mathrm{d}t \notag\\ &-\pi \int_\alpha^\beta {f(t) } t^{-\frac{\nu}{2}} \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} } \sum_{ r=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^\frac{\nu}{2} Y_{\nu}\left(4\pi \sqrt {d\left(m+\frac{h}{q}\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\left(m+1-\frac{h}{q}\right)^\frac{\nu}{2} Y_{\nu}\left(4\pi \sqrt {d\left(m+1-\frac{h}{q}\right)t} \ \right) \right\} \sin \left(\frac{\pi \nu}{2}\right) \mathrm{d}t \notag\\ & +\pi \int_\alpha^\beta {f(t) } t^{-\frac{\nu}{2}} \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} } \sum_{ r=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^\frac{\nu}{2} J_{\nu}\left(4\pi \sqrt {d\left(m+\frac{h}{q}\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\left(m+1-\frac{h}{q}\right)^\frac{\nu}{2} J_{\nu}\left(4\pi \sqrt {d\left(m+1-\frac{h}{q}\right)t} \ \right) \right\} \cos \left(\frac{\pi \nu}{2}\right)\mathrm{d}t . \end{align} Equating \eqref{vor01.1} and \eqref{vor01.5}, we get the result. \fi Then equating this expression with \eqref{vor01.1}, we get the result. (Theorem \ref{vor1.1} $\Rightarrow$ Theorem \ref{voro2}) Let $\theta=h/q$, and let $\chi$ be an odd primitive character modulo $q$. 
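In this direction we use the standard evaluation of the twisted Gauss sum attached to a primitive character, recorded here in the form needed below (cf. \eqref{gauss}): for $\chi$ primitive modulo $q$ and any integer $d$,
\begin{align*}
\sum_{h=1}^{q-1}\bar{\chi}(h)\, e^{2\pi i dh/q}=\chi(d)\,\tau(\bar{\chi}),
\end{align*}
both sides being zero when $q\mid d$.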
Multiplying the identity \eqref{vv} in Theorem \ref{vor1.1} by $\bar{\chi}(h)/\tau(\bar{\chi})$, and then summing on $h$, $ 0<h<q$, the left hand side of the identity \eqref{vv} becomes \begin{align}\label{P1} & \frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{\alpha<j <\beta} f(j) \sum_{d|j}d^{-\nu} \sin \left( 2\pi d h/q \right)\notag\\ &= \frac{1}{2 i \tau(\bar{\chi})} \sum_{\alpha<j <\beta} f(j) \sum_{d|j}d^{-\nu} \sum_{h=1}^{q-1}\bar{\chi}(h) \left( e^{2\pi i d h/q} - e^{-2\pi i d h/q}\right)\notag\\ &= \frac{1}{2 i } \sum_{\alpha<j <\beta} f(j) \sum_{d|j}d^{-\nu} \left( \chi(d)-\chi(-d)\right)\notag\\ &=i^{-1}\sum_{\alpha<j <\beta} { {\sigma}_{-\nu, \chi }(j)} f(j) , \end{align} where in the penultimate step, we used \eqref{gauss}. It remains to evaluate the right-hand side of the identity \eqref{vv}. To do this we first observe by \eqref{Hurwitz} and \eqref{both} \begin{align}\label{P2} \frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\left(\zeta(-\nu,h/q) - \zeta(-\nu, 1- h/q)\right) =-2 q^{-\nu-1}\tau({\chi})L(-\nu,\Bar{\chi}). \end{align} Next, we consider \begin{align}\label{P14} &\frac{1}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{d=1}^{\infty}d^{-\nu/2} \sum_{ m=0}^{\infty}\left\{ \left(m+h/q\right)^\frac{\nu}{2} Z_{\nu}\left(4\pi \sqrt {d\left(m+h/q\right)t} \ \right) - \left(m+1-h/q\right)^\frac{\nu}{2} Z_{\nu}\left(4\pi \sqrt {d\left(m+1-h/q\right)t} \ \right) \right\} \notag\\ &=\frac{q^{-\nu/2}}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{d=1}^{\infty}d^{-\nu/2} \sum_{\substack{r=1 \\ r \equiv h(q)}}^\infty r^{\nu/2} Z_\nu\left(4\pi \sqrt{\frac{drt}{q}}\ \right) -\frac{q^{-\nu/2}}{\tau(\bar{\chi})}\sum_{h=1}^{q-1}\bar{\chi}(h)\sum_{d=1}^{\infty}d^{-\nu/2}\sum_{\substack{r=1 \\ r \equiv -h(q)}}^\infty r^{\nu/2} Z_\nu\left(4\pi \sqrt{\frac{drt}{q}}\ \right)\notag\\ &=\frac{2q^{-\nu/2}}{\tau(\bar{\chi})} \sum_{d=1}^{\infty} \sum_{ r=1 }^\infty d^{-\nu/2} r^{\nu/2} \bar{\chi}(r) Z_\nu\left(4\pi \sqrt{\frac{drt}{q}}\ \right) \notag\\ &=-\frac{2\tau({\chi}) }{q^{1+\nu/2}} \sum_{ n=1 }^\infty \Bar{\sigma}_{-\nu,\Bar{\chi} }(n) n^{\nu/2} Z_\nu\left(4\pi \sqrt{\frac{nt}{q}}\ \right) . \end{align} Combining \eqref{P1}, \eqref{P2}, \eqref{P14} and \eqref{vv}, we get the result. \end{proof} \iffalse \begin{proof}[Theorem \rm{\ref{vor1.2}}][] Let us consider \begin{align}\label{ram1} \frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{\alpha<j<\beta} \frac{\Bar{\sigma}_{-\nu,\chi}(j)}{j} f(j)&= \frac{1}{i\phi(q)}\sum_{\alpha<j<\beta} \sum_{d|j}d^{-\nu} \sum_{\chi \ odd } \chi(n/d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\sum_{\alpha<j<\beta} \sum_{d|j}d^{-\nu} \sin \left(\frac{2\pi n h}{dq}\right)\frac{f(j)}{j} , \end{align} where we have used the identity \eqref{sin}. Now we multiply both sides of Theorem 4.3 of our earlier paper \cite{devika2023} by $ \chi(h) \tau(\bar{\chi}) /i\phi(q)$, then sum on odd primitive character $\chi$ modulo $q.$. 
Then the left hand side of \eqref{ram1} becomes \begin{align}\label{ramji2} & \frac{ 1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{\alpha<j <\beta} \frac{\bar{\sigma}_{-\nu, \chi }(j)}{j} f(j) = -\frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) L(1-\nu,\chi) \int_\alpha^\beta \frac{f(t) }{t^{\nu+1}} \mathrm{d}t \notag\\ & + \frac{2 \pi q^{\frac{\nu}{2}}}{\phi(q)} \sum_{\chi \ odd } \chi(h) \sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}-1} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \sin \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \cos \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} First we consider by \eqref{ll(s)}, \eqref{Hurwitz}, \eqref{both} and \eqref{X2} \begin{align}\label{ramji2prime} &\frac{1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) L(1-\nu,\chi) =\frac{ \Gamma(\nu)\sin{\left( \frac{\pi \nu}{2} \right)}}{(2\pi )^\nu }\{ \zeta(\nu,h/q)-\zeta(\nu,1-h/q)\}. \end{align} Next, we consider \begin{align}\label{ramji3} &\frac{1}{\phi(q)} \sum_{\chi \ odd } \chi(h) \sum_{n=1}^{\infty} {\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right)\notag\\ &= \frac{1}{\phi(q)} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \sum_{d|n} d^{-\nu} \sum_{\chi \ odd } \chi(h) \bar{\chi}\left(d \right) \notag\\ &=\frac{1}{2}\sum_{r=1}^{\infty} \left\{\sum_{\substack{ d=1 \\ d\equiv h(q)} }^\infty d^{-\nu}(dr)^{\nu/2} K_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) -\sum_{\substack{ d=1 \\ d\equiv -h(q)} }^\infty d^{-\nu}(dr)^{\nu/2} K_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) \right\}\notag\\ &=\frac{q^{-\frac{\nu}{2}}}{2}\sum_{r=1}^{\infty} r^{\frac{\nu}{2}} \sum_{ m=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^{-\frac{\nu}{2}} K_{\nu}\left(4\pi \sqrt {r\left(m+\frac{h}{q}\right)t} \ \right)- \left(m+1-\frac{h}{q}\right)^{-\frac{\nu}{2}} K_{\nu}\left(4\pi \sqrt {r\left(m+1-\frac{h}{q}\right)t} \ \right) \right\}. 
\end{align} Employing \eqref{ramji2prime}, \eqref{ramji3} in \eqref{ramji2}, we deduce \begin{align}\label{ramji4} & \frac{ 1}{i\phi(q)} \sum_{\chi \ odd } \chi(h) \tau(\bar{\chi}) \sum_{\alpha<j <\beta} \frac{\bar{\sigma}_{-\nu, \chi }(j)}{j} f(j) =- \frac{ \Gamma(\nu)\sin{\left( \frac{\pi \nu}{2} \right)}}{(2\pi )^\nu }\{ \zeta(\nu,h/q)-\zeta(\nu,1-h/q)\}\int_\alpha^\beta \frac{f(t) }{t^{\nu+1}} \mathrm{d}t \notag\\ \notag\\ &+2\int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}-1} \sum_{r=1}^{\infty} r^{\frac{\nu}{2}} \sum_{ m=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^{-\frac{\nu}{2}} K_{\nu}\left(4\pi \sqrt {r\left(m+\frac{h}{q}\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \left(m+1-\frac{h}{q}\right)^{-\frac{\nu}{2}} K_{\nu}\left(4\pi \sqrt {r\left(m+1-\frac{h}{q}\right)t} \ \right) \right\}\sin \left(\frac{\pi \nu}{2}\right) \notag\\ &-\pi\int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}-1} \sum_{r=1}^{\infty} r^{\frac{\nu}{2}} \sum_{ m=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^{-\frac{\nu}{2}} Y_{\nu}\left(4\pi \sqrt {r\left(m+\frac{h}{q}\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \left(m+1-\frac{h}{q}\right)^{-\frac{\nu}{2}} Y_{\nu}\left(4\pi \sqrt {r\left(m+1-\frac{h}{q}\right)t} \ \right) \right\}\sin \left(\frac{\pi \nu}{2}\right) \notag\\ &+\pi\int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}-1} \sum_{r=1}^{\infty} r^{\frac{\nu}{2}} \sum_{ m=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^{-\frac{\nu}{2}} J_{\nu}\left(4\pi \sqrt {r\left(m+\frac{h}{q}\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \left(m+1-\frac{h}{q}\right)^{-\frac{\nu}{2}} J_{\nu}\left(4\pi \sqrt {r\left(m+1-\frac{h}{q}\right)t} \ \right) \right\}\cos \left(\frac{\pi \nu}{2}\right) \end{align} Equating \eqref{ram1} and \eqref{ramji4}, we get the result. 
\end{proof} \fi \begin{proof}[Theorem \rm{\ref{vor1.3}} and Theorem \rm{\ref{vore2}}][] (Theorem \ref{vore2} and Proposition \ref{vorlemma1} $\Rightarrow$ Theorem \ref{vor1.3}) First, we consider the following \begin{align}\label{fan1} & \sum_{\alpha<j <\beta} \sum_{d|j} {d}^{-\nu} \cos\left(\frac{2 \pi d h }{q}\right)f(j) =\sum_{\alpha<j <\beta} f(j) \left(\sum_{\substack{d|j\\ q|d}} d^{-\nu} +\sum_{\substack{d|j\\ q\nmid d}}d^{-\nu} \cos \left(\frac{ 2\pi dh }{q}\right)\right)\notag\\ &=q^{-\nu}\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} f(qm)\sum_{d|m} d^{-\nu} +\sum_{\alpha<j <\beta} f(j)\sum_{\substack{d|j\\ q\nmid d}} \frac{d^{-\nu}}{\phi(q)}\sum_{\chi \ even } \chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=q^{-\nu}\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} f(qm)\sum_{d|m} d^{-\nu} -\sum_{\alpha<j <\beta} f(j) \sum_{\substack{d|j\\ q\nmid d}} \frac{d^{-\nu}}{\phi(q)}\chi_0(d) + \sum_{\alpha<j <\beta} f(j) \sum_{\substack{d|j\\ q\nmid d}} \frac{d^{-\nu}}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi\ even}}\chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=q^{-\nu}\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} f(qm)\sum_{d|m} d^{-\nu} -\sum_{\alpha<j <\beta} f(j) \frac{1}{\phi(q)} \left( \sum_{d|j}d^{-\nu} -\sum_{\substack{d|j\\ q| d}}d^{-\nu} \right) \notag\\ &\ \ \ \ +\frac{1}{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi})\sum_{\alpha<j <\beta} \sigma_{-\nu,\chi} (j) f(j) \notag\\ &=\frac{q^{1-\nu}}{\phi(q)} \sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} \sigma_{-\nu } (m)f(qm) -\frac{1}{\phi(q)} \sum_{\alpha<j <\beta} \sigma_{-\nu } (j) f(j) +\frac{1}{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi\ even}} \chi(h) \tau(\bar{\chi})\sum_{\alpha<j <\beta} \sigma_{-\nu,\chi} (j) f(j). \end{align} We will first estimate the first sum of the right-hand side of \eqref{fan1}. We will take $g(t)=f(qt)$, then $g$ is analytic function inside a closed contour strictly containing $[\alpha/q, \beta/q ]$. Now applying Proposition \ref{vorlemma1} with $f(x)=g(x)$, then simplifying, we obtain \begin{align}\label{fan2} & \frac{q^{1-\nu}}{\phi(q)} \sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} {\sigma}_{-\nu}(m)f(qm) = \frac{1}{\phi(q)} \int_{\alpha} ^{ \beta }f(t) \left\{ \zeta(1-\nu) \ t^{-\nu} + q^{-\nu} \zeta(\nu+1) \ \right\} dt \notag\\ +& \frac{2\pi q^{-\frac{\nu}{2}}}{\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) (t)^{-\frac{\nu}{2}} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{\frac{nt}{q}}) - Y_{\nu}(4\pi \sqrt{\frac{nt}{q}})\right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}(4\pi \sqrt{\frac{nt}{q}}) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} Similarly, one can get the second sum of the right-hand side of \eqref{fan1}. Now we focus on the last sum of the right-hand side of \eqref{fan1}. 
By \eqref{fann} of Theorem \eqref{vore2}, the last sum of the right hand side of \eqref{fan1} becomes \begin{align}\label{fan3} & \frac{1}{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi\ even}} \chi(h) \tau(\bar{\chi}) L(1+\nu,\chi) \int_\alpha^\beta {f(t) } \mathrm{d}t+ 2 \pi \frac{q^{-\frac{\nu}{2}} }{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi\ even}} \chi(h) \notag\\ & \times \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} \int_\alpha^\beta \frac{f(t) }{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} We define for $y \in \mathbb{R},$ \begin{align}\label{qw} W_{\nu}\left(y\ \right)= \left( \frac{2}{\pi} K_{\nu}\left(y \right) - Y_{\nu}\left(y\right) \right) \sin \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(y \right) \cos \left(\frac{\pi \nu}{2}\right). \end{align} Now we first observe \begin{align}\label{fan4} & \frac{1}{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi\ even}} \chi(h) \sum_{n=1}^{\infty}\bar{\sigma}_{-\nu, \bar{\chi }}(n) \ n^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \notag\\ &= \frac{1}{\phi(q) } \sum_{n=1}^{\infty} \sum_{d/n}d^{-\nu} n^{\nu/2}W_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \left(\sum_{ \chi \ even } \chi(h) \Bar{\chi}(n/d)-\chi_0(n/d)\right) \notag\\ &= \frac{1}{\phi(q) } \sum_{d,r=1}^{\infty}d^{-\nu/2} r^{\nu/2}W_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right)\sum_{ \chi \ even } \chi(h) \Bar{\chi}(r)- \frac{1}{\phi(q) } \sum_{n=1}^{\infty} n^{\nu/2}W_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \left(\sigma_{-\nu}(n)- \sigma_{-\nu}( n/q)\right) \notag\\ &= \frac{1}{2}\sum_{d=1}^\infty d^{-\nu/2} \sum_{\substack{ r=1 \\ r\equiv \pm h(q)} }^\infty r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) -\frac{1}{\phi(q) } \sum_{n=1}^{\infty}\sigma_{-\nu}(n) n^{\nu/2}W_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) \notag\\ & \ \ + \frac{q^{\nu/2}}{\phi(q) } \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2}W_{\nu}\left(4\pi \sqrt{ {nt} }\ \right)\notag\\ &= \frac{q^{\nu/2}}{2}\sum_{d=1}^\infty d^{-\nu/2} \sum_{m=0}^\infty \left\{ \left(m+\frac{h}{q}\right)^{\nu/2}W_{\nu}\left(4\pi \sqrt{d\left(m+\frac{h}{q}\right)t} \right) - \left(m+1-\frac{h}{q}\right)^{\nu/2}W_{\nu}\left(4\pi \sqrt{d\left(m+1-\frac{h}{q}\right)t} \right)\right\} \notag\\ & \ \ -\frac{1}{\phi(q) } \sum_{n=1}^{\infty}\sigma_{-\nu}(n) n^{\nu/2}W_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\ \right) + \frac{q^{\nu/2}}{\phi(q) } \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2}W_{\nu}\left(4\pi \sqrt{ {nt} }\ \right) \end{align} where in the penultimate step, we used \eqref{prop1} First we consider by \eqref{1st_use}, \eqref{ll(s)}, \eqref{Hurwitz}, \eqref{both} and \eqref{prop1} \begin{align}\label{fan5} \frac{ 1}{\phi(q)}\sum_{\chi \ even } \chi(h) \tau(\bar{\chi}) L(1+\nu,\chi) = {(2\pi)^{\nu}} \Gamma(-\nu) \cos \left(\frac{\pi \nu}{2}\right)\left\{ \zeta(-\nu,\frac{h}{q})- \zeta(-\nu,1-\frac{h}{q})\right\}+(1-\frac{1}{q^\nu})\frac{\zeta(1+\nu)}{\phi(q)}.\end{align} Inserting \eqref{fan4}, \eqref{fan5} into \eqref{fan3}, we obtain the last sum of the right-hand side of \eqref{fan1}. 
\iffalse \begin{align}\label{fan6} &{(2\pi)^{\nu}} \Gamma(-\nu) \cos \left(\frac{\pi \nu}{2}\right)\left\{ \zeta(-\nu,\frac{h}{q})- \zeta(-\nu,1-\frac{h}{q})\right\}\int_\alpha^\beta {f(t) } \mathrm{d}t+(1-\frac{1}{q^\nu})\frac{\zeta(1+\nu)}{\phi(q)}\int_\alpha^\beta {f(t) } \mathrm{d}t\notag\\ +2&\sum_{d=1}^{\infty} d^{-\frac{\nu}{2} }\int_\alpha^\beta \frac{f(t)}{t^{\frac{\nu}{2}} } \sum_{ m=0}^\infty \left\{ \left(m+h/q\right)^\frac{\nu}{2} K_{\nu}\left(4\pi \sqrt {d\left(m+ h/q\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \left(m+1- h/q\right)^\frac{\nu}{2} K_{\nu}\left(4\pi \sqrt {d\left(m+1- h/q\right)t} \ \right) \right\} \cos \left(\frac{\pi \nu}{2}\right) \mathrm{d}t \notag\\ -\pi& \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} }\int_\alpha^\beta \frac{f(t)}{t^{\frac{\nu}{2}} } \sum_{ m=0}^\infty \left\{ \left(m+ h/q\right)^\frac{\nu}{2} Y_{\nu}\left(4\pi \sqrt {d\left(m+ h/q\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left(m+1-\theta\right)^\frac{\nu}{2} Y_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t} \ \right) \right\} \cos \left(\frac{\pi \nu}{2}\right) \mathrm{d}t \notag\\ -\pi& \sum_{d=1}^{\infty} d^{-\frac{\nu}{2} }\int_\alpha^\beta \frac{f(t)}{t^{\frac{\nu}{2}} } \sum_{ m=0}^\infty \left\{ \left(m+\theta\right)^\frac{\nu}{2} J_{\nu}\left(4\pi \sqrt {d\left(m+\theta\right)t} \ \right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left(m+1-\theta\right)^\frac{\nu}{2} J_{\nu}\left(4\pi \sqrt {d\left(m+1-\theta\right)t} \ \right) \right\} \sin \left(\frac{\pi \nu}{2}\right) \mathrm{d}t \notag\\ +&\frac{2\pi}{\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}} } \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{nt}) - Y_{\nu}(4\pi \sqrt{nt})\right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}(4\pi \sqrt{nt}) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt\notag\\ -&\frac{2\pi q^{-\nu/2}}{\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta } \frac{f(t)}{t^{\frac{\nu}{2}} } \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} \fi Then using this expression in \eqref{fan1} with \eqref{fan2}, Proposition \ref{vorlemma1}, and \eqref{fan1}, we get the desired result. (Theorem \ref{vor1.3} $\Rightarrow$ Theorem \ref{vore2}) Let $\theta=h/q$, and $\chi$ be an even primitive non-principal character modulo $q$. Multiplying the identity \eqref{vor1234} in Theorem \ref{vor1.3} by $\bar{\chi}(h)/\tau(\bar{\chi})$, and then summing on $h$, $ 0<h<q$, one can show that Theorem \ref{vor1.3} imply Theorem \ref{vore2}. \end{proof} \begin{proof}[Theorem \rm{\ref{vor1.6}}][] (Theorem \ref{voree}, Theorem \ref{vor1.3}, Theorem \ref{vor1.4} and Proposition \ref{vorlemma1} $\Rightarrow$ Theorem \ref{vor1.6}) We multiply both sides of identity \eqref{fann2} in Theorem \ref{voree} by $ \chi_1(h_1) \tau(\bar{\chi_1}) /\phi(p)$ and $ \chi_2(h_2) \tau(\bar{\chi_2}) /\phi(q)$, then sum on non-principal primitive even character $\chi_1$ modulo $p$ and $\chi_2$ modulo $q$. We examine the left-hand side of \eqref{fann2}. 
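For the reader's convenience, we record here the standard character-sum evaluations of the kind invoked via \eqref{sin} in this section (stated under the assumption that the modulus is prime and coprime to the argument); for a prime $q$ and $(m,q)=1$,
\begin{align*}
\sum_{\chi \ odd } \chi(m) \tau(\bar{\chi}) &= \frac{\phi(q)}{2}\left( e^{2\pi i m/q}-e^{-2\pi i m/q}\right)= i\,\phi(q) \sin \left(\frac{2\pi m}{q}\right), \\
\sum_{\chi \ even } \chi(m) \tau(\bar{\chi}) &= \frac{\phi(q)}{2}\left( e^{2\pi i m/q}+e^{-2\pi i m/q}\right)= \phi(q) \cos \left(\frac{2\pi m}{q}\right),
\end{align*}
and similarly with $q$ replaced by $p$; these follow by writing $\tau(\bar{\chi})=\sum_{a=1}^{q-1}\bar{\chi}(a)e^{2\pi i a/q}$ and using the orthogonality of Dirichlet characters.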
Using \eqref{sin}, we have \begin{align}\label{Big1} &\frac{1}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\ \chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{\alpha<j <\beta} \sigma_{-\nu, {\chi_1}, {\chi_2}}(j) f(j) \notag\\ &=\frac{1}{\phi(p)\phi(q)} \sum_{\alpha<j <\beta} f(j) \sum_{d|j}d^{-\nu} \left\{ \sum_{\substack{\chi_2 \neq \chi_0\\\chi_2\ even}}\chi_{2}(h_2) { \chi_2}(j/d)\tau(\Bar{\chi_{2}}) \right\} \left \{ \sum_{\substack{\chi_1 \neq \chi_0\\\chi_1\ even}}\chi_{1}(h_1) { \chi_1}(d)\tau(\Bar{\chi_{1}}) \right \} \notag\\ &= \sum_{\alpha<j <\beta} f(j) \sum_{d|j}d^{-\nu} \left\{ \frac{1}{\phi(q) }\sum_{ \chi_2\ even}\chi_{2}(h_2) { \chi_2}(j/d)\tau(\Bar{\chi_{2}})+ \frac{\chi_0(j/d)}{\phi(q)} \right\} \notag\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\left \{\frac{1}{\phi(p) } \sum_{ \chi_1\ even}\chi_{1}(h_1) { \chi_1}(d)\tau(\Bar{\chi_{1}}) + \frac{\chi_0(d)}{\phi(p)} \right \} \notag\\ &= \sum_{\alpha<j <\beta} f(j)\sum_{\substack{d|j\\ (p,d)=(q,j/d)=1}}d^{-\nu} \left\{ \cos \left( \frac{2\pi j h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right)+\frac{1}{\phi(p) } \cos \left( \frac{2\pi jh_2 }{dq}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{\phi(q) } \cos \left( \frac{2\pi dh_1 }{p}\right) +\frac{1}{\phi(p) \phi(q) } \right\} \notag\\ &= \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} \left\{ \cos \left( \frac{2\pi j h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right)+\frac{1}{\phi(p) } \cos \left( \frac{2\pi jh_2 }{dq}\right)+\frac{1}{\phi(q) } \cos \left( \frac{2\pi dh_1 }{p}\right) \right.\notag\\&\left.\ \ \ \ \ +\frac{1}{\phi(p) \phi(q) } \right\} -\frac{p}{\phi(p) } \sum_{\alpha<j <\beta} f(j)\sum_{\substack{d|j\\ p|d}}d^{-\nu} \left\{ \cos \left( \frac{2\pi jh_2 }{dq}\right)+\frac{1}{\phi(q) } \right\} \notag\\ & \ \ \ \ \ \ -\frac{q}{\phi(q) } \sum_{\alpha<j <\beta} f(j) \sum_{\substack{d|j\\ q| \frac{j}{d}}}d^{-\nu}\left\{ \cos \left( \frac{2\pi dh_1 }{p}\right)+\frac{1}{\phi(p) } \right\} +\frac{pq}{\phi(p) \phi(q)} \sum_{\alpha<j <\beta} f(j) \sum_{\substack{d|j\\ p|d ,\ q| \frac{j}{d}}}d^{-\nu} \notag\\ &= \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} \cos \left( \frac{2\pi j h_2}{dq}\right) \cos \left( \frac{2\pi dh_1 }{p}\right) \notag\\ &+\left\{ \frac{1}{\phi(p) } \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} \cos \left( \frac{2\pi jh_2 }{dq}\right)- \frac{p^{ 1-\nu}}{\phi(p) } \sum_{\frac{\alpha}{p}<n <\frac{\beta}{p} } f(pn)\sum_{ d|n }d^{-\nu} \cos \left( \frac{2\pi nh_2 }{dq}\right) \right\} \notag\\ &+\left\{\frac{1}{\phi(q) } \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right) - \frac{q }{\phi(q) } \sum_{\frac{\alpha}{p}<m <\frac{\beta}{q} } f(qm)\sum_{ d|m }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right) \right\} \notag\\ &+\left\{\frac{1}{\phi(p) \phi(q)} \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} -\frac{p^{ 1-\nu}}{\phi(p) \phi(q)} \sum_{\frac{\alpha}{p}<n <\frac{\beta}{p} } f(pn)\sum_{ d|n }d^{-\nu} \right.\notag\\&\left.\ \ \ \ \ - \frac{q }{\phi(p) \phi(q)} \sum_{\frac{\alpha}{q}<m <\frac{\beta}{q} } f(qm)\sum_{ d|m }d^{-\nu} + \frac{p^{ 1-\nu}q }{\phi(p) \phi(q) } \sum_{\frac{\alpha}{pq}<n <\frac{\beta}{pq} } f(pqn)\sum_{ d|n }d^{-\nu} \right\}. 
\end{align} Using Theorem \ref{vor1.4} with $\theta=h_2/q$, we evaluate the second and third terms on the right-hand side of \eqref{Big1} as follows: \begin{align}\label{Small1} & \frac{1}{\phi(p) } \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} \cos \left( \frac{2\pi jh_2 }{dq}\right)- \frac{p^{ 1-\nu}}{\phi(p) } \sum_{\frac{\alpha}{p}<n <\frac{\beta}{p} } f(pn)\sum_{ d|n }d^{-\nu} \cos \left( \frac{2\pi nh_2 }{dq}\right) \notag\\ =& \frac{\pi q^{\nu/2}}{\phi(p)}\sum_{r=1}^{\infty} r^{\nu/2} \sum_{\substack{d=1 \\ d\equiv \pm h_2(q)} }^{\infty} d^{-\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt \notag\\ -& \frac{\pi p^{-\nu/2}q^{\nu/2}}{\phi(p)}\sum_{r=1}^{\infty} r^{\nu/2} \sum_{\substack{d=1 \\ d\equiv \pm h_2(q)} }^{\infty} d^{-\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} Using Theorem \ref{vor1.3} with $\theta=h_1/p$, we evaluate the fourth and fifth terms on the right-hand side of \eqref{Big1} as follows: \begin{align}\label{Small2} &\frac{1}{\phi(q) } \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right) - \frac{q }{\phi(q) } \sum_{\frac{\alpha}{p}<m <\frac{\beta}{q} } f(qm)\sum_{ d|m }d^{-\nu} \cos \left( \frac{2\pi dh_1 }{p}\right)\notag\\ =& \frac{\pi p^{-\nu/2}}{\phi(q)}\sum_{d=1}^{\infty} d^{-\nu/2} \sum_{\substack{r=1 \\ r\equiv \pm h_1(p)} }^{\infty} r^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{drt}{p}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{drt}{p}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{drt}{p}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt \notag\\ -& \frac{\pi p^{-\nu/2}q^{\nu/2}}{\phi(q)}\sum_{d=1}^{\infty} d^{-\nu/2} \sum_{\substack{r=1 \\ r\equiv \pm h_1(p)} }^{\infty} r^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. 
\end{align} Using Proposition \ref{vorlemma1}, we evaluate the last four terms on the right-hand side of \eqref{Big1} as follows: \begin{align}\label{Small3} &\frac{1}{\phi(p) \phi(q)} \sum_{\alpha<j <\beta} f(j)\sum_{ d|j }d^{-\nu} -\frac{p^{ 1-\nu}}{\phi(p) \phi(q)} \sum_{\frac{\alpha}{p}<n <\frac{\beta}{p} } f(pn)\sum_{ d|n }d^{-\nu} \notag\\ - &\frac{q }{\phi(p) \phi(q)} \sum_{\frac{\alpha}{q}<m <\frac{\beta}{q} } f(qm)\sum_{ d|m }d^{-\nu} + \frac{p^{ 1-\nu}q }{\phi(p) \phi(q) } \sum_{\frac{\alpha}{pq}<n <\frac{\beta}{pq} } f(pqn)\sum_{ d|n }d^{-\nu} \notag\\ =& \frac{2\pi \ }{\phi(p)\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{nt}) - Y_{\nu}(4\pi \sqrt{nt})\right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}(4\pi \sqrt{nt}) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt\notag\\ &-\frac{2\pi \ p^{-\nu/2} }{\phi(p)\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{p}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{p}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{nt}{p}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt\notag\\ &-\frac{2\pi \ q^{\nu/2} }{\phi(p)\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{nt}{q}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt\notag\\ &+\frac{2\pi \ p^{-\nu/2} q^{\nu/2} }{\phi(p)\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\right)\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - J_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} Substitute \eqref{Small1}, \eqref{Small2} and \eqref{Small3} into the right hand side of \eqref{Big1}, we deduce the left-hand side of \eqref{fann2}. 
Next, from the right-hand side of \eqref{fann2}, we have \begin{align}\label{Mainp2} & \frac{1}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\tau(\Bar{\chi_{2}})\sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1)\tau(\Bar{\chi_{1}}) \sum_{\alpha<j <\beta} {\sigma}_{-\nu, \chi_1,\chi_2}(j) f(j)\notag\\ &=\frac{2\pi p^{-\nu/2 }q^{\nu/2}}{\phi(p)\phi(q)}\sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1) \sum_{n=1}^{\infty}\sigma_{-\nu, \bar{\chi_2},\bar{\chi_1}}(n) \ n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) t^{-\frac{\nu}{2}} \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt\notag\\ &=\frac{2\pi p^{-\nu/2 }q^{\nu/2}}{\phi(p)\phi(q)}\int_{\alpha} ^{ \beta }\frac{f(t)}{t^{\frac{\nu}{2}}} \sum_{n=1}^\infty n^{\nu/2} \sum_{d/n}^\infty d^{-\nu} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\bar{\chi_2}(d) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1) \bar{\chi_1}(n/d) \notag\\ & \times \left\{ \left( \frac{2}{\pi} K_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) - Y_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \right) \cos \left(\frac{\pi \nu}{2}\right) - J_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt. \end{align} We first observe \begin{align}\label{Mai1} &\frac{1}{\phi(p)\phi(q)} \sum_{n=1}^\infty n^{\nu/2} \sum_{d/n}^\infty d^{-\nu} \sum_{\substack{\chi_2\neq \chi_0\\\chi_2 \ even }}\chi_{2}(h_2)\bar{\chi_2}(d) \sum_{\substack{\chi_1\neq \chi_0\\\chi_1 \ even }}\chi_{1}(h_1) \bar{\chi_1}(n/d) W_{\nu}\left(4\pi \sqrt{\frac{nt}{pq}}\ \right) \notag\\ &=\frac{1}{\phi(p)\phi(q)}\sum_{d,r\geq 1}^\infty {d^{-\nu/2}} r^{\nu/2} \left\{\sum_{\chi_2\ even}\chi_{2}(h_2)\Bar{ \chi_2}(d)-\chi_0(d)\right\} \left\{\sum_{\chi_1\ even}\chi_{1}(h_1) \Bar{ \chi_1}(r)-\chi_0(r) \right\} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) \notag\\ &= \frac{1 }{4}\sum_{\substack{d=1\\ d \equiv \pm h_2(q)}}^\infty \sum_{\substack{r=1\\ r \equiv \pm h_1(p)}}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) -\frac{1 }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{\substack{r=1\\ p \nmid r}}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) \notag\\ & - \frac{1 }{2\phi(q)}\sum_{\substack{d=1\\ q\nmid d}}^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) + \frac{1}{\phi(p)\phi(q)}\sum_{\substack{d=1\\ q\nmid d}}^\infty \sum_{\substack{r=1\\ p \nmid r}}^\infty{d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) \notag\\ &= \frac{1 }{4}\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) \notag\\ & +\left\{ -\frac{1 }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{ r=1 }^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) + \frac{p^{\nu/2} }{2\phi(p) }\sum_{\substack{d=1\\ d \equiv\pm h_2(q)}}^\infty \sum_{ r=1 }^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) \right\} \notag\\ & - \frac{1 }{2\phi(q)}\sum_{ d=1 }^\infty 
\sum_{\substack{r=1\\ r \equiv \pm h_1(p)}}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) + \frac{q^{ -\nu/2} }{2\phi(q)}\sum_{ d=1 }^\infty \sum_{\substack{r=1\\ r \equiv\pm h_1(p)}}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{p}}\ \right) \notag\\ &+ \left\{ \frac{ 1 }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{pq}}\ \right) - \frac{q^{ -\nu/2} }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{p}}\ \right) \right.\notag\\&\left.\ - \frac{p^{ \nu/2} }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{\frac{drt}{q}}\ \right) + \frac{p^{ \nu/2}q^{ -\nu/2} }{\phi(p)\phi(q)}\sum_{ d=1 }^\infty \sum_{ r=1}^\infty {d^{-\nu/2}} r^{\nu/2} W_{\nu}\left(4\pi \sqrt{ {drt} }\ \right) \right\}. \end{align} Substituting \eqref{Mai1} and \eqref{qw} in \eqref{Mainp2}, we obtain the right-hand side of \eqref{fann2}. Hence we get the result. (Theorem \ref{vor1.6} $\Rightarrow$ Theorem \ref{voree}) Let $\theta=h_1/p$ and $\psi=h_2/q$, and $\chi_1$ and $\chi_2$ be even primitive non-principal characters modulo $p$ and $q$, respectively. Now we multiply the identity \eqref{1234} in Theorem \ref{vor1.6} by $\bar{\chi_1}(h_1)\bar{\chi_2}(h_2)/\tau(\bar{\chi_1})\tau(\bar{\chi_2})$, and then summing on $h_1$, $ 0<h_1<p$, and $h_2$, $ 0<h_2<q$, one can show that Theorem \ref{vor1.6} imply Theorem \ref{voree}. \end{proof} \iffalse \begin{proof}[Theorem \rm{\ref{vor1.4}}][] {\color{red} PROOF IS WRONG . NO NED TO CHECK FURTHER } First, we consider by \eqref{sin} \begin{align} &\sum_{\alpha<j <\beta} \sum_{d/j} d^{-\nu}\cos\left(\frac{2 \pi j h }{dq}\right)f(j)\notag\\ &= \sum_{\alpha<j <\beta} j^{-\nu} f(j) \sum_{d/j} d^\nu \cos\left(\frac{2 \pi d h }{q}\right) =\sum_{\alpha<j <\beta} j^{-\nu} f(j) \left(\sum_{\substack{d|j\\ q/d}} d^\nu +\sum_{\substack{d|j\\ q\nmid d}}d^\nu \cos \left(\frac{ 2\pi dh }{q}\right)\right)\notag\\ &=\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} m^{-\nu}f(qm)\sum_{d|m} d^{\nu} +\sum_{\alpha<j <\beta} j^{-\nu}f(j)\sum_{\substack{d|j\\ q\nmid d}} \frac{d^\nu}{\phi(q)}\sum_{\chi \ even } \chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}}m^{-\nu}f(qm)\sum_{d|m} d^{\nu} -\sum_{\alpha<j <\beta} j^{-\nu} f(j) \sum_{\substack{d|j\\ q\nmid d}} \frac{d^\nu}{\phi(q)}\chi_0(d) + \sum_{\alpha<j <\beta} j^{-\nu}f(j) \sum_{\substack{d|j\\ q\nmid d}} \frac{d^\nu}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}}\chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}}m^{-\nu}f(qm)\sum_{d|m} d^{\nu} -\sum_{\alpha<j <\beta} j^{-\nu} f(j) \frac{1}{\phi(q)} \left\{ \sum_{d/j}d^\nu -\sum_{\substack{d|j\\ q/ d}}d^\nu \right\} \notag\\ &\ \ \ \ + \sum_{\alpha<j <\beta} j^{-\nu}f(j) \sum_{\substack{d|j\\ q\nmid d}} \frac{d^\nu}{\phi(q)} \sum_{\substack{\chi \neq \chi_0\\\chi even}}\chi(d) \chi(h) \tau(\bar{\chi}) \notag\\ &=\frac{q}{\phi(q)}\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}}m^{-\nu}f(qm) \sigma_\nu (m) - \frac{1}{\phi(q)} \sum_{\alpha<j <\beta} j^{-\nu} f(j) \sigma_\nu (j) +\frac{1}{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi even}} \chi(h) \tau(\bar{\chi})\sum_{\alpha<j <\beta} j^{-\nu} f(j) \sigma_{\nu,\chi} (j) \notag\\ &=\frac{q}{\phi(q)}\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} \sigma_{-\nu} (m) \ f(qm) - \frac{1}{\phi(q)} \sum_{\alpha<j <\beta} \sigma_{-\nu } (j) \ f(j) +\frac{1}{\phi(q) 
}\sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)}{ \tau(\bar{\chi})} \sum_{\alpha<j <\beta} \bar{ \sigma}_{-\nu,\chi} (j) f(j), \label{Z11} \end{align} in the last step, we used $\sigma_{\nu}(m)=m^\nu \sigma_{-\nu}(m) $. Now, we first evaluate the first two sums on right-hand side of \eqref{Z11}. By Lemma \eqref{vorlemma1} , we have \begin{align} 1 \end{align} \begin{align} 1\label{Z12} \end{align} ..............\\ Using {\color{blue} \begin{align*} & \frac{(2 \pi q)^\nu }{2 \tau(\chi) }\sum_{\alpha<j <\beta} \bar{\sigma}_{-\nu, \chi}(j)f(j) = \frac{\Gamma(\nu)\cos(\frac{\pi \nu}{2})L(\nu, \bar{\chi})}{(2 \pi)^{\nu}}\int_{\alpha} ^{ \beta }f(t) \ t^{-\nu} dt \notag\\ &+\pi \sum_{n=1}^{\infty} \sigma_{-\nu, \bar{\chi}}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) (t)^{-\frac{\nu}{2}} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{nt}) - Y_{\nu}(4\pi \sqrt{nt})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{nt}) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt \end{align*} } Finally, we evaluate the third sum of the right-hand side of \eqref{Z11} by using Theorem \eqref{vor1} \begin{align} & \frac{1}{\phi(q) }\sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)}{ \tau(\bar{\chi})} \sum_{\alpha<j <\beta} \bar{ \sigma}_{-\nu,\chi} (j) f(j) = \frac{2q\Gamma(\nu)\cos(\frac{\pi \nu}{2}) }{((2 \pi)^2q)^{\nu}} \frac{1}{\phi(q) } \sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)} L(\nu, \bar{\chi}) \int_{\alpha} ^{ \beta }f(t) \ t^{-\nu} dt \notag\\ &+\frac{ 1}{\phi(q) } \sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)}{ \tau(\bar{\chi})} \frac{2 \pi \tau (\chi)}{(2 \pi q)^\nu } \sum_{n=1}^{\infty} \sigma_{-\nu, \bar{\chi}}(n) n^{\nu/2} \int_{\alpha} ^{ \beta }f(t) (t)^{-\frac{\nu}{2}} \left\{ \left( \frac{2}{\pi} K_{\nu}(4\pi \sqrt{nt}) - Y_{\nu}(4\pi \sqrt{nt})\right) \cos \left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left.\ \ \ \ \ \ \ \ \ \ - J_{\nu}(4\pi \sqrt{nt}) \sin \left(\frac{\pi \nu}{2}\right) \right\} dt \label{Z13} \end{align} Using the fact $ \frac{1}{\phi(q) } \sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)} L(\nu, \bar{\chi}) =\frac{1}{2q^{\nu}}\{\zeta(\nu,h/q)+ \zeta(\nu,1-h/q) \}-\frac{1}{\phi(q)}(1-q^{-\nu})\zeta(\nu).$ Hence first sum in the right hand side of \eqref{Z13} equals \begin{align} \frac{q\Gamma(\nu)\cos(\frac{\pi \nu}{2}) }{(2 \pi q)^{2\nu}} \{\zeta(\nu,h/q)+ \zeta(\nu,1-h/q) \} \int_{\alpha} ^{ \beta }f(t) \ t^{-\nu} dt - \frac{2q\Gamma(\nu)\cos(\frac{\pi \nu}{2}) }{((2 \pi)^2q)^{\nu}\phi(q)} (1-q^{-\nu})\zeta(\nu) \int_{\alpha} ^{ \beta }f(t) \ t^{-\nu} dt \label{z14} \end{align} Now we consider \begin{align*} \frac{ 1}{\phi(q) } & \sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)}\sum_{n=1}^{\infty} \sigma_{-\nu, \bar{\chi}}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) = \frac{ 1}{\phi(q) }\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) \sum_{d/n}d^{-\nu} \sum_{\substack{\chi \neq \chi_0\\\chi even}} {\chi(h)}\Bar{\chi}(d)\\ &=\frac{ 1}{\phi(q) }\sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) \sum_{d/n}d^{-\nu}\left \{\sum_{ \chi even}{\chi(h)}\Bar{\chi}(d)-\chi_0(d) \right\}\\ &=\frac{ 1}{\phi(q) }\sum_{m=1}^{\infty} m^{\nu/2} \sum_{d=1}^{\infty} d^{-\nu/2} K_{\nu}(4\pi \sqrt{mdt}) \sum_{ \chi even}{\chi(h)}\Bar{\chi}(d) \\ &\ \ \ - \frac{ 1}{\phi(q)} \sum_{n=1}^{\infty} n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) \left(\sigma_{-\nu}(n)-q^{-\nu}\sigma_{-\nu}(\frac{n}{q}) \right) \\ &=\frac{1}{2}\sum_{m=1}^{\infty} m^{\nu/2}\sum_{\substack{d=1\{\rho}\equiv \pm h(mod \ q) }}^\infty 
\frac{K_{\nu}(4\pi \sqrt{mdt})}{d^{\nu/2}} \\ &\ \ - \frac{ 1}{\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) + \frac{ 1}{\phi(q)} \sum_{n=1}^{\infty} q^{-\nu}\sigma_{-\nu}\left(\frac{n}{q}\right) n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) \\ &=\frac{1}{2}\sum_{m=1}^{\infty} m^{\nu/2}\sum_{ r=0}^\infty \left\{ \frac{K_\nu(4\pi \sqrt{m(rq+h)t})}{(rq+h)^{\nu/2}}+ \frac{K_\nu(4\pi \sqrt{m(rq+q-h)t})}{(rq+h)^{\nu/2}} \right\}\\ &\ \ - \frac{ 1}{\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nt}) + \frac{ q^{-\nu/2}}{\phi(q)} \sum_{r=1}^{\infty} \sigma_{-\nu}(r) r^{\nu/2} K_{\nu}(4\pi \sqrt{qrt})\\ &=\frac{1}{2}\sum_{m=1}^{\infty} m^{\nu/2}\sum_{ r=0}^\infty \left\{ \frac{K_\nu(4\pi \sqrt{m(r+h/q)qt})}{(rq+h)^{\nu/2}}+ \frac{K_\nu(4\pi \sqrt{m(r+1-h/q)qt})}{(rq+h)^{\nu/2}} \right\}\\ &\ \ - \frac{ 1}{\phi(q)} \sum_{n=1}^{\infty} \sigma_{-\nu}(n) n^{\nu/2} K_{\nu}(4\pi \sqrt{nt} ) + \frac{ q^{-\nu/2}}{\phi(q)} \sum_{r=1}^{\infty} \sigma_{-\nu}(r) r^{\nu/2} K_{\nu}(4\pi \sqrt{qrt}). \end{align*} Using above equation , one can evaluate second sum in the right hand side of \eqref{Z13} . Employing \eqref{Z11},\eqref{Z13} ,\eqref{z14} , we have \begin{align*} & \sum_{\alpha<j <\beta} \sum_{d/j} \left(\frac{j}{d}\right)^{-\nu} \cos\left(\frac{2 \pi d h }{q}\right)f(j) = \frac{q}{\phi(q)}\sum_{\frac{\alpha}{q}<m <\frac{\beta}{q}} \sigma_{-\nu} (m) \ f(qm) - \frac{1}{\phi(q)} \sum_{\alpha<j <\beta} \sigma_{-\nu } (j) \ f(j) \\ &+ \frac{q\Gamma(\nu)\cos(\frac{\pi \nu}{2}) }{(2 \pi q)^{2\nu}} \{\zeta(\nu,h/q)+ \zeta(\nu,1-h/q) \} \int_{\alpha} ^{ \beta }f(t) \ t^{-\nu} dt - \frac{2q\Gamma(\nu)\cos(\frac{\pi \nu}{2}) }{((2 \pi)^2q)^{\nu}\phi(q)} (1-q^{-\nu})\zeta(\nu) \int_{\alpha} ^{ \beta }f(t) \ t^{-\nu} dt \\ &+\frac{q^{-\nu/2} }{(2 \pi q)^{\nu-1}}\sum_{m=1}^\infty m^{\nu/2}\sum_{r=0}^\infty\int_{\alpha}^\beta f(t) t^{-\nu/2}\left[ \left\{ \frac{2}{\pi}\left( \frac{K_\nu \left(4\pi \sqrt{m(r+h/q)tq}\right)}{(r+h/q)^{\nu/2}} + \frac{K_\nu\left(4\pi \sqrt{m(r+1-h/q )tq}\right)}{(r+1-h/q)^{\nu/2}} \right) \right.\right.\notag\\&\left.\left.\ -\left( \frac{Y_\nu \left(4\pi \sqrt{m(r+h/q)tq}\right)}{(r+h/q)^{\nu/2}} + \frac{Y_\nu\left(4\pi \sqrt{m(r+1-h/q )tq}\right)}{(r+1-h/q)^{\nu/2}} \right) \right\} \cos\left(\frac{\pi \nu}{2}\right) \right.\notag\\&\left. -\left( \frac{J_\nu \left(4\pi \sqrt{m(r+h/q)tq}\right)}{(r+h/q)^{\nu/2}} + \frac{J_\nu\left(4\pi \sqrt{m(r+1-h/q )tq}\right)}{(r+1-h/q)^{\nu/2}} \right) \sin(\frac{\pi \nu}{2}) \right]dt. \end{align*} \end{proof} \fi \end{document}
arXiv
\begin{definition}[Definition:Arborescence/Also known as] An arborescence of root $r$ can be referred to as an '''$r$-arborescence''', or just an '''arborescence'''. Some sources, for example {{BookReference|The Art of Computer Programming: Volume 1: Fundamental Algorithms||Donald E. Knuth}}, call an arborescence an '''oriented tree'''. Category:Definitions/Arborescences \end{definition}
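As a small illustrative aside (not part of the ProofWiki definition, and using one common convention in which the edges of an $r$-arborescence are directed away from the root $r$, so that every other vertex has exactly one incoming edge and is reachable from $r$), such an object can be checked programmatically:

```python
from collections import deque

def is_arborescence(vertices, edges, r):
    """Check the away-from-root convention: root has in-degree 0, every other
    vertex has in-degree 1, and every vertex is reachable from r."""
    indeg = {v: 0 for v in vertices}
    children = {v: [] for v in vertices}
    for u, v in edges:
        indeg[v] += 1
        children[u].append(v)
    if indeg[r] != 0 or any(indeg[v] != 1 for v in vertices if v != r):
        return False
    seen, queue = {r}, deque([r])
    while queue:
        for w in children[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

print(is_arborescence({1, 2, 3, 4}, [(1, 2), (1, 3), (3, 4)], r=1))  # True
```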
ProofWiki
The Annals of Mathematical Statistics, Volume 17, Number 1 (1946), 34-43.
The Theory of Unbiased Estimation
Paul R. Halmos

Abstract. Let $F(P)$ be a real valued function defined on a subset $\mathscr{D}$ of the set $\mathscr{D}^\ast$ of all probability distributions on the real line. A function $f$ of $n$ real variables is an unbiased estimate of $F$ if for every system, $X_1, \cdots, X_n$, of independent random variables with the common distribution $P$, the expectation of $f(X_1, \cdots, X_n)$ exists and equals $F(P)$, for all $P$ in $\mathscr{D}$. A necessary and sufficient condition for the existence of an unbiased estimate is given (Theorem 1), and the way in which this condition applies to the moments of a distribution is described (Theorem 2). Under the assumptions that this condition is satisfied and that $\mathscr{D}$ contains all purely discontinuous distributions it is shown that there is a unique symmetric unbiased estimate (Theorem 3); the most general (non symmetric) unbiased estimates are described (Theorem 4); and it is proved that among them the symmetric one is best in the sense of having the least variance (Theorem 5). Thus the classical estimates of the mean and the variance are justified from a new point of view, and also, from the theory, computable estimates of all higher moments are easily derived. It is interesting to note that for $n$ greater than 3 neither the sample $n$th moment about the sample mean nor any constant multiple thereof is an unbiased estimate of the $n$th moment about the mean. Attention is called to a paradoxical situation arising in estimating such non linear functions as the square of the first moment.

First available in Project Euclid: 28 April 2007
Permanent link: https://projecteuclid.org/euclid.aoms/1177731020
doi: 10.1214/aoms/1177731020
Citation: Halmos, Paul R. The Theory of Unbiased Estimation. Ann. Math. Statist. 17 (1946), no. 1, 34--43.
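A quick Monte Carlo illustration of the unbiasedness property discussed in the abstract (an editorial sketch, not part of Halmos's article): dividing the sum of squared deviations by $n-1$ gives an unbiased estimate of the variance, while dividing by $n$ does not.

```python
import numpy as np

# Editorial illustration: average of the sample variance over many repetitions,
# with the 1/(n-1) and 1/n normalizations, compared to the true variance.
rng = np.random.default_rng(0)
n, trials, true_var = 5, 200_000, 4.0            # samples of size 5 from N(0, 2^2)
x = rng.normal(0.0, 2.0, size=(trials, n))
print("true variance            :", true_var)
print("mean of s^2 with 1/(n-1) :", x.var(axis=1, ddof=1).mean())  # ~ 4.0
print("mean of s^2 with 1/n     :", x.var(axis=1, ddof=0).mean())  # ~ 4.0*(n-1)/n = 3.2
```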
CommonCrawl
Discrete & Continuous Dynamical Systems - A, February 2018, 38(2): 449-484. doi: 10.3934/dcds.2018021

Receding horizon control for the stabilization of the wave equation

Behzad Azmi 1 and Karl Kunisch 1,2
1. Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenbergerstraße 69, A-4040 Linz, Austria
2. Institute for Mathematics and Scientific Computing, University of Graz, Heinrichstraße 36, 8010 Graz, Austria
* Corresponding author: Behzad Azmi

Received: March 2017. Revised: August 2017. Published: February 2018.
Fund Project: This work has been supported by the International Research Training Group IGDK1754, funded by the DFG and FWF, and the ERC advanced grant 668998 (OCLOC) under the EU's H2020 research program.

Abstract: Stabilization of the wave equation by the receding horizon framework is investigated. Distributed control, Dirichlet boundary control, and Neumann boundary control are considered. Moreover, for each of these control actions, the well-posedness of the control system and the exponential stability of Receding Horizon Control (RHC) with respect to a proper functional analytic setting are investigated. Observability conditions are necessary to show the suboptimality and exponential stability of RHC. Numerical experiments are given to illustrate the theoretical results.

Keywords: Receding horizon control, model predictive control, asymptotic stability, observability, optimal control, infinite-dimensional systems.
Mathematics Subject Classification: Primary: 49N35, 93C20, 93D20.
Citation: Behzad Azmi, Karl Kunisch. Receding horizon control for the stabilization of the wave equation. Discrete & Continuous Dynamical Systems - A, 2018, 38 (2) : 449-484. doi: 10.3934/dcds.2018021
Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 181. Google Scholar ——, Non-homogeneous Boundary Value Problems and Applications. Vol. II, Springer-Verlag, New York-Heidelberg, 1972. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 182. Google Scholar D. Q. Mayne, J. B. Rawlings, C. V. Rao and P. O. M. Scokaert, Constrained model predictive control: Stability and optimality, Automatica J. IFAC, 36 (2000), 789-814. doi: 10.1016/S0005-1098(99)00214-9. Google Scholar B. S. Mordukhovich and J. -P. Raymond, Neumann boundary control of hyperbolic equations with pointwise state constraints, SIAM J. Control Optim., 43 (2004/05), 1354-1372 (electronic). doi: 10.1137/S0363012903431177. Google Scholar B. S. Mordukhovich and J.-P. Raymond, Dirichlet boundary control of hyperbolic equations in the presence of state constraints, Appl. Math. Optim., 49 (2004), 145-157. doi: 10.1007/BF02638149. Google Scholar A. Münch and A. F. Pazoto, Uniform stabilization of a viscous numerical approximation for a locally damped wave equation, ESAIM Control Optim. Calc. Var., 13 (2007), 265-293 (electronic). doi: 10.1051/cocv:2007009. Google Scholar M. Reble and F. Allgöwer, Unconstrained model predictive control and suboptimality estimates for nonlinear continuous-time systems, Automatica J. IFAC, 48 (2012), 1812-1817. doi: 10.1016/j.automatica.2012.05.067. Google Scholar D. L. Russell, Controllability and stabilizability theory for linear partial differential equations: recent progress and open questions, SIAM Rev., 20 (1978), 639-739. doi: 10.1137/1020095. Google Scholar L. T. Tebou and E. Zuazua, Uniform boundary stabilization of the finite difference space discretization of the $1-d$ wave equation, Adv. Comput. Math., 26 (2007), 337-365. doi: 10.1007/s10444-004-7629-9. Google Scholar R. Triggiani, Exact boundary controllability on $L_2(Ω)× H^{-1}(Ω)$ of the wave equation with Dirichlet boundary control acting on a portion of the boundary $\partialΩ$, and related problems, Appl. Math. Optim., 18 (1988), 241-277. doi: 10.1007/BF01443625. Google Scholar Figure 2. Control domains Figure Options Download as PowerPoint slide Figure 1. Snapshots of the uncontrolled state corresponding to Example 5.1 Figure 3. Evolution of $L^2(\omega)$-norm for RHC corresponding to Example 5.1 with different prediction horizons $T$ Figure 4. Evolution of $\|\mathcal{Y}_{rh}(t)\|_{\mathcal{H}}$ for different choices of $T$ Figure 5. Snapshots of receding horizon state for the choice of $T = 1.5$ corresponding to Example 5.1 Figure 7. Numerical results corresponding to Example 5.3 Table4 Algorithm 1 Receding Horizon Algorithm Require: Let the prediction horizon $T$, the sampling time $\delta<T$, and the initial point $(y^1_0, y^2_0)\in \mathcal{H}$ be given. Then we proceed through the following steps: 1: $k := 0, t_0 := 0$ and $\mathcal{Y}_{rh}(t_0):=(y^1_0, y^2_0)$. 
2: Find the optimal pair $(\mathcal{Y}_T^*(\cdot;\mathcal{Y}_{rh}(t_k), t_k), u^*_T(\cdot;\mathcal{Y}_{rh}(t_k), t_k))$ over the time horizon $[t_k, t_k+T]$ by solving the finite horizon open-loop problem
$ \begin{split} &\min_{u\in L^2(t_k, t_k+T; \mathcal{U})}J_T(u; \mathcal{Y}_{rh}(t_k)):= \min_{u\in L^2(t_k, t_k+T; \mathcal{U})} \int^{t_k+T}_{t_k} \ell(\mathcal{Y}(t), u(t))\,dt, \\ \text{s.t. } & \begin{cases} \dot{\mathcal{Y}} = \mathcal{A}\mathcal{Y}+\mathcal{B}u, & t \in (t_k, t_k+T), \\ \mathcal{Y}(t_k) = \mathcal{Y}_{rh}(t_k). \end{cases} \end{split} $
3: Set
$ \begin{split} u_{rh}(\tau)&:=u^*_T(\tau; \mathcal{Y}_{rh}(t_k), t_k) \text{ for all } \tau \in [t_k, t_k+\delta), \\ \mathcal{Y}_{rh}(\tau)&:=\mathcal{Y}^*_T(\tau; \mathcal{Y}_{rh}(t_k), t_k) \text{ for all } \tau \in [t_k, t_k+\delta], \\ t_{k+1} &:= t_k +\delta, \\ k &:= k+1. \end{split} $
4: Go to step 2.

Algorithm 2 RHC($\mathcal{Y}_0, T_{\infty}$)
Input: Let a final computational time horizon $T_{\infty}$ and an initial state $\mathcal{Y}_0:=(y^1_0, y^2_0) \in \mathcal{H}$ be given.
1: Choose a prediction horizon $T < T_{\infty}$ and a sampling time $\delta \in (0, T]$.
2: Consider a grid $0 = t_0 < t_1 < \cdots < t_r = T_{\infty}$ on the interval $[0, T_{\infty}]$, where $t_i = i\delta$ for $i = 0, \dots, r$.
3: for $i = 0, \dots, r-1$ do: solve the open-loop subproblem on $[t_i, t_i + T]$
$ \begin{split} \min \ & \frac{1}{2}\int_{t_i}^{t_i + T} \|\mathcal{Y}(t)\|^2_{\mathcal{H}}\,dt + \frac{\beta}{2}\int_{t_i}^{t_i + T}\|u(t)\|^2_{\mathcal{U}}\,dt, \\ \text{subject to } & \begin{cases} \dot{\mathcal{Y}} = \mathcal{A}\mathcal{Y}+\mathcal{B}u, & t \in (t_i, t_i+T), \\ \mathcal{Y}(t_i) = \mathcal{Y}_T^*(t_i) \mbox{ if } i \geq 1, \mbox{ or } \mathcal{Y}(t_i) = (y^1_0, y^2_0) \mbox{ if } i = 0, \end{cases} \end{split} $
where $\mathcal{Y}_T^*(\cdot)$ is the solution to the previous subproblem on $[t_{i-1}, t_{i-1}+T]$.
4: The model predictive pair $\left( \mathcal{Y}_{rh}^*(\cdot), u_{rh}^*(\cdot)\right)$ is the concatenation of the optimal pairs $\left( \mathcal{Y}_T^*(\cdot), u_T^*(\cdot)\right)$ on the finite horizon intervals $[t_i, t_{i+1}]$ with $i = 0, \dots, r-1$.
(A schematic finite-dimensional sketch of this receding-horizon loop is given below, after Table 1.)

Table 1. Numerical results for Example 5.1
Prediction horizon | $J_{T_{\infty}}$ | $\|\mathcal{Y}_{rh}\|_{L^2(0, T_{\infty};\mathcal{H}_1)}$ | $\|\mathcal{Y}_{rh}(T_{\infty})\|_{\mathcal{H}_1}$ | iter
$T = 1.5$ | $8.20\times 10^2$ | $40.19$ | $2.62 \times 10^{-8}$ | $1515$
$T = 1$ | $1.13\times 10^3$ | $47.40$ | $3.03\times10^{-6}$ | $847$
$T = 0.5$ | $3.13\times 10^{3}$ | $79.10$ | $2.00 \times 10^{-3}$ | $550$
$T = 0.25$ | $1.94\times 10^{4}$ | $197.43$ | $3.79 \times 10^{-1}$ | $373$
$T = 1.5$ | $2.20$ | $1.93$ | $2.11 \times 10^{-6}$ | $715$
$T = 1$ | $2.75$ | $2.23$ | $3.42\times10^{-5}$ | $599$
$T = 0.5$ | $6.77$ | $3.64$ | $6.00 \times 10^{-3}$ | $445$
$T = 0.25$ | $33.75$ | $8.20$ | $2.36\times 10^{-1}$ | $359$
$T = 1.5$ | $1.30 \times 10^{4}$ | $161.47$ | $3.85 \times 10^{-6}$ | $5348$
$T = 1$ | $1.67 \times 10^{4}$ | $182.97$ | $7.08\times10^{-5}$ | $3303$
$T = 0.25$ | $2.41 \times 10^{5}$ | $694.40$ | $9.26$ | $823$
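The structure of the receding-horizon loop in Algorithms 1 and 2 can be illustrated with a small, self-contained sketch. The Python code below is not from the paper: the wave equation is replaced by an arbitrary discrete-time linear system, each unconstrained open-loop LQ subproblem is solved by a backward Riccati recursion, and all matrices, weights and horizon lengths are illustrative assumptions only.

import numpy as np

def finite_horizon_lqr_gain(Ad, Bd, Q, R, N):
    """Feedback gain for the first step of the N-step finite-horizon LQ problem
    (backward Riccati recursion, terminal weight taken equal to Q)."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
        P = Q + Ad.T @ P @ (Ad - Bd @ K)
    return K  # the optimal open-loop control starts with u = -K x

def receding_horizon(Ad, Bd, Q, R, x0, horizon_steps, total_steps):
    """Algorithm 1/2 in discrete time: solve the open-loop problem, apply only the
    first control move, advance the plant by one sampling interval, repeat."""
    x = x0.copy()
    trajectory = [x.copy()]
    for _ in range(total_steps):
        # for a time-invariant plant the gain is the same at every step; it is
        # recomputed here only to mirror the structure of the algorithm
        K = finite_horizon_lqr_gain(Ad, Bd, Q, R, horizon_steps)
        u = -K @ x
        x = Ad @ x + Bd @ u
        trajectory.append(x.copy())
    return np.array(trajectory)

# toy 2-state example (illustrative numbers only)
Ad = np.array([[1.0, 0.1], [0.0, 1.0]])
Bd = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)
xs = receding_horizon(Ad, Bd, Q, R, np.array([1.0, 0.0]), horizon_steps=15, total_steps=50)
print("final state norm:", np.linalg.norm(xs[-1]))  # should be small for a long enough horizon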
Romberg's method

In numerical analysis, Romberg's method[1] is used to estimate the definite integral $\int _{a}^{b}f(x)\,dx$ by applying Richardson extrapolation[2] repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triangular array. Romberg's method is a Newton–Cotes formula: it evaluates the integrand at equally spaced points. The integrand must have continuous derivatives, though fairly good results may be obtained if only a few derivatives exist. If it is possible to evaluate the integrand at unequally spaced points, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally more accurate. The method is named after Werner Romberg (1909–2003), who published the method in 1955.

Method

Using $ h_{n}={\frac {(b-a)}{2^{n}}}$, the method can be defined inductively by
${\begin{aligned}R(0,0)&=h_{1}(f(a)+f(b))\\R(n,0)&={\tfrac {1}{2}}R(n-1,0)+h_{n}\sum _{k=1}^{2^{n-1}}f(a+(2k-1)h_{n})\\R(n,m)&=R(n,m-1)+{\tfrac {1}{4^{m}-1}}(R(n,m-1)-R(n-1,m-1))\\&={\frac {1}{4^{m}-1}}(4^{m}R(n,m-1)-R(n-1,m-1))\end{aligned}}$
where $n\geq m$ and $m\geq 1$. In big O notation, the error for R(n, m) is:[3]
$O\left(h_{n}^{2m+2}\right).$
The zeroth extrapolation, R(n, 0), is equivalent to the trapezoidal rule with $2^{n}+1$ points; the first extrapolation, R(n, 1), is equivalent to Simpson's rule with $2^{n}+1$ points. The second extrapolation, R(n, 2), is equivalent to Boole's rule with $2^{n}+1$ points. The further extrapolations differ from Newton–Cotes formulas: further Romberg extrapolations expand on Boole's rule in very slight ways, modifying the weights into ratios similar to those in Boole's rule, whereas further Newton–Cotes methods produce increasingly differing weights, eventually leading to large positive and negative weights. This is indicative of how Newton–Cotes methods based on interpolating polynomials of large degree fail to converge for many integrals, while Romberg integration is more stable.

By labelling our $ O(h^{2})$ approximations as $ A_{0}{\big (}{\frac {h}{2^{n}}}{\big )}$ instead of $ R(n,0)$, we can perform Richardson extrapolation with the error formula defined below:
$\int _{a}^{b}f(x)\,dx=A_{0}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}+a_{0}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}^{2}+a_{1}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}^{4}+a_{2}{\bigg (}{\frac {h}{2^{n}}}{\bigg )}^{6}+\cdots $
Once we have obtained our $ O(h^{2(m+1)})$ approximations $ A_{m}{\big (}{\frac {h}{2^{n}}}{\big )}$, we can label them as $ R(n,m)$.

When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch & Stoer (1967).

A geometric example

To estimate the area under a curve, the trapezoid rule is applied first with one piece, then two, then four, and so on. After the trapezoid rule estimates are obtained, Richardson extrapolation is applied.
• For the first iteration, the two-piece and one-piece estimates are used in the formula (4 × (more accurate) − (less accurate))/3. The same formula is then used to compare the four-piece and the two-piece estimate, and likewise for the higher estimates.
• For the second iteration, the values of the first iteration are used in the formula (16 × (more accurate) − (less accurate))/15.
• The third iteration uses the next power of 4: (64 × (more accurate) − (less accurate))/63, applied to the values derived by the second iteration.
• The pattern is continued until there is one estimate.
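The iteration pattern described in the bullets above can be checked with a few lines of Python (an illustrative sketch; the starting values 0, 16, 30 and 39 are the trapezoid estimates used in the table that follows):

# trapezoid estimates for 1, 2, 4 and 8 pieces (from the geometric example below)
rows = [[0.0, 16.0, 30.0, 39.0]]
m = 1
while len(rows[-1]) > 1:
    prev, factor = rows[-1], 4 ** m
    # (factor * more_accurate - less_accurate) / (factor - 1)
    rows.append([(factor * prev[i + 1] - prev[i]) / (factor - 1) for i in range(len(prev) - 1)])
    m += 1
for r in rows:
    print([round(v, 3) for v in r])
# prints [0.0, 16.0, 30.0, 39.0], [21.333, 34.667, 42.0], [35.556, 42.489], [42.599]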
Number of pieces | Trapezoid estimate | First iteration: (4 MA − LA)/3 | Second iteration: (16 MA − LA)/15 | Third iteration: (64 MA − LA)/63
1 | 0 | (4×16 − 0)/3 = 21.333... | (16×34.667 − 21.333)/15 = 35.556... | (64×42.489 − 35.556)/63 = 42.599...
2 | 16 | (4×30 − 16)/3 = 34.666... | (16×42 − 34.667)/15 = 42.489... |
4 | 30 | (4×39 − 30)/3 = 42 | |
8 | 39 | | |
• MA stands for more accurate, LA stands for less accurate.

Example

As an example, the Gaussian function is integrated from 0 to 1, i.e. the error function erf(1) ≈ 0.842700792949715. The triangular array is calculated row by row, and the calculation is terminated if the two last entries in the last row differ by less than $10^{-8}$.
0.77174333
0.82526296 0.84310283
0.83836778 0.84273605 0.84271160
0.84161922 0.84270304 0.84270083 0.84270066
0.84243051 0.84270093 0.84270079 0.84270079 0.84270079
The result in the lower right corner of the triangular array is accurate to the digits shown. It is remarkable that this result is derived from the less accurate approximations obtained by the trapezium rule in the first column of the triangular array.

Implementation

Here is an example of a computer implementation of the Romberg method (in the C programming language):

#include <stdio.h>
#include <math.h>

void print_row(size_t i, double *R) {
   printf("R[%2zu] = ", i);
   for (size_t j = 0; j <= i; ++j) {
      printf("%f ", R[j]);
   }
   printf("\n");
}

/*
INPUT:
  (*f)      : pointer to the function to be integrated
  a         : lower limit
  b         : upper limit
  max_steps : maximum steps of the procedure
  acc       : desired accuracy
OUTPUT:
  Rp[max_steps-1] : approximate value of the integral of the function f
                    for x in [a,b] with accuracy 'acc' and steps 'max_steps'.
*/
double romberg(double (*f)(double), double a, double b, size_t max_steps, double acc) {
   double R1[max_steps], R2[max_steps];  // buffers
   double *Rp = &R1[0], *Rc = &R2[0];    // Rp is previous row, Rc is current row
   double h = b - a;                     // step size
   Rp[0] = (f(a) + f(b)) * h * 0.5;      // first trapezoidal step

   print_row(0, Rp);
   for (size_t i = 1; i < max_steps; ++i) {
      h /= 2.;
      double c = 0;
      size_t ep = 1 << (i - 1);          // 2^(i-1)
      for (size_t j = 1; j <= ep; ++j) {
         c += f(a + (2 * j - 1) * h);
      }
      Rc[0] = h * c + .5 * Rp[0];        // R(i,0)
      for (size_t j = 1; j <= i; ++j) {
         double n_k = pow(4, j);
         Rc[j] = (n_k * Rc[j-1] - Rp[j-1]) / (n_k - 1);  // compute R(i,j)
      }
      // Print ith row of R; R[i,i] is the best estimate so far
      print_row(i, Rc);
      if (i > 1 && fabs(Rp[i-1] - Rc[i]) < acc) {
         return Rc[i];
      }
      // swap Rp and Rc as we only need the last row
      double *rt = Rp;
      Rp = Rc;
      Rc = rt;
   }
   return Rp[max_steps-1];  // return our best guess
}

Here is an example of a computer implementation of the Romberg method in the JavaScript programming language:
/**
 * INPUTS
 *   func    = integrand, function to be integrated
 *   a       = lower limit of integration
 *   b       = upper limit of integration
 *   nmax    = number of partitions, n = 2^nmax
 *   tol_ae  = maximum absolute approximate error acceptable (should be >= 0)
 *   tol_rae = maximum absolute relative approximate error acceptable (should be > 0)
 * OUTPUT
 *   integ_val = estimated value of the integral
 *
 * Note: this implementation assumes the mathjs library is loaded and available as `math`.
 */
function auto_integrator_trap_romb_hnm(func, a, b, nmax, tol_ae, tol_rae) {
    if (typeof a !== 'number') {
        throw new TypeError('<a> must be a number');
    }
    if (typeof b !== 'number') {
        throw new TypeError('<b> must be a number');
    }
    if (!Number.isInteger(nmax) || nmax < 1) {
        throw new TypeError('<nmax> must be an integer greater than or equal to one.');
    }
    if ((typeof tol_ae !== 'number') || tol_ae < 0) {
        throw new TypeError('<tol_ae> must be a number greater than or equal to zero');
    }
    if ((typeof tol_rae !== 'number') || tol_rae <= 0) {
        throw new TypeError('<tol_rae> must be a number greater than zero');
    }

    var h = b - a;

    // initialize the matrix where the values of the integral are stored
    var Romb = [];
    for (var i = 0; i < nmax + 1; i++) {
        Romb.push([]);
        for (var j = 0; j < nmax + 1; j++) {
            Romb[i].push(math.bignumber(0));
        }
    }

    // calculating the value with the 1-segment trapezoidal rule
    Romb[0][0] = 0.5 * h * (func(a) + func(b));
    var integ_val = Romb[0][0];

    for (var i = 1; i <= nmax; i++) {
        // updating the value with double the number of segments,
        // evaluating the integrand only at the new points
        // See https://autarkaw.org/2009/02/28/an-efficient-formula-for-an-automatic-integrator-based-on-trapezoidal-rule/
        h = h / 2;
        var integ = 0;
        for (var j = 1; j <= 2**i - 1; j += 2) {
            integ = integ + func(a + j * h);
        }
        Romb[i][0] = 0.5 * Romb[i-1][0] + integ * h;

        // Using the Romberg method to calculate the next extrapolated value
        // See https://young.physics.ucsc.edu/115/romberg.pdf
        for (var k = 1; k <= i; k++) {
            var addterm = Romb[i][k-1] - Romb[i-1][k-1];
            addterm = addterm / (4**k - 1.0);
            Romb[i][k] = Romb[i][k-1] + addterm;

            // Calculating the absolute approximate error
            var Ea = math.abs(Romb[i][k] - Romb[i][k-1]);
            // Calculating the absolute relative approximate error (in percent)
            var epsa = math.abs(Ea / Romb[i][k]) * 100.0;
            // Assigning the most recent value to the return variable
            integ_val = Romb[i][k];
            // returning the value if either tolerance is met
            if (epsa < tol_rae || Ea < tol_ae) {
                return integ_val;
            }
        }
    }
    // returning the last calculated value of the integral whether a tolerance is met or not
    return integ_val;
}

References
1. Romberg 1955
2. Richardson 1911
3. Mysovskikh 2002
• Richardson, L. F. (1911), "The Approximate Arithmetical Solution by Finite Differences of Physical Problems Involving Differential Equations, with an Application to the Stresses in a Masonry Dam", Philosophical Transactions of the Royal Society A, 210 (459–470): 307–357, doi:10.1098/rsta.1911.0009, JSTOR 90994
• Romberg, W. (1955), "Vereinfachte numerische Integration", Det Kongelige Norske Videnskabers Selskab Forhandlinger, Trondheim, 28 (7): 30–36
• Thacher Jr., Henry C. (July 1964), "Remark on Algorithm 60: Romberg integration", Communications of the ACM, 7 (7): 420–421, doi:10.1145/364520.364542
• Bauer, F.L.; Rutishauser, H.; Stiefel, E. (1963), Metropolis, N. C.; et al. (eds.), "New aspects in numerical quadrature", Experimental Arithmetic, high-speed computing and mathematics, Proceedings of Symposia in Applied Mathematics, AMS (15): 199–218
• Bulirsch, Roland; Stoer, Josef (1967), "Handbook Series Numerical Integration. Numerical quadrature by extrapolation", Numerische Mathematik, 9: 271–278, doi:10.1007/bf02162420
• Mysovskikh, I.P. (2002) [1994], "Romberg method", in Hazewinkel, Michiel (ed.), Encyclopedia of Mathematics, Springer-Verlag, ISBN 1-4020-0609-8
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 4.3. Romberg Integration", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8

External links
• ROMBINT – code for MATLAB (author: Martin Kacenak)
• Free online integration tool using Romberg, Fox–Romberg, Gauss–Legendre and other numerical methods
• SciPy implementation of Romberg's method
• Romberg.jl — Julia implementation (supporting arbitrary factorizations, not just $2^{n}+1$ points)
Robot Camera

Nanotechnology is the technology of objects whose dimensions are in the range $0.1$ to $100$ nanometre (nm), i.e. $0.1\times 10^{-9}$ to $100\times 10^{-9}$ metres. Try these preliminary questions to help you get some idea just how small that is and how it relates to other units of length:
• An average human hair has a diameter of about 50 microns. How many nanometres is this?
• Measure the height of a new pack of printer paper. Packs come in reams which contain 500 sheets of paper: estimate the thickness of one piece of paper. What is this in nanometres? What is it in microns?
Do you think that it will be possible in principle to make a robot camera using nanotechnology which could be injected into a person's arteries in order to see if they are becoming blocked? You will need to consider what size a robot would need to be to move along an artery in the blood stream. Here is some information to get you started:
• the interior diameter of the coronary arteries is about 1-2 mm
• blood contains red and white blood cells and platelets
• red blood cells (erythrocytes) are about 6-8 microns in diameter
• white blood cells (leukocytes) are about 15 microns in diameter
• platelets (thrombocytes) are about 2-3 microns in diameter
Here is an image of blood components taken from http://en.wikipedia.org/wiki/File:Red_White_Blood_cells.jpg
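The preliminary unit-conversion questions above can be checked with a short Python sketch; the 5 cm ream height is an assumed measurement, and the artery and red-cell sizes are taken as midpoints of the quoted ranges:

micron = 1e-6      # metres
nanometre = 1e-9   # metres

hair = 50 * micron
print("hair diameter:", hair / nanometre, "nm")                  # 50 000 nm

ream_height = 0.05                  # assumed: a 500-sheet ream measured at about 5 cm
sheet = ream_height / 500
print("one sheet:", sheet / nanometre, "nm =", sheet / micron, "microns")   # 100 000 nm = 100 microns

artery = 1.5e-3                     # midpoint of the 1-2 mm interior diameter
red_cell = 7 * micron               # midpoint of the 6-8 micron range
print("red blood cells spanning an artery:", round(artery / red_cell))      # roughly 200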
Problems in Mathematics

Tagged: inconsistent system
by Yu · Published 02/13/2017 · Last modified 07/27/2017

The Possibilities For the Number of Solutions of Systems of Linear Equations that Have More Equations than Unknowns
Problem 295
Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A system of $5$ equations in $3$ unknowns that has $x_1=0, x_2=-3, x_3=1$ as a solution.
(b) A homogeneous system of $5$ equations in $4$ unknowns such that the rank of the system is $4$.
(The Ohio State University, Linear Algebra Midterm Exam Problem)

Summary: Possibilities for the Solution Set of a System of Linear Equations
In this post, we summarize theorems about the possibilities for the solution set of a system of linear equations and solve the following problems.
Determine all possibilities for the solution set of the system of linear equations described below.
(a) A homogeneous system of $3$ equations in $5$ unknowns.
(b) A homogeneous system of $5$ equations in $4$ unknowns.
(c) A system of $5$ equations in $4$ unknowns.
(d) A system of $2$ equations in $3$ unknowns that has $x_1=1, x_2=-5, x_3=0$ as a solution.
(e) A homogeneous system of $4$ equations in $4$ unknowns.
(f) A homogeneous system of $3$ equations in $4$ unknowns.
(g) A homogeneous system that has $x_1=3, x_2=-2, x_3=1$ as a solution.
(h) A homogeneous system of $5$ equations in $3$ unknowns and the rank of the system is $3$.
(i) A system of $3$ equations in $2$ unknowns and the rank of the system is $2$.
(j) A homogeneous system of $4$ equations in $3$ unknowns and the rank of the system is $2$.

Find Values of $a$ so that Augmented Matrix Represents a Consistent System
Suppose that the following matrix $A$ is the augmented matrix for a system of linear equations.
\[A= \left[\begin{array}{rrr|r} 1 & 2 & 3 & 4 \\ 2 &-1 & -2 & a^2 \\ -1 & -7 & -11 & a \end{array} \right],\]
where $a$ is a real number. Determine all the values of $a$ so that the corresponding system is consistent. (A symbolic check of this problem is sketched after the quiz below.)

A Condition that a Linear System has Nontrivial Solutions
For what value(s) of $a$ does the system have nontrivial solutions?
\begin{align*}
&x_1+2x_2+x_3=0\\
&-x_1-x_2+x_3=0\\
& 3x_1+4x_2+ax_3=0.
\end{align*}

Possibilities For the Number of Solutions for a Linear System
Determine whether each of the systems of equations (or matrix equations) described below has no solution, one unique solution or infinitely many solutions, and justify your answer.
(a) \[\left\{ \begin{array}{c} ax+by=c \\ dx+ey=f, \end{array} \right. \] where $a,b,c, d$ are scalars satisfying $a/d=b/e=c/f$.
(b) $A \mathbf{x}=\mathbf{0}$, where $A$ is a singular matrix.
(c) A homogeneous system of $3$ equations in $4$ unknowns.
(d) $A\mathbf{x}=\mathbf{b}$, where the row-reduced echelon form of the augmented matrix $[A|\mathbf{b}]$ looks as follows: \[\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 &1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.\]
(The Ohio State University, Linear Algebra Exam)

Quiz: Possibilities For the Solution Set of a Homogeneous System of Linear Equations
Problem 93
4 multiple choice questions about possibilities for the solution set of a homogeneous system of linear equations. The solutions will be given after completing all problems.

True or False Quiz About a System of Linear Equations
Determine whether each of the following sentences is True or False.

True or False. A linear system of four equations in three unknowns is always inconsistent.
This is False. For example, the homogeneous system
\[\left\{ \begin{array}{l} x+y+z=0 \\ 2x+2y+2z=0 \\ 3x+3y+3z=0 \\ 4x+4y+4z=0 \end{array} \right.\]
of four equations in three unknowns has the solution $(x,y,z)=(0,0,0)$. So the system is consistent.

True or False. A linear system with fewer equations than unknowns must have infinitely many solutions.
This is False. For example, consider the system of one equation with two unknowns
\[0x+0y=1.\]
This system has no solution at all.

True or False. If the system $A\mathbf{x}=\mathbf{b}$ has a unique solution, then $A$ must be a square matrix.
This is False. For example, consider the non-square matrix $A=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Then the system $A\mathbf{x}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ has the unique solution $x=0$, but $A$ is not a square matrix.
(Purdue University Linear Algebra Exam)
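For the problem "Find Values of $a$ so that Augmented Matrix Represents a Consistent System" listed above, a symbolic row reduction with SymPy (a sketch, not part of the original posts) makes the consistency condition explicit:

import sympy as sp

a = sp.symbols('a')
M = sp.Matrix([[1, 2, 3, 4],
               [2, -1, -2, a**2],
               [-1, -7, -11, a]])
# eliminate below the first pivot by hand so the symbolic entry stays visible
M[1, :] = M[1, :] - 2 * M[0, :]
M[2, :] = M[2, :] + M[0, :]
M[2, :] = M[2, :] - M[1, :]
# the last row is now [0, 0, 0, -a**2 + a + 12]; the system is consistent
# exactly when that entry vanishes
print(sp.solve(sp.Eq(M[2, 3], 0), a))   # -> [-3, 4]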
The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework
Evolution Equations & Control Theory, March 2017, 6(1): 135-154. doi: 10.3934/eect.2017008
Jing Zhang, Department of Mathematics and Economics, Virginia State University, Petersburg, VA 23806, USA
Received October 2016; Revised August 2016; Published December 2016

In this paper, we study a fluid-structure interaction model of a Stokes-wave equation coupling system with Kelvin-Voigt type damping. We show that this damped coupling system generates an analytic semigroup, and thus the semigroup solution, which also satisfies the variational framework of weak solutions, decays to zero at an exponential rate.

Keywords: Fluid-Structure Interaction, Stokes equation, wave equation, Kelvin-Voigt damping, analyticity, uniform stabilization.
Mathematics Subject Classification: Primary: 35M10, 35B35; Secondary: 35A01.
Citation: Jing Zhang. The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework. Evolution Equations & Control Theory, 2017, 6 (1) : 135-154. doi: 10.3934/eect.2017008
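As a rough numerical illustration of the damping mechanism (a toy 1-D wave equation with Kelvin-Voigt damping, discretized by standard finite differences; this is not the Stokes-wave coupling system analyzed in the paper, and all numerical values are assumptions), the discrete energy decays in time:

import numpy as np
from scipy.integrate import solve_ivp

N = 100                      # interior grid points (assumed discretization level)
h = 1.0 / (N + 1)
eps = 0.05                   # Kelvin-Voigt damping coefficient (illustrative value)
diag = -2.0 * np.ones(N)
off = np.ones(N - 1)
A = (np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)) / h**2   # discrete Laplacian, Dirichlet BC

def rhs(t, z):
    u, v = z[:N], z[N:]
    # u_tt = u_xx + eps * u_xxt, written as a first-order system in (u, v)
    return np.concatenate([v, A @ u + eps * (A @ v)])

x = np.linspace(h, 1 - h, N)
z0 = np.concatenate([np.sin(np.pi * x), np.zeros(N)])   # initial displacement and velocity
sol = solve_ivp(rhs, (0.0, 5.0), z0, method="BDF", t_eval=np.linspace(0.0, 5.0, 6))

for t, z in zip(sol.t, sol.y.T):
    u, v = z[:N], z[N:]
    energy = 0.5 * h * (v @ v + u @ (-A) @ u)   # discrete analogue of the wave energy
    print(f"t = {t:3.1f}   energy = {energy:.3e}")   # the energy should decay for eps > 0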
Figure 1. The fluid-structure interaction
A class of $p$-ary cyclic codes and their weight enumerators
Advances in Mathematics of Communications, May 2016, 10(2): 437-457. doi: 10.3934/amc.2016017
Long Yu and Hongwei Liu, School of Mathematics and Statistics, Central China Normal University, Wuhan, Hubei 430079, China
Received September 2014; Revised October 2015; Published April 2016

Let $\mathbb{F}_{p^m}$ be a finite field with $p^m$ elements, where $p$ is an odd prime and $m$ is a positive integer. Let $h_1(x)$ and $h_2(x)$ be the minimal polynomials of $-\pi^{-1}$ and $\pi^{-\frac{p^k+1}{2}}$ over $\mathbb{F}_p$, respectively, where $\pi$ is a primitive element of $\mathbb{F}_{p^m}$, and $k$ is a positive integer such that $\frac{m}{\gcd(m,k)}\geq 3$. In [23], Zhou et al. obtained the weight distribution of a class of cyclic codes over $\mathbb{F}_p$ with parity-check polynomial $h_1(x)h_2(x)$ in the following two cases:
• $k$ is even and $\gcd(m,k)$ is odd;
• $\frac{m}{\gcd(m,k)}$ and $\frac{k}{\gcd(m,k)}$ are both odd.
In this paper, we further investigate this class of cyclic codes over $\mathbb{F}_p$ in the remaining cases and determine the weight distribution of this class of cyclic codes.

Keywords: exponential sum, cyclic code, weight distribution, quadratic form.
Mathematics Subject Classification: Primary: 11T71; Secondary: 94B1.
Citation: Long Yu, Hongwei Liu. A class of $p$-ary cyclic codes and their weight enumerators. Advances in Mathematics of Communications, 2016, 10 (2) : 437-457. doi: 10.3934/amc.2016017

References:
P. Delsarte, On subfield subcodes of modified Reed-Solomon codes, IEEE Trans. Inf. Theory, 21 (1975), 575.
C. Ding, Y. Liu, C. Ma and L. Zeng, The weight distributions of the duals of cyclic codes with two zeros, IEEE Trans. Inf. Theory, 57 (2011), 8000. doi: 10.1109/TIT.2011.2165314.
C. Ding and J. Yang, Hamming weight in irreducible codes, Discrete Math., 313 (2013), 434. doi: 10.1016/j.disc.2012.11.009.
K. Feng and J. Luo, Weight distribution of some reducible cyclic codes, Finite Fields Appl., 14 (2008), 390. doi: 10.1016/j.ffa.2007.03.003.
T. Feng, On cyclic codes of length $2^{2^r}-1$ with two zeros whose dual codes have three weights, Des. Codes Crypt., 62 (2012), 253. doi: 10.1007/s10623-011-9514-0.
R. Lidl and H. Niederreiter, Finite Fields, Cambridge Univ. Press, (1983).
J. Luo and K. Feng, Cyclic codes and sequences from generalized Coulter-Matthews function, IEEE Trans. Inf. Theory, 54 (2008), 5345. doi: 10.1109/TIT.2008.2006394.
J. Luo and K. Feng, On the weight distributions of two classes of cyclic codes, IEEE Trans. Inf. Theory, 54 (2008), 5332. doi: 10.1109/TIT.2008.2006424.
C. Ma, L. Zeng, Y. Liu, D. Feng and C. Ding, The weight enumerator of a class of cyclic codes, IEEE Trans. Inf. Theory, 57 (2011), 397. doi: 10.1109/TIT.2010.2090272.
F. MacWilliams and N. Sloane, The Theory of Error-Correcting Codes, North-Holland, (1997).
A. Rao and N. Pinnawala, A family of two-weight irreducible cyclic codes, IEEE Trans. Inf. Theory, 56 (2010), 2568. doi: 10.1109/TIT.2010.2046201.
G. Vega, The weight distribution of an extended class of reducible cyclic codes, IEEE Trans. Inf. Theory, 58 (2012), 4862. doi: 10.1109/TIT.2012.2193376.
G. Vega and J. Wolfmann, New classes of $2$-weight cyclic codes, Des. Codes Crypt., 42 (2007), 327. doi: 10.1007/s10623-007-9038-9.
B. Wang, C. Tang, Y. Qi, Y. Yang and M. Xu, The weight distributions of cyclic codes and elliptic curves, IEEE Trans. Inf. Theory, 58 (2012), 7253. doi: 10.1109/TIT.2012.2210386.
M. Xiong, The weight distributions of a class of cyclic codes, Finite Fields Appl., 18 (2012), 933. doi: 10.1016/j.ffa.2012.06.001.
M. Xiong, The weight distributions of a class of cyclic codes II, Des. Codes Crypt., 72 (2014), 511. doi: 10.1007/s10623-012-9785-0.
M. Xiong, The weight distributions of a class of cyclic codes III, Finite Fields Appl., 21 (2013), 84. doi: 10.1016/j.ffa.2013.01.004.
L. Yu and H. Liu, The weight distribution of a family of p-ary cyclic codes, Des. Codes Crypt., 78 (2016), 731. doi: 10.1007/s10623-014-0029-3.
X. Zeng, L. Hu, W. Jiang, Q. Yue and X. Cao, The weight distribution of a class of $p$-ary cyclic codes, Finite Fields Appl., 16 (2010), 56. doi: 10.1016/j.ffa.2009.12.001.
X. Zeng, J. Shan and L. Hu, A triple-error-correcting cyclic code from the Gold and Kasami-Welch APN power functions, Finite Fields Appl., 18 (2012), 70. doi: 10.1016/j.ffa.2011.06.005.
D. Zheng, X. Wang, L. Yu and H. Liu, The weight enumerators of several classes of $p$-ary cyclic codes, Discrete Math., 338 (2015), 1264. doi: 10.1016/j.disc.2015.02.005.
D. Zheng, X. Wang, X. Zeng and L. Hu, The weight distribution of a family of $p$-ary cyclic codes, Des. Codes Crypt., 75 (2015), 263. doi: 10.1007/s10623-013-9908-2.
Z. Zhou and C. Ding, A class of three-weight cyclic codes, Finite Fields Appl., 25 (2014), 79. doi: 10.1016/j.ffa.2013.08.005.
Z. Zhou, C. Ding, J. Luo and A. Zhang, A family of five-weight cyclic codes and their weight enumerators, IEEE Trans. Inf. Theory, 59 (2013), 6674. doi: 10.1109/TIT.2013.2267722.
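The weight distributions studied above concern codes far too large to enumerate directly, but the notion itself can be illustrated by brute force on a toy example. The Python sketch below is not from the paper: it computes the weight distribution of the binary [7,4] cyclic code generated by $g(x)=1+x+x^3$, whose weight enumerator is $1+7z^3+7z^4+z^7$.

from itertools import product
from collections import Counter

p, n = 2, 7
g = [1, 1, 0, 1]                  # g(x) = 1 + x + x^3, coefficients in increasing degree
k = n - (len(g) - 1)              # dimension of the cyclic code generated by g(x)

def encode(msg):
    """Multiply m(x) by g(x) modulo x^n - 1 over GF(p)."""
    c = [0] * n
    for i, mi in enumerate(msg):
        for j, gj in enumerate(g):
            c[(i + j) % n] = (c[(i + j) % n] + mi * gj) % p
    return c

weights = Counter(sum(1 for coeff in encode(m) if coeff) for m in product(range(p), repeat=k))
print(dict(sorted(weights.items())))   # expected: {0: 1, 3: 7, 4: 7, 7: 1}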
Almost flat manifold

In mathematics, a smooth compact manifold M is called almost flat if for any $\varepsilon >0$ there is a Riemannian metric $g_{\varepsilon }$ on M such that ${\mbox{diam}}(M,g_{\varepsilon })\leq 1$ and $g_{\varepsilon }$ is $\varepsilon $-flat, i.e. for the sectional curvature $K_{g_{\varepsilon }}$ we have $|K_{g_{\varepsilon }}|<\varepsilon $.

Given n, there is a positive number $\varepsilon _{n}>0$ such that if an n-dimensional manifold admits an $\varepsilon _{n}$-flat metric with diameter $\leq 1$ then it is almost flat. On the other hand, one can fix the bound of sectional curvature and get the diameter going to zero, so the almost-flat manifold is a special case of a collapsing manifold, which is collapsing along all directions.

According to the Gromov–Ruh theorem, M is almost flat if and only if it is infranil. In particular, it is a finite factor of a nilmanifold, which is the total space of a principal torus bundle over a principal torus bundle over a torus.

References
• Hermann Karcher. Report on M. Gromov's almost flat manifolds. Séminaire Bourbaki (1978/79), Exp. No. 526, pp. 21–35, Lecture Notes in Math., 770, Springer, Berlin, 1980.
• Peter Buser and Hermann Karcher. Gromov's almost flat manifolds. Astérisque, 81. Société Mathématique de France, Paris, 1981. 148 pp.
• Peter Buser and Hermann Karcher. The Bieberbach case in Gromov's almost flat manifold theorem. Global differential geometry and global analysis (Berlin, 1979), pp. 82–93, Lecture Notes in Math., 838, Springer, Berlin-New York, 1981.
• Gromov, M. (1978), "Almost flat manifolds", Journal of Differential Geometry, 13 (2): 231–241, doi:10.4310/jdg/1214434488, MR 0540942.
• Ruh, Ernst A. (1982), "Almost flat manifolds", Journal of Differential Geometry, 17 (1): 1–14, doi:10.4310/jdg/1214436698, MR 0658470.
Institute of Mathematical Sciences (Spain)

The Institute of Mathematical Sciences (Spanish: Instituto de Ciencias Matemáticas – ICMAT) is a mixed institute affiliated to the Spanish National Research Council (CSIC) in partnership with three public universities: the Autonomous University of Madrid (UAM), the Charles III University of Madrid (UC3M) and the Complutense University of Madrid (UCM). Founded in 2010,[1] the ICMAT headquarters is located in the Cantoblanco Campus in Northern Madrid.

Institute of Mathematical Sciences (Instituto de Ciencias Matemáticas)
Abbreviation: ICMAT
Formation: 2010
Location: Cantoblanco Campus, Madrid, Spain
Fields: Mathematics
Parent organization: CSIC, UAM, UCM, UC3M

The ICMAT is composed of the mathematicians belonging to the CSIC and researchers from the three Madrid universities. The structure and composition of the center was based on an initial selection carried out with the assistance of the National Agency for Public Education (Agencia Nacional de Evaluación y Prospectiva – ANEP) after a public call issued to all interested parties.

Research at the ICMAT

The ICMAT is an institute where work on a broad range of mathematics is conducted, including the transfer of knowledge and results. Initially the main fields of research were Mathematical Analysis, Differential Geometry, Algebraic Geometry, Partial Differential Equations, Fluid Mechanics, Dynamical Systems, Geometric Mechanics and Mathematical Physics, with research lines in Number Theory, Group Theory and Combinatorics added in 2011.

Among the research results obtained by researchers from the Institute that have so far had the greatest impact, it is worth mentioning the solution to the Nash problem in Singularity Theory by Javier Fernández de Bobadilla and María Pe;[2] the solution to the Sidon problem in Number Theory by Javier Cilleruelo and Carlos Vinuesa;[3] the solution to the Arnold conjecture in Hydrodynamics by Daniel Peralta-Salas and Alberto Enciso;[4] and the construction of the mathematical model for explaining how water waves break by the research team led by Diego Córdoba.[5]

History of the ICMAT

Background to the ICMAT

The ICMAT emerged from the Department of Mathematics at the CSIC Institute of Mathematics and Fundamental Physics, to which mathematicians belonging to the CSIC were assigned. The agreement for the creation of the center was signed in November 2007, after assessment through the 2006–2009 Strategic Plan, to which all the centers connected with the CSIC are subject and in which the international commission recommended the establishment of a separate institute. A look back over the long history of the CSIC reveals what may be regarded as forerunners of the ICMAT: the Laboratorio Seminario Matemático of the Junta de Ampliación de Estudios (JAE), created in 1915, and the Instituto Jorge Juan de Matemáticas, created in 1939 by the CSIC.

The ICMAT, Severo Ochoa Center of Excellence

In 2011 the ICMAT was chosen as one of the eight centers of excellence in the Severo Ochoa Program call for submissions issued by the then Spanish Ministry of Science and Technology (MICYT). Among other distinctions, six ICMAT researchers have obtained European Research Council (ERC) Starting Grants, which places the Institute in the foremost position in Europe in mathematics.
Outreach at the ICMAT

Every year the ICMAT participates in the Science and Technology Week, the Science in Action competition (the ICMAT being one of the organizing institutions) and, since 2012, Researchers' Night (in collaboration with the UAM). Furthermore, the ICMAT has launched the Matemáticas en la Residencia program[6] (in collaboration with the Residencia de Estudiantes and the CSIC), in which international figures for the public understanding of science such as Marcus du Sautoy, Jesús María Sanz-Serna, Pierre Cartier, Guillermo Martínez, Edward Frenkel, Christiane Rousseau, Antonio Durán and John Allen Paulos have taken part, as well as the Graffiti and Maths competition,[7] which is aimed at secondary school students.

The blog Mathematics and its Frontiers[8] has an annual readership of 150,000, in addition to the presence of the Institute in social networks: Facebook[9] and Twitter.[10] In March 2013, the ICMAT officially became a Scientific Culture Unit recognized by the FECYT (Spanish Foundation for Science and Technology), and is the only mathematical institution to enjoy this distinction.

Location and facilities

ICMAT facilities
The ICMAT is located in a new building on the UAM campus and has one auditorium with a capacity for 270 people, a lecture hall that holds 140 and another that holds 80, as well as three more with a capacity of 50, 40 and 30 people, respectively. It also has a library of 1,100 square meters, a large computation area and offices for some 200 researchers. These facilities are ideal for the celebration of thematic programs, congresses, schools and seminars.

The ICMAT on the UAM+CSIC International Campus of Excellence

The ICMAT forms part of the Theoretical Physics and Mathematics strategic axis on the UAM+CSIC International Campus of Excellence.

References
1. "El Instituto de Ciencias Matemáticas, en la elite de la matemática europea". La Vanguardia. 24 May 2016.
2. de Bobadilla, Javier; Pe Pereira, María (1 November 2012). "The Nash problem for surfaces". Annals of Mathematics. 176 (3): 2003–2029. doi:10.4007/annals.2012.176.3.11. S2CID 53455905.
3. Cilleruelo, Javier; Ruzsa, Imre Z.; Vinuesa, Carlos (28 September 2009). "Generalized Sidon sets". arXiv:0909.5024.
4. Enciso, Alberto; Peralta-Salas, Daniel (1 January 2012). "Knots and links in steady solutions of the Euler equation". Annals of Mathematics. 175 (1): 345–367. doi:10.4007/annals.2012.175.1.9. S2CID 115180867.
5. Castro, Angel; Córdoba, Diego; Fefferman, Charles L.; Gancedo, Francisco; Gómez-Serrano, Javier (17 January 2012). "Splash singularity for water waves". Proceedings of the National Academy of Sciences. 109 (3): 733–738. arXiv:1106.2120. Bibcode:2012PNAS..109..733C. doi:10.1073/pnas.1115948108. PMC 3271900. PMID 22219372.
6. "Mathematics at the Residencia | Instituto de Ciencias Matemáticas". Icmat.es. Archived from the original on 2013-08-26. Retrieved 2013-08-15.
7. "Grafiti y Matemáticas | Instituto de Ciencias Matemáticas". Icmat.es. Archived from the original on 2013-08-26. Retrieved 2013-08-15.
8. "Matemáticas y sus fronteras". Madrimasd.org. Retrieved 2013-08-15.
9. "Instituto de Ciencias Matemáticas (ICMAT)". Facebook. Retrieved 2013-08-15.
10. "Inst Cc. Matemáticas (_ICMAT) on Twitter". Twitter.com. 2013-07-31. Retrieved 2013-08-15.
\begin{document}
\title{Resonant Multilevel Amplitude Damping Channels}
\author{Stefano Chessa} \orcid{0000-0003-2771-8330} \affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy} \affiliation{Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois, 61801, USA} \email{[email protected]}
\author{Vittorio Giovannetti} \orcid{0000-0002-7636-9002} \affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy}
\date{16 Jan 2023}
\begin{abstract} We introduce a new set of quantum channels: resonant multilevel amplitude damping (ReMAD) channels. Among other instances, they can describe energy dissipation effects in multilevel atomic systems induced by the interaction with a zero-temperature bosonic environment. At variance with the already known class of multilevel amplitude damping (MAD) channels, this new class of maps allows the presence of an environment unable to discriminate transitions with identical energy gaps. After characterizing the algebra of their composition rules, by analyzing the qutrit case, we show that this new set of channels can exhibit degradability and antidegradability in vast regions of the allowed parameter space. There we compute their quantum capacity and private classical capacity. We show that these capacities can be computed exactly also in regions of the parameter space where the channels are neither degradable nor antidegradable. \end{abstract} \maketitle \section{Introduction} While two-level quantum systems (qubits) represent the fundamental building block for any Quantum Information processing, there are indications that working with qudits (i.e. quantum systems with a Hilbert space of dimension $d>2$) may bring in some advantages, both in terms of communication and cryptography (see e.g. \cite{QUDIT_COMM} and references therein) and of computation (see e.g. \cite{QUDIT_COMP} and references therein), with qutrits having recently made their first appearance on commercial quantum devices \cite{QUTRIT_RIGETTI}. Despite this fact, the landscape of mathematical models describing the physical noises affecting these systems is still relatively unexplored, especially in terms of their associated information capacities; see \cite{QUDIT_DEP, QUDIT_PAULI, DEGRADABLE, d_arrigo_2007, MAD, PCDS, PLATYPUS, STR_DEG_CH, DET_POS_CAP} for results on basic noise models in higher dimensions. Capacities are figures of merit developed in the context of Quantum Shannon Theory \cite{HOLEVOBOOK, WILDEBOOK, WATROUSBOOK, HAYASHIBOOK, NC, ADV_Q_COMM, HOLEGIOV, SURVEY}, which allow one to quantitatively measure the level of deterioration that a given noise process induces on the quantum system it acts upon. A formal definition of such quantities is properly constructed in a specific communication scenario. There one describes the effect of the noise as an information loss during a signaling process that connects a sender (Alice), who controls the state of the system before the action of the noise, and a receiver (Bob), who instead has access to the deteriorated version of the qudit. In this context, depending on the type of messages one is considering (e.g. classical, private classical, or quantum) and on the type of side resources one allocates to the task (e.g. shared entanglement, two-way communication), the capacity of the noisy channel is defined. 
Formally it corresponds to the optimal rate which gauges the maximum number of bits, secret bits, or qubits that Alice can reliably transfer to Bob per use of the channel in the communication setting. Unfortunately, for the vast majority of models such optimal rates are computable neither analytically nor algorithmically \cite{Huang_2014, Cubitt2015, Elkouss2018} due to superadditivity and superactivation effects~\cite{SUPERADD, SUPERACT, SUPERADD_PRIV, SUPERADD_TRAD, SUPERADD_ENT}. In this sense the literature has evolved to find capacity bounds and to identify channel properties that can be leveraged in order to overcome the hurdles of the direct computation, or at least to provide meaningful upper bounds. Among others, concerning the unassisted quantum and private classical capacities, we find: degradability \cite{DEGRADABLE}, antidegradability \cite{ANTIDEGRADABLE}, weak degradability \cite{ANTIDEGRADABLE}, additive extensions \cite{ADD_EXT}, conjugate degradability \cite{CONJ_DEG}, less noisy or more capable channels \cite{LESS_N_MORE_C}, partial degradability \cite{PARTIAL_DEG}, approximate degradability \cite{APPROX_DEG}, teleportation-covariant channels \cite{PLOB}, unital channels \cite{UNITAL}, low noise approximations \cite{LOW_NOISE}. Accordingly, a corpus of literature is being built with the aim to produce efficiently computable bounds and approximations of these capacities, see e.g. \cite{7586115, 7807212, Christandl2017, 8482492, 2011.05949, Fang2021, 2202.11688, 2203.02127}. From this perspective, this paper approaches the evaluation of the quantum capacity $Q$ and the private classical capacity $C_{\text{p}}$ of a new class of qudit channels, providing their exact expression for a wide range of noise parameters. Specifically, we introduce a family of noisy transformations that mimic energy loss in a multilevel quantum system $\text{S}$ (e.g. an atom or a molecule coupled with a low-temperature e.m. field). Our construction builds on the Multilevel Amplitude Damping (MAD) proposal of \cite{MAD} where a $d$-dimensional generalization of the qubit Amplitude Damping Channel (ADC)~\cite{NC} is modeled as a collection of independent two-level processes that induce transitions from higher energy levels of the system to the lower ones. These transitions are mediated by an exchange interaction of the type $\sigma_{\text{S}}^-\otimes \sigma_{\text{E}}^+ + \sigma_{\text{S}}^+\otimes \sigma_{\text{E}}^-$ between $\text{S}$ and its environment $\text{E}$ ($\sigma_{\text{X}}^{\pm}$ representing raising and lowering operators for the system $\text{X}$). The major difference between the transformations we discuss here and the ones analyzed in \cite{MAD} is that we now allow for the possibility that the transitions of S involving identical energy gaps will couple with the same type of excitations in the environment. We dub this class of processes Resonant Multilevel Amplitude Damping (ReMAD) channels: they behave as usual MAD channels on the populations of each energy level of $\text{S}$, but exhibit different effects on the coherence terms of the system. Specifically, due to the peculiar nature of the selected $\text{S}-\text{E}$ coupling, under the action of a ReMAD channel the system $\text{S}$ will typically be slightly less prone to dephasing than under the action of a MAD channel characterized by the same transition probabilities. 
At the mathematical level this corresponds to a net reduction of the minimal number of Kraus operators~\cite{HOLEVOBOOK} required to describe the effect of the associated noise. This leads to some simplification in the characterization of their quantum capacities. What makes this noise model interesting is its simplicity and the fact that it can emerge in a variety of scenarios. To enumerate some of the ones relevant in quantum information processing and quantum communications we can mention: atomic systems in quantum memories and quantum repeaters \cite{Q_MEMORY_ATOMS,Q_REPEATERS_ATOMS,Q_NETWORK_ATOMS}, optical qudits transmitted through lines such as optical fibers with polarization-dependent losses \cite{POLARIZATION_LOSS}, optical qudits interacting with beamsplitters and qudits encoded in harmonic oscillators \cite{QUBIT_OSC,ENC_SPIN_OSC} (see \cite{BOS_CODES, BOS_CODES1} and references therein for applications and implementations of bosonic codes), and quantum computation and simulation with molecular spins (\cite{MOLECULAR_SPINS} and references therein).\\ The article is structured as follows: in Sec.~\ref{sec: Definition} we introduce ReMAD channels, their complementary channels and their composition rules; in Sec.~\ref{sec: Deg, Qcap&Prcap} we briefly review the issues of degradability, antidegradability and the computation of quantum and private classical capacities $Q$ and $C_{\text{p}}$; in Sec.~\ref{sec: Degradable chan} we provide an analysis of $Q$ and $C_{\text{p}}$ for qutrit ReMAD channels; conclusions are drawn in Sec.~\ref{sec: Conclusions}. \section{Definitions}\label{sec: Definition} Let $\mathcal{H}_{\text{S}}$ be the Hilbert space associated with a $d$-dimensional quantum system ${\text{S}}$ characterized by an energetically ordered canonical basis. This basis is represented by a collection of orthonormal levels $\{ \ket{0}_\text{S}, \ket{1}_\text{S}, \cdots, \ket{d-1}_\text{S}\}$ with $E_j < E_{j+1}$, where $E_j$ is the energy associated with the $j$-th level. In this context we can formally describe energy damping processes by specifying a lower-triangular $d\times d$ transition matrix \begin{equation}\label{transM} \Gamma:=\begin{pmatrix} \gamma_{0,0} & 0 & 0 &0 & \cdots & 0\\ \gamma_{1,0} & \gamma_{1,1} & 0 &0 &\cdots & 0\\ \gamma_{2,0} & \gamma_{2,1} & \gamma_{2,2} &0 & \cdots & 0 \\ \gamma_{3,0} & \gamma_{3,1} & \gamma_{3,2} & \gamma_{3,3}& \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \gamma_{d-1,0} & \gamma_{d-1,1} & \gamma_{d-1,2} & \gamma_{d-1,3} & \cdots & \gamma_{d-1,d-1} \end{pmatrix} ,\end{equation} whose elements $\gamma_{j,k}$ for $k\leq j\in\{0,\cdots, d-1\}$ are non-negative quantities that define the transition probabilities, i.e. the probability for the energy level $|j\rangle$ to be mapped into the lower energy level $|k\rangle$. Consistency requirements are such that \begin{align} \sum_{k=0}^{j}\gamma_{j,k}= 1 \; , \qquad \forall j\in\{ 0, 1,\cdots, d-1\} \;, \label{probcons} \end{align} that for $j\geq 1$ identify $\gamma_{j,j} = 1- \sum_{k=0}^{j-1}\gamma_{j,k}$ as the survival probability of the level $j$ (notice that by construction $\gamma_{0,0}=1$, $|0\rangle_{\text{S}}$ being a fixed point of the channel model). 
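As a simple numerical illustration (a minimal Python sketch with hypothetical parameter values, not part of the formal construction), one can assemble a $d=3$ transition matrix and verify the consistency requirement of Eq.~(\ref{probcons}):
\begin{verbatim}
# Minimal sketch: a d = 3 transition matrix Gamma built from the free
# parameters (gamma_10, gamma_21, gamma_20); example values only.
import numpy as np

def transition_matrix(g10, g21, g20):
    G = np.array([[1.0,  0.0,        0.0],
                  [g10,  1.0 - g10,  0.0],
                  [g20,  g21,        1.0 - g21 - g20]])
    assert np.all(G >= 0), "transition probabilities must be non-negative"
    assert np.allclose(G.sum(axis=1), 1.0), "Eq. (probcons): each row sums to one"
    return G

print(transition_matrix(g10=0.3, g21=0.2, g20=0.1))
\end{verbatim}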
The MAD channels introduced in Ref.~\cite{MAD} assign to each matrix $\Gamma$ a special Linear, Completely Positive, Trace Preserving (LCPTP) transformation $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ that, at the level of the Stinespring representation~\cite{STINE}, can be seen as a coupling with a zero-temperature external bath ${\text{B}}$ absorbing each individual energy jump $E_{j} \rightarrow E_{k}$ into a distinct (orthogonal) degree of freedom (see left panel of Fig.~\ref{fig: qutritMADvsReMAD}). \begin{figure} \caption{\textbf{Left:} depiction of the MAD excitation exchange between the system S and its environment E. \textbf{Right:} depiction of the ReMAD excitation exchange between the system S and the environment E; note the size of the environment. In both examples we set the total number of energy levels equal to $d=3$.} \label{fig: qutritMADvsReMAD} \end{figure} Specifically, given $\hat{\rho}\in\mathfrak{S}({\cal H}_{\text{S}})$ a generic density operator of ${\text{S}}$, we express its evolution under the action of $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ as \begin{equation} \label{STINE1} \Phi^{({\text{\tiny MAD}})}_{\Gamma}(\hat{\rho}) = \mbox{Tr}_{\text{B}}[ \hat{V}_{\Gamma} ( \hat{\rho} \otimes \ket{0}\!\!\bra{0}_\text{B} ) \hat{V}^\dag_{\Gamma}] \;, \end{equation} with $\ket{0}_\text{B}$ being the ground state of the bath, $\mbox{Tr}_{\text{B}}[\cdots]$ the partial trace over the environment, and $\hat{V}_{\Gamma}$ the unitary transformation that induces the mappings \begin{eqnarray}\nonumber \hat{V}_{\Gamma} \ket{0}_\text{S}\ket{0}_\text{B} &:=& \ket{0}_\text{S}\ket{0}_\text{B}\; , \\ \hat{V}_{\Gamma} \ket{j}_\text{S}\ket{0}_\text{B} &:=& \sqrt{\gamma_{j,j}}\ket{j}_\text{S}\ket{0}_\text{B} + \sum_{k=1}^j \sqrt{\gamma_{j,j-k}}\ket{j-k}_\text{S}\ket{j,k}_\text{B}\; , \qquad \forall j\in\{ 1,\cdots, d-1\} \label{defV} \;. \end{eqnarray} In the above expression, for $j\in \{1,\cdots, d-1\}$ and $k\in \{ 1,\cdots, j\}$, the kets $\ket{j,k}_\text{B}$ describe a collection of vectors that are mutually orthogonal and orthogonal to $\ket{0}_\text{B}$ as well. Since they represent the states where the environment stores the energy $E_j-E_{j-k}$ lost by $\text{S}$ when moving from level $\ket{j}_\text{S}$ to level $\ket{j-k}_\text{S}$, the construction in Eq.~(\ref{defV}) implicitly assumes that the environment of the model is capable of discriminating among the different energy jumps. This condition is physically realized e.g. when the energy spectrum of $\text{S}$ is composed of incommensurable levels. Notice also that this construction fixes the dimension of the Hilbert space $\mathcal{H}_{\text{B}}$ of $\text{B}$, and sets the minimum number of Kraus operators needed to represent $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ in the operator-sum representation~\cite{KRAUS} equal to $d(d-1)/2+1$, i.e. 
\begin{equation} \label{operatorsum} \Phi^{({\text{\tiny MAD}})}_{\Gamma}(\hat{\rho}) = \hat{M}_{\Gamma}^{(0)}\; \hat{\rho}\; \hat{M}_{\Gamma}^{(0)\dagger} + \sum_{j=1}^{d-1}\sum_{k=1}^j \hat{M}_{\Gamma}^{(j,k)} \; \hat{\rho} \; \hat{M}_{\Gamma}^{(j,k)\dagger} \; , \end{equation} with \begin{eqnarray}\label{eq: Kraus general MAD} \hat{M}_{\Gamma}^{(0)}&:=&{_{\text{B}}\!\bra{0}} \hat{V}_{\Gamma} \ket{0}_{\text{B}} = \sum_{l=0}^{d-1}\sqrt{\gamma_{l,l} } \ket{l}\!\!\bra{l}_{\text{S}}\; ,\nonumber \\ \hat{M}_{\Gamma}^{(j,k)} &:=&{_{\text{B}}\!\bra{j,k}} \hat{V}_{\Gamma} \ket{0}_{\text{B}} = \sqrt{\gamma_{j,j-k}}\; \ket{j-k}\!\!\bra{j}_{\text{S}} \; , \qquad \forall j\in\{ 1,\cdots, d-1\}, \forall k\in \{ 1, \cdots,j\} \;. \end{eqnarray} By introducing ReMAD channels $\Phi_{\Gamma}$ we now allow for the possibility that some of the transition events of $\text{S}$ will excite the same internal degrees of freedom of the bath, a condition which can be achieved e.g. when the energy levels of the system are equally spaced, i.e. $E_j =j \Delta E$ for all $j\in \{0,\cdots, d-1\}$. In this case we can replace the unitary coupling in Eq.~(\ref{defV}) with the new interaction \begin{eqnarray}\label{eq: transitions general} \hat{U}_{\Gamma} \ket{j}_\text{S}\ket{0}_\text{E} &:=& \sum_{k=0}^j \sqrt{\gamma_{j,j-k}}\ket{j-k}_\text{S}\ket{k}_\text{E}\; , \qquad \forall j\in\{ 0,\cdots, d-1\} \end{eqnarray} where $\{ \ket{0}_\text{E}, \ket{1}_\text{E}, \cdots, \ket{d-1}_\text{E}\}$ is a (possibly energetically ordered) orthonormal basis of the environment ${\text{E}}$. The resulting LCPTP transformation is hence obtained by replacing Eqs.~(\ref{STINE1}) and (\ref{operatorsum}) with \begin{equation} \label{REMAD} \Phi_{\Gamma}(\hat{\rho}) = \mbox{Tr}_{\text{E}}[ \hat{U}_{\Gamma} ( \hat{\rho} \otimes \ket{0}\!\!\bra{0}_\text{E} ) \hat{U}^\dag_{\Gamma}] = \sum_{i=0}^{d-1}\hat{K}_{\Gamma}^{(i)} \hat{\rho} \hat{K}_{\Gamma}^{(i)\dagger} \; , \end{equation} where \begin{eqnarray}\label{eq: Kraus general} \hat{K}_{\Gamma}^{(i)} &:=&{_{\text{E}}\!\bra{i}} \hat{U}_{\Gamma} \ket{0}_{\text{E}} = \sum_{l=0}^{d-i-1}\sqrt{\gamma_{i+l,l}}\; \ket{l}\!\!\bra{i+l}_{\text{S}} \; , \qquad \forall i\in\{ 0,\cdots, d-1\}\;, \end{eqnarray} is a Kraus set characterized by at most $d$ non-zero elements. It is easy to check that for $d=2$ the ReMAD channel $\Phi_{\Gamma}$ coincides with its corresponding MAD counterpart $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ (indeed both reduce to the conventional qubit ADC). One can also easily verify that for arbitrary $d$, both $\Phi_{\Gamma}$ and $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ produce the same diagonal output states for diagonal input states $\hat{\rho}^{(\text{diag})} := \sum_{j=0}^{d-1} \rho_{j,j} \ket{j}\!\!\bra{j}_{\text{S}}$, indeed \begin{equation} \Phi_{\Gamma} \left(\hat{\rho}^{(\text{diag})} \right)=\Phi^{({\text{\tiny MAD}})}_{\Gamma} \left(\hat{\rho}^{(\text{diag})} \right) = \sum_{j=0}^{d-1} \sum_{k=0}^{j} \rho_{j,j}\gamma_{j,k} \ket{k}\!\!\bra{k}_{\text{S}} \;. \label{DIACG} \end{equation} However for $d\geq 3$ the two sets of transformations have a rather different impact on the off-diagonal terms of the input state. Consider for instance the qutrit ($d=3$) scenario where, thanks to the constraint~(\ref{probcons}), the transition matrix $\Gamma$ can be parametrized by 3 non-negative terms, e.g. 
$\gamma_{10}$, $\gamma_{21}$, and $\gamma_{20}$, forming a 3-dimensional vector $(\gamma_{10},\gamma_{21},\gamma_{20})$ spanning the domain \begin{eqnarray}{\mathbb D}_3: = \{ (\gamma_{10},\gamma_{21},\gamma_{20})\in \mathbb{R}^{3}: \label{DOMAIN3} \gamma_{10},\gamma_{20}, \gamma_{21} \in[0,1] \; \mbox{and}\; \gamma_{20}+ \gamma_{21} \leq 1\} \;, \end{eqnarray} represented in Fig.~\ref{fig: BS_ADC} by the green rectangular right wedge delimited by the vertexes ${\bf A}$, ${\bf B}$, ${\bf C}$, ${\bf D}$, ${\bf E}$, and ${\bf F}$. In this case Eq.~(\ref{eq: transitions general}) reduces to \begin{align} \hat{U}_{\Gamma} \ket{0}_\text{S}\ket{0}_\text{E}&= \ket{0}_\text{S}\ket{0}_\text{E}\;, \nonumber \\ \hat{U}_{\Gamma} \ket{1}_\text{S}\ket{0}_\text{E}&= \sqrt{1-\gamma_{10}}\ket{1}_\text{S}\ket{0}_\text{E} + \sqrt{\gamma_{10}}\ket{0}_\text{S}\ket{1}_\text{E} \;, \nonumber \\ \hat{U}_{\Gamma} \ket{2}_\text{S}\ket{0}_\text{E}&= \sqrt{1-\gamma_{21} - \gamma_{20}}\ket{2}_\text{S}\ket{0}_\text{E}+ \sqrt{\gamma_{20}}\ket{0}_\text{S}\ket{2}_\text{E} + \sqrt{\gamma_{21}}\ket{1}_\text{S}\ket{1}_\text{E}\;, \end{align} while, identifying the canonical basis states $|0\rangle_{\text{S}}$, $|1\rangle_{\text{S}}$, and $|2\rangle_{\text{S}}$ with the column vectors $(1,0,0)^T$, $(0,1,0)^T$, and $(0,0,1)^T$, we can express the associated Kraus operators in Eq.~(\ref{eq: Kraus general}) as \begin{equation} \begin{split} &\hat{K}_{\Gamma}^{(0)} =\begin{pmatrix} 1 & 0 & 0\\ 0 & \sqrt{1-\gamma_{10}} & 0\\ 0 & 0 & \sqrt{1-\gamma_{21}-\gamma_{20}} \end{pmatrix} ,\quad \hat{K}_{\Gamma}^{(1)} =\begin{pmatrix} 0 & \sqrt{\gamma_{10}} & 0\\ 0 & 0 & \sqrt{\gamma_{21}}\\ 0 & 0 & 0 \end{pmatrix},\quad \hat{K}_{\Gamma}^{(2)} =\begin{pmatrix} 0 & 0 & \sqrt{\gamma_{20}}\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}. \end{split} \end{equation} Accordingly the action of $\Phi_{{\Gamma}}$ on a generic density matrix $\hat{\rho}$ of $\text{S}$ produces the output state of the form \begin{widetext} \begin{equation}\label{eq: channel action} \Phi_{{\Gamma}}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00} + \gamma_{10} \rho_{11}+\gamma_{20} \rho_{22} & \sqrt{1-\gamma_{10}} \rho_{01} + \sqrt{\gamma_{10} \gamma_{21}}\rho_{12} & \sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02} \\ \sqrt{1-\gamma_{10}} \rho_{01}^* + \sqrt{\gamma_{10} \gamma_{21}} \rho_{12}^* & (1-\gamma_{10}) \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})} \rho_{12} \\ \sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02}^* & \sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})}\rho_{12}^* & (1-\gamma_{21}-\gamma_{20}) \rho_{22} \\ \end{array} \right) \; , \end{equation} \end{widetext} to be compared with the associated transformation induced by the MAD counterpart \begin{widetext} \begin{equation}\label{eq: channel actionMAD} \hspace*{-0.3cm} \Phi^{({\text{\tiny MAD}})}_{\Gamma}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00} + \gamma_{10} \rho_{11}+\gamma_{20} \rho_{22} & \sqrt{1-\gamma_{10}} \rho_{01} & \sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02} \\ \sqrt{1-\gamma_{10}} \rho_{01}^* & (1-\gamma_{10}) \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})} \rho_{12} \\ \sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02}^* & \sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})}\rho_{12}^* & (1-\gamma_{21}-\gamma_{20}) \rho_{22} \\ \end{array} \right) \; , \end{equation} \end{widetext} with $\rho_{ij} := {_{\text{S}}\langle} i| \hat{\rho} |j\rangle_{\text{S}}$. 
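For concreteness, the qutrit maps above lend themselves to a direct numerical check. The following minimal Python sketch (hypothetical parameter values, not part of the formal treatment) builds the Kraus set of Eq.~(\ref{eq: Kraus general}) for $d=3$, verifies the completeness relation $\sum_i \hat{K}_{\Gamma}^{(i)\dagger}\hat{K}_{\Gamma}^{(i)}=\hat{I}$, and applies $\Phi_{\Gamma}$ of Eq.~(\ref{eq: channel action}) to a randomly drawn input state:
\begin{verbatim}
# Minimal sketch: qutrit ReMAD Kraus operators and channel action
# (cf. Eq. (eq: channel action)); example parameter values only.
import numpy as np

def remad_kraus(g10, g21, g20):
    K0 = np.diag([1.0, np.sqrt(1 - g10), np.sqrt(1 - g21 - g20)])
    K1 = np.zeros((3, 3)); K1[0, 1] = np.sqrt(g10); K1[1, 2] = np.sqrt(g21)
    K2 = np.zeros((3, 3)); K2[0, 2] = np.sqrt(g20)
    return [K0, K1, K2]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def random_state(d, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

kraus = remad_kraus(g10=0.3, g21=0.2, g20=0.1)
# trace preservation: sum_i K_i^dag K_i = identity
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(3))
rho_out = apply_channel(kraus, random_state(3))
assert np.isclose(np.trace(rho_out).real, 1.0)
print(np.round(rho_out, 4))
\end{verbatim}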
As one can observe, while on the diagonal elements in both cases we have the usual decay of populations predicted by Eq.~(\ref{DIACG}), the ReMAD output state in Eq.~(\ref{eq: channel action}) exhibits a transfer of coherence that mixes the terms $\rho_{12}$ and $\rho_{10}$, which is absent in the MAD output of Eq.~(\ref{eq: channel actionMAD}). It is also worth noticing that in case either $\gamma_{21}=0$ or $\gamma_{10}=0$ the qutrit map $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ and the qutrit map $\Phi_{\Gamma}$ describe the same physical process. As a final remark we notice that a particular example of a ReMAD channel is provided by the so-called \textit{beamsplitter type} ADC ${\Psi}_{\eta}$ introduced in Ref.~\cite{BS_ADC}. This subclass of ReMAD channels describes the evolution of a qudit encoded in the first $d$ states of the Fock basis of a harmonic oscillator passing through a beamsplitter of transmittance $\eta$. It is straightforward to verify that such mappings are a special instance of the ReMAD class characterized by a transition matrix $\Gamma[\eta]$ whose elements can be parametrized by the formula \begin{eqnarray} \gamma_{j,k}[\eta]:=\binom{j}{k}\eta^{j-k}(1-\eta)^k\;, \qquad \label{fdsf} \forall j\in \{ 0,\cdots, d-1\}\;, \forall k\in \{0,\cdots, j\} \;,\end{eqnarray} so that ${\Psi}_{\eta} = \Phi_{\Gamma[\eta]}$. For $d=3$ this corresponds to having $\gamma_{10}[\eta]=\eta$, $\gamma_{21}[\eta]= 2 \eta(1-\eta)$, and $\gamma_{20}[\eta]= \eta^2$: a plot of the parameter region spanned by the beamsplitter type ADC for $d=3$ is reported in Fig.~\ref{fig: BS_ADC}\;. \begin{figure} \caption{Plot of the domain ${\mathbb D}_3$ of Eq.~(\ref{DOMAIN3}) which identifies the set of ReMAD channels for $d=3$ (green region, the grey volume corresponds to the inaccessible parameter region); the beamsplitter type ADC of Ref.~\cite{BS_ADC} corresponds to the blue line that connects the vertexes ${\bf D}$ and ${\bf B}$. } \label{fig: BS_ADC} \end{figure} \subsection{Complementary maps} At variance with what happens with MAD channels, the complementary map~\cite{HOLEVOBOOK} $\tilde{\Phi}_{\Gamma}$ of a generic ReMAD transformation ${\Phi}_{\Gamma}$ is also a ReMAD channel (up to an isometry). This can be seen by recalling that $\tilde{\Phi}_{\Gamma}$ can be obtained from Eq.~(\ref{REMAD}) by replacing the partial trace over $\text{E}$ with a partial trace over $\text{S}$, i.e. \begin{equation} \label{REMADcomp} \tilde{\Phi}_{\Gamma}(\hat{\rho}) = \mbox{Tr}_{\text{S}}[ \hat{U}_{\Gamma} ( \hat{\rho} \otimes \ket{0}\!\!\bra{0}_\text{E} ) \hat{U}^\dag_{\Gamma}] = \sum_{i=0}^{d-1}\hat{Q}_{\Gamma}^{(i)} \hat{\rho} \hat{Q}_{\Gamma}^{(i)\dagger} \; , \end{equation} where now $\hat{Q}_{\Gamma}^{(i)}:{\cal H}_{\text{S}} \rightarrow {\cal H}_{\text{E}}$ are the operators \begin{eqnarray}\label{eq: Kraus general complementary} \hat{Q}_{\Gamma}^{(i)} &:=&{_{\text{S}}\!\bra{i}} \hat{U}_{\Gamma} \ket{0}_{\text{E}} = \sum_{l=0}^{d-i-1}\sqrt{\gamma_{i+l,i}}\; \ket{l}_{\text{E}}\, {_{\text{S}}\!\bra{i+l}} \; , \qquad \forall i\in\{ 0,\cdots, d-1\}\;. \end{eqnarray} Notice next that, up to an isometry $\hat{V}_{SE}$ mapping the energy levels of E into the corresponding levels of S (i.e. $\hat{V}_{SE} |l\rangle_E = |l\rangle_S$, $\forall l$), $\hat{Q}_{\Gamma}^{(i)}$ has exactly the same form as the operators in Eq.~(\ref{eq: Kraus general}) computed with a new lower-triangular transition matrix $\tilde{\Gamma}$. 
The non-zero elements of $\tilde{\Gamma}$, row by row, are obtained by a simple reordering of those of $\Gamma$, i.e. \begin{eqnarray} \label{Defgammadelta} \tilde{\gamma}_{j,k} := \gamma_{j,j-k} \;, \qquad \forall j\in\{ 0,\cdots, d-1\}, \forall k\in \{ 0, \cdots,j\}\;. \end{eqnarray} Specifically we can write \begin{eqnarray} \hat{V}_{SE} \hat{Q}_{\Gamma}^{(i)} = \hat{K}_{\tilde{\Gamma}}^{(i)} \;, \qquad \forall i\in\{ 0,\cdots, d-1\}\;, \end{eqnarray} and hence \begin{eqnarray}\label{impoIDE} \hat{V}_{SE} \tilde{\Phi}_{\Gamma}(\cdots) \hat{V}^\dag_{SE}= {\Phi}_{\tilde{\Gamma}}(\cdots)\;. \end{eqnarray} Again, as an example we provide the explicit form of the complementary channel $\tilde{\Phi}_{{\Gamma}}$ of the qutrit ReMAD map $\Phi_{\Gamma}$ of Eq.~(\ref{eq: channel action}), i.e. \begin{equation}\label{eq: compl channel action} \tilde{\Phi}_{{\Gamma}}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00}+(1-\gamma_{10}) \rho_{11}+ (1-\gamma_{21}-\gamma_{20})\rho_{22} & \sqrt{\gamma_{10}} \rho_{01} + \sqrt{(1-\gamma_{10}) \gamma_{21}}\rho_{12} & \sqrt{\gamma_{20}} \rho_{02} \\ \sqrt{\gamma_{10}} \rho_{01}^* +\sqrt{(1-\gamma_{10}) \gamma_{21}} \rho_{12}^*& \gamma_{10} \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{\gamma_{10} \gamma_{20}} \rho_{12}\\ \sqrt{\gamma_{20}} \rho_{02}^* & \sqrt{\gamma_{10} \gamma_{20}} \rho_{12}^* & \gamma_{20} \rho_{22} \\ \end{array} \right) \; . \end{equation} Notice finally that, as a consequence of Eq.~(\ref{Defgammadelta}), it follows that in the case of a beamsplitter type ADC ${\Psi}_{\eta}$ defined by Eq.~(\ref{fdsf}), one gets $\tilde{\gamma}_{j,k}[\eta]={\gamma}_{j,k}[1-\eta]$. This, thanks to Eq.~(\ref{impoIDE}), allows us to recover the well-known fact that the complementary map $\tilde{\Psi}_{\eta}$ of ${\Psi}_{\eta}$ is isometrically equivalent to ${\Psi}_{1-\eta}$. \subsection{Composition rules}\label{sec: Composition rules} In Ref.~\cite{MAD} it was shown that MAD channels are closed under composition, a property that proves useful in studying their information capacities. In this section we investigate whether ReMAD channels behave similarly. Perhaps surprisingly, it turns out that this is not always the case. To begin with, let us observe that from the composition rules of beamsplitters one can easily verify that the following identity holds true \begin{eqnarray}\label{dfdsfa} \Phi_{\Gamma[\eta']} \circ \Phi_{\Gamma[\eta]} = \Phi_{\Gamma[\eta'\eta]} \;, \qquad \forall \eta',\eta\in[0,1]\;, \end{eqnarray} (the symbol ``$\circ$" represents channel composition). Apart from this special case, the analysis is more involved. We therefore focus on the simplest nontrivial case of $d=3$ qutrit systems defined by the input-output relations of Eq.~(\ref{eq: channel action}). 
By explicit computation it follows that, for $\Gamma$ and $\Gamma'$ two $3\times 3$ transition matrices as in Eq.~(\ref{transM}), we have \begin{align} &[\Phi_{\Gamma'}\circ \Phi_{\Gamma}(\hat{\rho})]_{00}=\rho_{00} + [\gamma_{10}+\gamma'_{10}(1-\gamma_{10})] \rho_{11}+ [\gamma_{20}+\gamma'_{10}\gamma_{21}+\gamma'_{20}(1-\gamma_{21}-\gamma_{20})] \rho_{22} \; , \nonumber \\ &[\Phi_{{\Gamma'}}\circ \Phi_{\Gamma}(\hat{\rho})]_{01}=\sqrt{1-\gamma'_{10}}\sqrt{1-\gamma_{10}} \rho_{01} + (\sqrt{\gamma'_{10} \gamma'_{21}(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})}+ \sqrt{\gamma_{10} \gamma_{21} (1-\gamma'_{10})}) \rho_{12} \; , \nonumber \\ &[\Phi_{{\Gamma'}}\circ \Phi_{\Gamma}(\hat{\rho})]_{02}=\sqrt{1-\gamma'_{21}-\gamma'_{20}}\sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02} \; , \nonumber \\ &[\Phi_{{\Gamma'}}\circ \Phi_{\Gamma}(\hat{\rho})]_{11}=(1-\gamma'_{10})(1-\gamma_{10}) \rho_{11}+[\gamma'_{21}(1-\gamma_{21}-\gamma_{20})+(1-\gamma'_{10})\gamma_{21} ] \rho_{22} \; , \nonumber \\ &[\Phi_{{\Gamma'}}\circ \Phi_{\Gamma}(\hat{\rho})]_{12}=\sqrt{(1-\gamma'_{10}) (1-\gamma'_{21}-\gamma'_{20})}\sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})} \rho_{12} \; , \nonumber \\ &[\Phi_{{\Gamma'}}\circ \Phi_{\Gamma}(\hat{\rho})]_{22}=(1-\gamma'_{21}-\gamma'_{20})(1-\gamma_{21}-\gamma_{20}) \rho_{22} \; , \end{align} plus the Hermitian conjugates of the off-diagonal elements. We are interested in understanding under which conditions a new transition matrix ${\Gamma''}$ of elements $\gamma''_{j,k}$ exists such that \begin{eqnarray} \label{dfdsfxxx} \Phi_{\Gamma''}=\Phi_{{\Gamma'}}\circ \Phi_{\Gamma}\;. \end{eqnarray} Among the equations above, those referring to elements 00, 02, 11, 12, 22 are all consistent with setting \begin{align}\label{eq: delta params} &\gamma''_{10}=\gamma_{10}+\gamma'_{10}(1-\gamma_{10}) \; , \nonumber \\ &\gamma''_{20}=\gamma_{20}+\gamma'_{10}\gamma_{21} + \gamma'_{20}(1-\gamma_{21}-\gamma_{20}) \; , \nonumber \\ &\gamma''_{21}=(1-\gamma'_{10})\gamma_{21}+\gamma'_{21}(1-\gamma_{21}-\gamma_{20}) \; . \end{align} (observe that such definitions are compatible with the requirement that $\gamma''_{j,k}\in [0,1]$, as well as with the normalization constraint $\sum_{k=0}^{j}\gamma''_{j,k}= 1$ for all $j$). Element 01 instead forces an additional constraint, i.e. \begin{equation}\label{eq: parameters constraint} \sqrt{\gamma''_{21}\gamma''_{10}}=\sqrt{\gamma'_{10} \gamma'_{21}(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})}+ \sqrt{\gamma_{10} \gamma_{21} (1-\gamma'_{10})} \; , \end{equation} which is not necessarily granted. As a matter of fact, by substituting in Eq.~(\ref{eq: parameters constraint}) the values for $\gamma''_{21}$ and $\gamma''_{10}$ obtained in Eq.~(\ref{eq: delta params}), we get that in order to satisfy it we need \begin{equation}\label{eq: comp constraint} \gamma_{10}\gamma'_{21}(1-\gamma_{21}-\gamma_{20})=\gamma_{21}\gamma'_{10}(1-\gamma_{10})(1-\gamma'_{10}) \; . \end{equation} This identifies a specific region for the parameters $\Gamma$ and ${\Gamma'}$ where the composition $\Phi_{\Gamma'}\circ \Phi_{\Gamma}$ corresponds to a ReMAD channel. Notice in particular that, in agreement with Eq.~(\ref{dfdsfa}), the identity in Eq.~(\ref{eq: comp constraint}) is always fulfilled for beamsplitter type ADC, i.e. for $\Gamma=\Gamma[\eta]$ and $\Gamma'=\Gamma[\eta']$. Furthermore we observe that if $\Gamma'$ is such that either $\gamma_{21}'=\gamma'_{10} =0$ or $\gamma_{21}'=1-\gamma'_{10} =0$, then Eq.~(\ref{eq: comp constraint}) is always satisfied for all the choices of $\gamma'_{20}$. 
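Before specializing to these configurations, the composition analysis admits a quick numerical sanity check. The following minimal Python sketch (hypothetical parameter values; the helper functions are repeated from the earlier sketch for self-containedness) picks $\Gamma$ and $\Gamma'$ satisfying Eq.~(\ref{eq: comp constraint}), builds $\Gamma''$ from Eq.~(\ref{eq: delta params}), and verifies Eq.~(\ref{dfdsfxxx}) on a random input state:
\begin{verbatim}
# Minimal sketch: Phi_{Gamma'} o Phi_{Gamma} is again a ReMAD channel when
# Eq. (eq: comp constraint) holds; example parameter values only.
import numpy as np

def remad_kraus(g10, g21, g20):
    K0 = np.diag([1.0, np.sqrt(1 - g10), np.sqrt(1 - g21 - g20)])
    K1 = np.zeros((3, 3)); K1[0, 1] = np.sqrt(g10); K1[1, 2] = np.sqrt(g21)
    K2 = np.zeros((3, 3)); K2[0, 2] = np.sqrt(g20)
    return [K0, K1, K2]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

g10, g21, g20, gp10, gp20 = 0.3, 0.2, 0.1, 0.4, 0.1
# choose gamma'_21 so that Eq. (eq: comp constraint) is satisfied
gp21 = g21 * gp10 * (1 - g10) * (1 - gp10) / (g10 * (1 - g21 - g20))
# Gamma'' from Eq. (eq: delta params)
gpp10 = g10 + gp10 * (1 - g10)
gpp20 = g20 + gp10 * g21 + gp20 * (1 - g21 - g20)
gpp21 = (1 - gp10) * g21 + gp21 * (1 - g21 - g20)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho = rho / np.trace(rho)

composed = apply_channel(remad_kraus(gp10, gp21, gp20),
                         apply_channel(remad_kraus(g10, g21, g20), rho))
direct = apply_channel(remad_kraus(gpp10, gpp21, gpp20), rho)
assert np.allclose(composed, direct)   # Eq. (dfdsfxxx) holds for these parameters
\end{verbatim}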
Of particular interest for our analysis is the first of these two configurations: here $\Phi_{{\Gamma'}}$ describes a ReMAD channel where the first two levels of S are untouched by the noise, while the third level gets mapped into the ground state with probability $\gamma'_{20}$. Looking at Eq.~(\ref{eq: delta params}) we notice that the transition probabilities $\gamma''_{10}$ and $\gamma''_{21}$ of the ReMAD channel $\Phi_{\Gamma''}$, which emerges from the concatenation, coincide with those of $\Phi_{\Gamma}$ (i.e. $\gamma''_{10}=\gamma_{10}$ and $\gamma''_{21}=\gamma_{21}$), while the transition probability $\gamma''_{20}$ increases with respect to $\gamma_{20}$. Specifically $\gamma''_{20} = \gamma_{20}+ \gamma'_{20}(1-\gamma_{21}-\gamma_{20})$. Observing this, by varying $\gamma'_{20}\in [0,1]$, the last quantity can span the entire interval $[\gamma_{20},1-\gamma_{21}]$. We can use this observation to claim that if $\Phi_{\Gamma}$ and $\Phi_{\Gamma''}$ are two qutrit ReMAD channels having the same transition probabilities connecting level 2 to level 1 and level 1 to level 0, but with $\gamma''_{20}$ larger than or equal to $\gamma_{20}$, then they can be connected via a third ReMAD channel as in Eq.~(\ref{dfdsfxxx}). \subsection{Covariance}\label{sec: Appendix averaged covariance} The ReMAD channels exhibit a covariance property under suitable unitary transformations. Specifically consider the set of unitary gates \begin{eqnarray} \hat{U}_S(\theta) = \sum_{j=0}^{d-1} e^{-i j \theta} \ket{j}\!\!\bra{j}_{\text{S}} \;, \\ \hat{U}_E(\theta) = \sum_{j=0}^{d-1} e^{-i j \theta} \ket{j}\!\!\bra{j}_{\text{E}} \;, \end{eqnarray} with $\theta$ real. It then follows that \begin{equation} \label{COVAR} {\Phi}_{\Gamma} (\hat{U}_{\text{S}}(\theta) \hat{\rho} \hat{U}_{\text{S}}^{\dagger} (\theta))= \sum_{k=0}^{d-1} \sum_{j,j'=k}^{d-1}\; \rho_{j,j'} \; e^{-i (j-j') \theta} \sqrt{\gamma_{j,j-k} \gamma_{j',j'-k} } \; \ket{j-k}\!\!\bra{j'-k}_{\text{S}}= \hat{U}_{\text{S}}(\theta) {\Phi}_{\Gamma}( \hat{\rho}) \hat{U}_{\text{S}}^{\dagger} (\theta)\;, \end{equation} and similarly for the complementary channel $\tilde{\Phi}$ we get \begin{equation}\label{eq:compl covariance} \tilde{\Phi}_{\Gamma}(\hat{U}_{\text{S}}(\theta) \hat{\rho} \hat{U}_{\text{S}}^{\dagger} (\theta))= \sum_{k=0}^{d-1} \sum_{j,j'=k}^{d-1}\; \rho_{j,j'} \; e^{-i (j-j') \theta} \sqrt{\tilde{\gamma}_{j,j-k} \tilde{\gamma}_{j',j'-k} } \; \ket{j-k}\!\!\bra{j'-k}_{\text{E}}= \hat{U}_{\text{E}}(\theta) \tilde{\Phi}_{\Gamma}( \hat{\rho}) \hat{U}_{\text{E}}^{\dagger} (\theta)\;. \end{equation} As we will see in the following, the identities above turn out to be extremely useful in the computation of the capacities of our channels. \section{Review of quantum and private classical capacities}\label{sec: Deg, Qcap&Prcap} This section is dedicated to reviewing some basic notions about channel capacities for those readers who may not be familiar with these quantities. While the plethora of capacities for quantum channels includes a large collection of different functionals, our analysis will be focused on the quantum capacity $Q$ and the private classical capacity $C_{\text{p}}$. \subsection{Quantum Capacity and Private Classical Capacity} The quantum capacity $Q(\Phi)$ of a quantum channel $\Phi$ defines the maximum rate of transmitted quantum information achievable per channel use, assuming $\Phi$ to act in the regime of i.i.d. noise \cite{HOLEVOBOOK,WILDEBOOK,WATROUSBOOK,HAYASHIBOOK,HOLEGIOV,NC,ADV_Q_COMM,SURVEY}. 
Intuitively, it quantifies how faithfully a quantum state, possibly correlated with an external system, can be sent and received by two communicating parties when the communication line is noisy. Recently $Q$ has also been shown to provide a lower bound to the space overhead necessary for fault-tolerant quantum computation in the presence of noise \cite{Q_CAP_OVER}. The formal definition of $Q$ relies on the coherent information $I_{\text{coh}}$ \cite{COH_INFO}. Assuming $n$ uses of the channel $\Phi$ and a generic state $\hat{\rho}^{(n)}\in \mathfrak{S}({\cal H}_{\text{S}}^{\otimes n})$ we have \begin{equation}\label{eq:coherent info} I_{\text{coh}} \left(\Phi^{\otimes n},\hat{\rho}^{(n)} \right):= S \left( \Phi^{\otimes n}(\hat{\rho}^{(n)}) \right)-S \left( \tilde{\Phi}^{\otimes n}(\hat{\rho}^{(n)}) \right), \end{equation} with $S(\hat{\rho}):= -\mbox{Tr}[ \hat{\rho} \log_2 \hat{\rho}]$ the von Neumann entropy of the state $\hat{\rho}$, and $\tilde{\Phi}$ the complementary channel of $\Phi$. Maximizing over $\mathfrak{S}({\cal H}_{\text{S}}^{\otimes n})$ we get the maximized coherent information $Q^{(n)}$ for $n$ instances of the channel \begin{equation} Q^{(n)}(\Phi):= \max_{\hat{\rho}^{(n)}\in {\mathfrak{S}({\cal H}_{\text{S}}^{\otimes n})}} I_{\text{coh}} \left( \Phi^{\otimes n},\hat{\rho}^{(n)} \right) \; , \end{equation} and by regularization of this expression we get the maximized coherent information per channel use, that is, the quantum capacity $Q(\Phi)$ \cite{QCAP1,QCAP2,QCAP3} \begin{equation}\label{eq:QCapacity} Q(\Phi)=\lim_{n\rightarrow \infty} \frac{Q^{(n)}(\Phi)}{n} \; . \end{equation} The private classical capacity $C_{\text{p}}(\Phi)$ quantifies the maximum rate of classical information achievable per channel use assuming also the privacy of the communication (privacy here meaning that the amount of information an eavesdropper can extract from the environment during the communication is kept arbitrarily small). The fundamental tool needed to compute $C_{\text{p}}(\Phi)$ is the Holevo information functional $\chi$ \cite{HOL_INFO}. Given ${\cal E}_n := \{ p_i , \hat{\rho}^{(n)}_i\}$, an ensemble of quantum states $\hat{\rho}^{(n)}_i \in \mathfrak{S}({\cal H}_{\text{S}}^{\otimes n})$, we have \begin{equation} \chi(\Phi^{\otimes n}, {\cal E}_n) := S\left( \Phi^{\otimes n} \left(\sum_i p_i \hat{\rho}_i^{(n)} \right) \right)-\sum_i p_i S\left( \Phi^{\otimes n} \left( \hat{\rho}_i^{(n)} \right) \right) \; . \end{equation} Through the Holevo information we next define the private information for $n$ uses, $C_{\text{p}}^{(n)}(\Phi)$, which involves a maximization over all ensembles ${\cal E}_n$ \begin{equation} C_{\text{p}}^{(n)}(\Phi) := \max_{{\cal E}_n} \left( \chi \left( \Phi^{\otimes n}, {\cal E}_n \right) - \chi \left( \tilde{\Phi}^{\otimes n}, {\cal E}_n \right) \right) \; , \end{equation} from which finally $C_{\text{p}}(\Phi)$ can be computed via regularization over $n$, i.e. \cite{QCAP3, PRIV2}: \begin{equation}\label{eq:private class} C_{\text{p}}(\Phi)=\lim_{n\to \infty} \frac{C_{\text{p}}^{(n)}(\Phi)}{n} \; . \end{equation} We conclude our brief review by recalling that $Q$ and $C_{\text{p}}$ obey data processing inequalities~\cite{WILDEBOOK}. This means that for {\it any} channel $\Phi$ that can be expressed as a composition of two other LCPTP maps $\Phi_1$ and $\Phi_2$ (i.e. 
$\Phi= \Phi_1\circ \Phi_2$) it follows that \begin{eqnarray} &Q(\Phi) \leq \min\{ Q(\Phi_1), Q(\Phi_2)\}\label{data} \;, \\ &C_{\text{p}}(\Phi) \leq \min\{ C_{\text{p}}(\Phi_1), C_{\text{p}}(\Phi_2)\} \; . \end{eqnarray} \subsection{Degradability and antidegradability} \label{sec: Appendix degradable} The need for regularization in the evaluation of $Q(\Phi)$ and $C_{\text{p}}(\Phi)$ poses a well-known problem, which ultimately is the underlying motivation for our efforts here. An exception to this predicament is given by degradable \cite{DEGRADABLE} and antidegradable \cite{ANTIDEGRADABLE} channels, whose definitions and properties are reviewed here. A quantum channel $\Phi:\mathcal{L}(\mathcal{H}_{\text{S}})\rightarrow \mathcal{L}(\mathcal{H}_{\text{S}'})$ is said to be degradable if an LCPTP map $\mathcal{D}:\mathcal{L}(\mathcal{H}_{\text{S}'})\rightarrow \mathcal{L}(\mathcal{H}_{\text{E}})$ exists s.t. \begin{equation}\label{eq:degradable} \tilde{\Phi}=\mathcal{D}\circ \Phi \; , \end{equation} while it is said to be antidegradable if there exists an LCPTP map $\mathcal{A}:\mathcal{L}(\mathcal{H}_{\text{E}})\rightarrow \mathcal{L}(\mathcal{H}_{\text{S}'})$ s.t. \begin{equation}\label{eq:antidegradable} \Phi=\mathcal{A}\circ\tilde{\Phi} \; . \end{equation} In case $\Phi$ is mathematically invertible, a simple direct way to determine whether it is degradable or not is to formally invert the composition in Eq.~(\ref{eq:degradable}). This is done (if possible) by constructing the super-operator $\mathcal{D}=\tilde{\Phi}\circ \Phi^{-1}$ and by checking whether such an object is LCPTP \cite{INVERSE1,INVERSE}. The check can be performed by studying the positivity of its associated Choi matrix $\text{C}_{\mathcal{D}}$, i.e. \begin{equation} \label{NECANDSUFF} \mbox{$\Phi$ invertible} \Longrightarrow \mbox{$\Phi$ degradable iff $\text{C}_{\mathcal{D}}\geq 0$} \;, \end{equation} where given $\ket{\Gamma}_{\text{RS}}=\sum_{i=0}^{d-1}\ket{i}_\text{R}\ket{i}_\text{S}$ we set $\text{C}_{\mathcal{D}}:=(I_\text{R}\otimes \mathcal{D}_{\text{S}})\kb{\Gamma}{\Gamma}_{\text{RS}}$ (see App.~\ref{sec:channelinversion} for details). A completely similar argument can be made for antidegradability: in this case we can claim \begin{equation} \label{NECANDSUFFanti} \mbox{$\tilde{\Phi}$ invertible} \Longrightarrow \mbox{$\Phi$ antidegradable iff $\text{C}_{\mathcal{A}}\geq 0$} \;, \end{equation} where now $\text{C}_{\mathcal{A}}$ is the Choi operator of the map $\mathcal{A}={\Phi}\circ \tilde{\Phi}^{-1}$. As already mentioned, the evaluation of the quantum and private classical capacities simplifies for degradable channels. To begin with, in this case $Q$ and $C_{\text{p}}$ turn out to be additive, so the regularization over $n$ in Eq.~(\ref{eq:QCapacity}) is not needed, leading to the single-letter formula~\cite{PRIV4} \begin{equation}\label{eq:singLett} C_{\text{p}}(\Phi)=Q(\Phi)= Q^{(1)}(\Phi) :=\max_{\hat{\rho} \in {\mathfrak{S}({\cal H}_{\text{S}})}} I_{\text{coh}} \left( \Phi,\hat{\rho} \right) \;. \end{equation} Another important simplification arises from the fact that under degradability conditions the coherent information functional is concave w.r.t. the input state \cite{CONCAVITY}. Accordingly, if $\Phi$ and $\tilde{\Phi}$ turn out to be covariant under the action of some unitary group, the maximization in Eq.~(\ref{eq:singLett}) can be further simplified. 
In the case of ReMAD channels where the properties (\ref{COVAR}) and (\ref{eq:compl covariance}) hold true, under degradability conditions we can write \begin{eqnarray}\label{eq: coh info concave} I_{\text{coh}}\left( \Phi_{\Gamma} , \int \frac{d \theta }{2\pi} \hat{U}_S(\theta) \hat{\rho}\hat{U}^\dag_S(\theta) \right)&\geq& \int \frac{d \theta }{2\pi} I_{\text{coh}}\left( \Phi_{\Gamma} ,\hat{U}_S(\theta) \hat{\rho}\hat{U}^\dag_S(\theta) \right) \nonumber\\ &=&\int \frac{d \theta }{2\pi} S( \Phi_{\Gamma} (\hat{U}_S(\theta) \hat{\rho}\hat{U}^\dag_S(\theta) ))- S( \tilde{\Phi}_{\Gamma} (\hat{U}_S(\theta) \hat{\rho}\hat{U}^\dag_S(\theta) )) \nonumber \\ &=& I_{\text{coh}}\left( \Phi_{\Gamma} , \hat{\rho} \right)\;, \end{eqnarray} where in the last equality we made use of the invariance of the von Neumann entropy under unitary transformations. Observing then that \begin{eqnarray} \int \frac{d \theta }{2\pi} \hat{U}_S(\theta) \hat{\rho}\hat{U}^\dag_S(\theta) = \hat{\rho}^{(\text{diag})} :=\sum_{j=0}^{d-1} \rho_{j,j} \; \ket{j}_{\text{S}}\!\bra{j}\;, \end{eqnarray} we can conclude that for degradable ReMAD channels the maximization in Eq.~(\ref{eq:singLett}) can be restricted to the set of states $\hat{\rho}^{(\text{diag})}$ which are diagonal in the canonical basis, i.e. \begin{equation}\label{eq:singLettsimp} C_{\text{p}}(\Phi_{\Gamma})=Q(\Phi_{\Gamma})= Q^{(1)}(\Phi_{\Gamma}) =\max_{\hat{\rho}^{(\text{diag})} \in {\mathfrak{S}({\cal H}_{\text{S}})}} I_{\text{coh}} \left( \Phi_{\Gamma},\hat{\rho}^{(\text{diag})} \right) \;. \end{equation} For antidegradable channels instead, due to a no-cloning argument \cite{ERAS_NO_CLO}, $Q(\Phi)=0$. Similarly, $C_{\text{p}}(\Phi)=0$: the environment can reconstruct the channel output simply by applying the antidegrading channel, so no private information can be transmitted. Therefore for channels exhibiting antidegradability no maximizations are needed. \section{Degradability and antidegradability regions for $d=3$ ReMAD channels}\label{sec: Degradable chan} In this section we study the degradability/antidegradability properties of qutrit ReMAD channels. Mimicking the approach used for other amplitude damping channels, we tackle the problem working under the heuristic assumption that if a degrading (or antidegrading) channel of a ReMAD channel exists, it is itself a ReMAD channel. While this choice is potentially suboptimal, numerical tests based on the more rigorous (but analytically impractical) matrix-inversion method discussed in Sec.~\ref{sec: Appendix degradable} reveal that this is not the case. Recalling the isometric connection given in Eq.~(\ref{impoIDE}), we hence translate the degradability condition on $\Phi_{\Gamma}$ into the problem of identifying a transition matrix ${\Gamma'}$ s.t. \begin{eqnarray} {\Phi}_{\tilde{\Gamma}}=\Phi_{{\Gamma'}}\circ\Phi_{\Gamma}\;. 
\end{eqnarray} With a procedure similar to the one employed in Sec.~\ref{sec: Composition rules}, this equation can be mapped into the following constraints \begin{align}\label{eq: delta params degradability} &\tilde{\gamma}_{10}=\gamma_{10}+\gamma'_{10}(1-\gamma_{10}) \; , \nonumber \\ &\tilde{\gamma}_{20}=\gamma_{20}+\gamma'_{10}\gamma_{21} + \gamma'_{20}(1-\gamma_{21}-\gamma_{20}) \; , \nonumber \\ &\tilde{\gamma}_{21}=(1-\gamma'_{10})\gamma_{21}+\gamma'_{21}(1-\gamma_{21}-\gamma_{20}) \; , \end{align} which, using the connection in Eq.~(\ref{Defgammadelta}) to replace $\tilde{\gamma}_{10}=\gamma_{11}= 1-\gamma_{10}$, $\tilde{\gamma}_{20}=\gamma_{22}= 1-\gamma_{20}-\gamma_{21}$, and $\tilde{\gamma}_{21}=\gamma_{21}$, lead to \begin{align} &\gamma'_{10} =\frac{1-2\gamma_{10}}{1-\gamma_{10}} \; , \nonumber \\ &\gamma'_{21}=\frac{1-2\gamma_{10}}{1-\gamma_{10}}\frac{\gamma_{21}}{1-\gamma_{21}-\gamma_{20}} \; , \nonumber \\ &\gamma'_{20}=\frac{1-\gamma_{21}-2\gamma_{20}}{1-\gamma_{21}-\gamma_{20}}-\frac{\gamma_{21}}{1-\gamma_{21}-\gamma_{20}}\frac{1-2\gamma_{10}}{1-\gamma_{10}} \; . \end{align} Imposing now that the vector $(\gamma'_{10},\gamma'_{21},\gamma'_{20})$ belongs to the domain ${\mathbb D}_3$ of Eq.~(\ref{DOMAIN3}), we get the degradability region, which we depict as the yellow region of Fig.~\ref{fig: degr and antidegr}. In such a region we can compute exactly the value of $Q(\Phi_{\Gamma})$ and $C_{\text{p}}(\Phi_{\Gamma})$ using the single-letter formula in Eq.~(\ref{eq:singLettsimp}), which only involves diagonal input matrices. Results of such optimization are reported in Fig.~\ref{fig: Q capacity} for different values of the parameters $\gamma_{10},$ $\gamma_{21}$ and $\gamma_{20}$. In the cases of $\gamma_{21}=0$ and $\gamma_{10}=0$ the obtained values were already known in the whole resulting parameter space (including some non-degradable regions), since there ReMAD channels reduce to double-decay MAD channels, already studied in \cite{MAD}. \begin{figure} \caption{Degradability region (yellow) and antidegradability region (blue) for a qutrit ReMAD channel, the grey volume representing the non-physical region of the parameter space (the two panels show different perspectives). Notice that the beamsplitter type ADC $\Psi_\eta$ (blue line of Fig.~\ref{fig: BS_ADC}) is fully contained in the above regions (it lies in the degradability region for $\eta \geq 1/2$ and in the antidegradability region for $\eta <1/2$).} \label{fig: degr and antidegr} \end{figure} To check antidegradability of a ReMAD channel we follow the same path, looking for a ${\Gamma'}$ that fulfills the identity \begin{eqnarray} {\Phi}_{{\Gamma}}=\Phi_{{\Gamma'}}\circ\Phi_{\tilde{\Gamma}}\;, \end{eqnarray} leading to constraints that can be obtained from those in Eq.~(\ref{eq: delta params degradability}) by exchanging $\gamma_{j,k}$ with $\tilde{\gamma}_{j,k}$. This finally gives \begin{align} &\gamma'_{10}=\frac{2\gamma_{10}-1}{\gamma_{10}} \; , \nonumber \\ &\gamma'_{21}=\frac{\gamma_{21}}{\gamma_{20}}\frac{2\gamma_{10}-1}{\gamma_{10}} \; , \nonumber \\ &\gamma'_{20}=2+\frac{\gamma_{21}(1-\gamma_{10})-\gamma_{10}}{\gamma_{10}\gamma_{20}} \; . \end{align} Imposing $(\gamma'_{10},\gamma'_{21},\gamma'_{20})$ to lie in ${\mathbb D}_3$ brings us to the antidegradability region depicted in blue in Fig.~\ref{fig: degr and antidegr}. 
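These closed-form conditions translate directly into a simple numerical test. The following minimal Python sketch (hypothetical parameter values, not part of the formal derivation) evaluates the candidate degrading and antidegrading parameters, checks their membership in ${\mathbb D}_3$, and, in the degradable case, estimates $Q(\Phi_{\Gamma})=C_{\text{p}}(\Phi_{\Gamma})$ by a brute-force maximization of the coherent information over diagonal inputs, as allowed by Eq.~(\ref{eq:singLettsimp}):
\begin{verbatim}
# Minimal sketch: degradability/antidegradability region test for a qutrit
# ReMAD channel plus a grid-search estimate of Q = C_p (valid only when the
# channel is degradable); example parameter values, generic denominators assumed.
import numpy as np

def in_D3(g10, g21, g20):
    return (0 <= g10 <= 1) and (0 <= g21 <= 1) and (0 <= g20 <= 1) \
           and (g21 + g20 <= 1)

def degrading_params(g10, g21, g20):
    gp10 = (1 - 2 * g10) / (1 - g10)
    gp21 = gp10 * g21 / (1 - g21 - g20)
    gp20 = (1 - g21 - 2 * g20) / (1 - g21 - g20) - gp10 * g21 / (1 - g21 - g20)
    return gp10, gp21, gp20

def antidegrading_params(g10, g21, g20):
    ga10 = (2 * g10 - 1) / g10
    ga21 = (g21 / g20) * (2 * g10 - 1) / g10
    ga20 = 2 + (g21 * (1 - g10) - g10) / (g10 * g20)
    return ga10, ga21, ga20

def entropy(p):
    p = np.asarray(p, dtype=float); p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

def coherent_info_diag(p0, p1, p2, g10, g21, g20):
    out  = [p0 + g10 * p1 + g20 * p2, (1 - g10) * p1 + g21 * p2,
            (1 - g21 - g20) * p2]                 # spectrum of Phi(rho_diag)
    comp = [p0 + (1 - g10) * p1 + (1 - g21 - g20) * p2,
            g10 * p1 + g21 * p2, g20 * p2]        # spectrum of the complementary output
    return entropy(out) - entropy(comp)

g10, g21, g20 = 0.2, 0.2, 0.1                     # a point in the degradable region
print("degradable:", in_D3(*degrading_params(g10, g21, g20)))
print("antidegradable:", in_D3(*antidegrading_params(g10, g21, g20)))

grid = np.linspace(0, 1, 201)
Q = max(coherent_info_diag(1 - p1 - p2, p1, p2, g10, g21, g20)
        for p1 in grid for p2 in grid if p1 + p2 <= 1)
print("Q = C_p approx", round(Q, 4))
\end{verbatim}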
\begin{figure}\label{fig: Q capacity} \end{figure} \subsection{Capacities in non-degradable and non-antidegradable regions}\label{sec: cap non degradable} In those regions of the parameter space for which degradability or antidegradability is not achieved, a closed accessible expression for $Q$ and $C_{\text{p}}$ is lacking. A notable exception is provided by the two planar regions identified respectively by $\gamma_{10}=0$ (triangular surface of Fig.~\ref{fig: BS_ADC}, identified by the vertexes ${\bf ADE}$) and $\gamma_{21}=0$ (rectangular region ${\bf ABCD}$). As one can observe from Fig.~\ref{fig: degr and antidegr}, they overlap only partially with the degradable and antidegradable regions: yet, thanks to the fact that there $\Phi_{\Gamma}$ and $\Phi^{({\text{\tiny MAD}})}_{\Gamma}$ coincide (see comments in Sec.~\ref{sec: Definition}), we can use the same techniques of~\cite{MAD}, see Appendix \ref{app: Qcaps nondeg}, to compute the quantum capacity of the associated ReMAD channels, even for those points that are not degradable (or antidegradable). Specifically: \begin{itemize} \item Planar region $\gamma_{10}=0$: here the channel $\Phi_{\Gamma}$ is neither degradable nor antidegradable for all points with $\gamma_{21}+\gamma_{20}\geq 1/2$, see Appendix \ref{app: uniqueness}. With a similar approach as the one in~\cite{MAD}, see Appendix \ref{app: Q gamma_10}, we can conclude that in this region the capacities are constant and equal to 1, i.e. \begin{eqnarray} Q(\Phi_{\Gamma})=C_{\text{p}}(\Phi_{\Gamma}) = 1\;, \qquad \forall \; \gamma_{21}+\gamma_{20}\geq 1/2\;, \label{planar1} \end{eqnarray} see left panel of Fig. \ref{fig: Qcap nondeg}. \item Planar region $\gamma_{21}=0$: here the channel $\Phi_{\Gamma}$ is neither degradable nor antidegradable for all points which verify the conditions $\gamma_{10}\geq1/2\geq \gamma_{20}$ (right-lower quadrant in Fig. \ref{fig: Qcap nondeg}, right panel) or $\gamma_{20}\geq1/2\geq \gamma_{10}$ (left-upper quadrant in Fig. \ref{fig: Qcap nondeg}, right panel), see Appendix \ref{app: uniqueness}. Applying techniques of~\cite{MAD}, see Appendix \ref{app: Q gamma_21}, we can conclude that in these regions the capacities assume symmetric values, i.e. given $0\leq x\leq 1/2 \leq y\leq 1$, we have \begin{eqnarray} && Q(\Phi_{\Gamma})\Big|_{\begin{subarray}{l} \gamma_{10}=x \\ \gamma_{21}=0\\\gamma_{20}=y \end{subarray}}=C_{\text{p}}(\Phi_{\Gamma})\Big|_{\begin{subarray}{l} \gamma_{10}=x \\ \gamma_{21}=0\\\gamma_{20}=y \end{subarray}} =Q(\Phi_{\Gamma})\Big|_{\begin{subarray}{l} \gamma_{10}=y \\ \gamma_{21}=0 \\\gamma_{20}=x \end{subarray}} = C_{\text{p}}(\Phi_{\Gamma})\Big|_{\begin{subarray}{l} \gamma_{10}=y \\ \gamma_{21}=0 \\\gamma_{20}=x \end{subarray}}={\cal Q}(x) \;, \label{frontpanel} \end{eqnarray} with \begin{eqnarray} \nonumber {\cal Q}(x) &:=& \max_{ p_1,p_2 } \{ - [1-(1-x) p_1]\log_2 [1-(1-x) p_1] - [(1-x) p_1]\log_2 [(1-x) p_1] \\ &&\qquad\quad +(1-x p_1-p_2)\log_2 (1-xp_1-p_2) + x p_1 \log_2 (xp_1 )+p_2 \log_2 (p_2)\}\nonumber \\ &=& \max_{p_1 \in [0,1]} \{ H_2((1-x) p_1)- H_2(x p_1 )\} \;, \label{eq: opt qubitnew} \end{eqnarray} see right panel of Fig. \ref{fig: Qcap nondeg}. In the first expression the maximization is performed over the populations $p_1,p_2\in [0,1]$ of the input state with respect to levels $|1\rangle_{\text{S}}$ and $|2\rangle_{\text{S}}$ under the consistency constraint $p_1+p_2\leq 1$; the second identity instead follows from the observation that for $p_1$ fixed, the maximum of the r.h.s. 
term is always achieved by $p_2=0$ (no population assigned to the second excited level). Observe that the final expression in Eq.~(\ref{eq: opt qubitnew}) exactly matches the quantum capacity of a qubit ADC channel with damping probability equal to $x$ \cite{QUBIT_ADC}. \begin{figure}\label{fig: Qcap nondeg} \end{figure} \item With a little additional effort we can also determine the exact value of $Q(\Phi_{\Gamma})$ for the elements on the boundary of the ${\mathbb D}_3$ region identified by the constraints \begin{eqnarray} \gamma_{10}=1\;,\qquad \gamma_{21}+\gamma_{20}=1\;, \end{eqnarray} (notice that while for $\gamma_{21} \leq 1/2$ the points belong to the antidegradable region, for $\gamma_{21}>1/2$ they are neither degradable nor antidegradable, see Appendix \ref{app: uniqueness}). For these values Eq.~(\ref{eq: channel action}) becomes \begin{equation}\label{eq: channel action EDGE} \Phi_{{\Gamma}}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00} + \rho_{11}+(1-\gamma_{21}) \rho_{22} & \sqrt{\gamma_{21}}\rho_{12} & 0 \\ \sqrt{ \gamma_{21}} \rho_{12}^* & \gamma_{21} \rho_{22} & 0 \\ 0 &0 & 0 \\ \end{array} \right) \; , \end{equation} which bears a close resemblance to the MAD channel $\Phi^{({\text{\tiny MAD}})}_{\Gamma'}$ with values $\gamma_{21}'=0$, $\gamma_{20}'=1$, i.e. \begin{equation}\label{eq: channel actionMADedge} \Phi^{({\text{\tiny MAD}})}_{\Gamma'}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00} + \gamma'_{10} \rho_{11}+ \rho_{22} & \sqrt{1-\gamma'_{10}} \rho_{01} & 0 \\ \sqrt{1-\gamma'_{10}} \rho_{01}^* & (1-\gamma'_{10}) \rho_{11}& 0 \\ 0& 0 & 0\\ \end{array} \right) \; , \end{equation} for which the capacities $Q$ and $C_{\text{p}}$ have been computed for all values of $\gamma_{10}'$ \cite{MAD}, see Appendix \ref{sec:first}. The explicit connection between Eq.~(\ref{eq: channel action EDGE}) and Eq.~(\ref{eq: channel actionMADedge}) follows by observing that, setting $\gamma_{10}'=1-\gamma_{21}$, we can map the former into the latter via a unitary transformation on the input of the channel, via the identity \begin{eqnarray} \Phi_{\Gamma}(\hat{\rho}) = \Phi^{({\text{\tiny MAD}})}_{\Gamma'}(\hat{V}_{\text{S}} \hat{\rho}\hat{V}^\dag_{\text{S}})\;, \end{eqnarray} being $\hat{V}_{\text{S}}$ the unitary operator reordering the canonical basis $\ket{0}_{\text{S}}$, $\ket{1}_{\text{S}}$, $\ket{2}_{\text{S}}$, into $\ket{1}_{\text{S}}$, $\ket{2}_{\text{S}}$, $\ket{0}_{\text{S}}$. Accordingly, using Eq.~(\ref{eq:cohID13}) of Appendix \ref{app: Q gamma_21}, in the region $\frac{1}{2}\leq \gamma_{21} \leq 1$ we can write \begin{align}\nonumber & &Q(\Phi_{\Gamma}) = C_{\text{p}}(\Phi_{\Gamma}) = \max_{ p_0,p_1 } \{ & - (1-\gamma_{21}p_0)\log_2 (1-\gamma_{21}p_0) - \gamma_{21} p_0 \log_2 ( \gamma_{21}p_0) \nonumber \\ & & &+(1-(1-\gamma_{21})p_0-p_1)\log_2 (1-(1-\gamma_{21})p_0-p_1) \nonumber \\ & & &+ (1-\gamma_{21}) p_0 \log_2 ((1-\gamma_{21})p_0) + p_1 \log_2 p_1\} \nonumber \\ & & ={\cal Q}(1-\gamma_{21}) \;, &\label{eq: opt qubit} \end{align} while in the region $0\leq \gamma_{21} \leq \frac{1}{2}$, $Q=C_{\text{p}}=0$ as the channel is antidegradable. The overall profile of the capacities in this case then reduces to that of a qubit ADC with damping parameter $1-\gamma_{21}$ \cite{QUBIT_ADC}. We report its evaluation in Fig. \ref{fig: Q capacity qubit}, left panel. 
Notice that in the first expression the maximization is performed over the populations $p_0,p_1\in [0,1]$ of the input state with respect to levels $|0\rangle_{\text{S}}$ and $|1\rangle_{\text{S}}$ under the consistency constraint $p_0+p_1\leq 1$. The second identity then follows by noticing that, similarly to what happens in Eq.~(\ref{eq: opt qubitnew}), for $p_0$ fixed the maximum of the r.h.s. term is always achieved by $p_1=0$ (no population assigned to the first excited level). Comparing Eq.~(\ref{eq: opt qubit}) with Eq.~(\ref{frontpanel}) we observe that on the edges ${\bf BC}$ and ${\bf BF}$, which bound $\mathbb{D}_3$ on the plane $\gamma_{10}=1$, the capacity of the channel takes the same value for any fixed $\gamma_{20}$. Indeed we get \begin{eqnarray} &&Q(\Phi_{\Gamma})\Big|_{\begin{subarray}{l} \gamma_{10}=1 \\ \gamma_{21}=0\\ \gamma_{20} \end{subarray} } =Q(\Phi_{\Gamma})\Big|_{\begin{subarray}{l} \gamma_{10}=1 \\ \gamma_{21}=1- \gamma_{20}\\ \gamma_{20} \end{subarray} }= {\cal Q}(\gamma_{20}) \;, \label{eq: opt qubit-newedge} \end{eqnarray} which again is the capacity profile of a qubit ADC with damping parameter $\gamma_{20}$. See Fig.~\ref{fig: Q capacity qubit}, right panel. \begin{figure} \caption{ \textbf{Left}: ${\cal Q}(1-\gamma_{21})$ as in Eq.~(\ref{eq: opt qubit}); in the region $\gamma_{21}\geq \frac{1}{2}$ the capacity is attained by channels that are provably neither degradable nor antidegradable. \textbf{Right}: ${\cal Q}(\gamma_{20})$ as in Eq.~(\ref{eq: opt qubit-newedge}); in the region $\gamma_{20}\geq \frac{1}{2}$ the capacity is attained by channels that are provably neither degradable nor antidegradable. } \label{fig: Q capacity qubit} \end{figure} \item In the other sectors of the parameter space it is still possible to exploit some information-theoretic properties to obtain computationally efficient upper bounds. For instance, exploiting the considerations underlined in the final paragraphs of Sec.~\ref{sec: Composition rules} and the data processing inequality (\ref{data}), we can claim that, for fixed values of $\gamma_{10}$ and $\gamma_{21}$, both $Q(\Phi_{\Gamma})$ and $C_{\text{p}}(\Phi_{\Gamma})$ are non-increasing functions of $\gamma_{20}$, e.g. \begin{eqnarray} Q(\Phi_{\Gamma})\geq Q(\Phi_{\Gamma'})\;, \qquad \forall \gamma_{10}'=\gamma_{10}, \gamma_{21}'= \gamma_{21}, \gamma_{20}'\geq \gamma_{20}\;. \end{eqnarray} \end{itemize} \section{Conclusions}\label{sec: Conclusions} We introduced and characterized a new class of physically relevant noise models for high-dimensional quantum systems. ReMAD channels enlarge the limited class of quantum channels for which an exact analytical or numerical approach in terms of quantum and private classical capacities is feasible. This has been shown by focusing on the simplest, yet nontrivial, case in which the system of interest is a qutrit. In this case we have shown the existence of a substantial region of the noise parameter space where the channels $\Phi_{\Gamma}$ are degradable/antidegradable, hence allowing the direct evaluation of the associated quantum and private classical capacities. Also, by resorting to a formal mapping to the case of MAD channels discussed in \cite{MAD}, we computed the value of these capacities in regions where $\Phi_{\Gamma}$ is provably neither degradable nor antidegradable. The full characterization in terms of information capacities, such as, for instance, the classical capacity and two-way capacities, is still missing for ReMAD channels and will require further investigation. 
A semi-analytical treatment is instead achievable for the characterization of the entanglement-assisted quantum and classical capacities $Q_E$ and $C_E$ of ReMAD channels, see Appendix \ref{sec: app ent ass cap}. \\ We acknowledge financial support from MIUR (Ministero dell'Istruzione, dell'Universit{\`a} e della Ricerca) via PRIN 2017 {\it Taming complexity via Quantum Strategies: a Hybrid Integrated Photonic approach} (QUSHIP) Id. 2017SRN-BRK, and via project PRO3 Quantum Pathfinder. \appendix \section{Channel inversion}\label{sec:channelinversion} To compute $\text{C}_{\mathcal{D}}$ we need to identify the inverse of the channel $\Phi$. Concretely, the inversion of ${\Phi}$ can be done by exploiting the fact that quantum channels are linear maps connecting vector spaces of linear operators. They can in turn be represented as matrices acting on vector spaces. This is done through the following ``vectorization'' isomorphism, also called the Liouville representation: \begin{eqnarray} \hat{\rho}_{\text{S}}=\sum_{ij} \rho_{ij}\ket{i}\!\!\bra{j}_{\text{S}}&\longrightarrow&\ket{\rho\rangle}=\sum_{ij} \rho_{ij}\ket{i}_{\text{S}}\otimes \ket{j}_{\text{S}}\in {\cal H}_{\text{S}}^{\otimes 2} \; , \nonumber \\ \label{eq:isomorphism}\\ \Phi(\hat{\rho}_{\text{S}})&\longrightarrow& \hat{\text{M}}_{\Phi} \ket{\rho\rangle} \; , \nonumber \end{eqnarray} where now $\hat{\text{M}}_{\Phi}$ is a $d_{\text{S}'}^2\times d_{\text{S}}^2$ matrix connecting ${\cal H}_{\text{S}}^{\otimes 2}$ and ${\cal H}_{\text{S}'}^{\otimes 2}$ ($d_{\text{S}}$ and $d_{\text{S}'}$ being respectively the dimensions of ${\cal H}_{\text{S}}$ and ${\cal H}_{\text{S}'}$). Given a Kraus set $\{\hat{K}_i\}_i$ for $\Phi$, $\hat{\text{M}}_{\Phi}$ can be explicitly expressed as \begin{equation} \hat{\text{M}}_{\Phi}=\sum_i \hat{K}_i\otimes \hat{K}_i^* \; . \end{equation} Following Eq.~(\ref{eq:degradable}) we hence have that for a degradable channel the following identity must hold \begin{equation} \hat{\text{M}}_{\tilde{\Phi}}=\hat{\text{M}}_{\mathcal{D}}\hat{\text{M}}_{\Phi} \; , \end{equation} with $\hat{\text{M}}_{\mathcal{D}}$ the matrix representation of the LCPTP degrading channel ${\cal D}$, implying that the super-operator $\tilde{\Phi}\circ \Phi^{-1}$ is now represented by \begin{equation}\label{eq: degr channel matrix} \hat{\text{M}}_{\mathcal{D}} = \hat{\text{M}}_{\tilde{\Phi}}\hat{\text{M}}_{\Phi}^{-1} \; . 
\end{equation} In the case of the qutrit ReMAD channel of (\ref{eq: channel action}), the inverse channel can also be expressed in the form \begin{widetext} \begin{equation}\label{eq: channel action INV} \hspace*{-0.5cm} \Phi^{-1}_{{\Gamma}}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00} - \tfrac{ \gamma_{10}}{1-\gamma_{10}} \rho_{11}+ \tfrac{\gamma_{10}\gamma_{21}-\gamma_{20} (1-\gamma_{10})}{(1-\gamma_{10})(1-\gamma_{21}-\gamma_{20})} \rho_{22} & \tfrac{\rho_{01}}{ \sqrt{1-\gamma_{10}} } - \tfrac{\sqrt{\gamma_{10} \gamma_{21}}\rho_{12} }{(1-\gamma_{10}) \sqrt{(1-\gamma_{20}-\gamma_{21})}} & \tfrac{\rho_{02} }{\sqrt{1-\gamma_{21}-\gamma_{20}}}\\ \tfrac{\rho^*_{01}}{ \sqrt{1-\gamma_{10}} } - \tfrac{\sqrt{\gamma_{10} \gamma_{21}} \rho^*_{12} }{(1-\gamma_{10}) \sqrt{(1-\gamma_{20}-\gamma_{21})}} &\tfrac{ \rho_{11}}{1-\gamma_{10}} -\tfrac{ \gamma_{21} \rho_{22}}{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})} & \tfrac{ \rho_{12} }{ \sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})}}\\ \tfrac{\rho_{02}^*}{\sqrt{1-\gamma_{21}-\gamma_{20}} } & \tfrac{\rho_{12}^*}{\sqrt{(1-\gamma_{10}) (1-\gamma_{21}-\gamma_{20})}}& \tfrac{\rho_{22}}{1-\gamma_{21}-\gamma_{20}} \\ \end{array} \right) \; , \end{equation} \end{widetext} as one can check by direct computation. \subsection{Uniqueness of the degrading channel}\label{app: uniqueness} In the most general case, even when the inverse of the channel under consideration exists, degrading maps may not be unique \cite{PITFALLS}. We show here that in the case of qutrit ReMAD channels, if these are degradable, the degrading channel must be unique and of the form of Eq.~(\ref{eq: degr channel matrix}). For our analysis this is also equivalent to saying that if the degrading channel $\mathcal{D}$ can be found as in Eq.~(\ref{eq: degr channel matrix}) but is not LCPTP, then the channel is not degradable. \noindent A proof can be derived from \cite[Theorem 3]{PITFALLS}: \setcounter{thm}{3} \begin{thm}[\cite{PITFALLS}] Let $\Phi:\mathfrak{S}({\cal H}_{\text{A}})\rightarrow \mathfrak{S}({\cal H}_{\text{B}})$ be a quantum channel and $\tilde{\Phi}:\mathfrak{S}({\cal H}_{\text{A}})\rightarrow \mathfrak{S}({\cal H}_{\text{E}})$ its complementary channel, and let the corresponding super-operator $\hat{\text{M}}_{\Phi}$ of $\Phi$ be full rank: $\text{rank}\; \hat{\text{M}}_{\Phi} = \min [d_A^2, d_B^2]$. Then, if a degrading map $\mathcal{D}:\mathfrak{S}({\cal H}_{\text{B}})\rightarrow \mathfrak{S}({\cal H}_{\text{E}})$ exists, it is unique iff $d_B \leq d_A$. 
\end{thm} \noindent In our case $d_B = d_A$ and the superoperator $\hat{\text{M}}_{\Phi}$ is full rank except when $ \gamma_{10} = 1$ or $\gamma_{20}+\gamma_{21}=1$. Indeed, \begin{equation*} \hspace*{-1cm}\mathsf{M} = \hat{\text{M}}_{\Phi_\Gamma} = \scalebox{0.7}{$\begin{pmatrix} 1 & 0 & 0 & 0 & \gamma_{10} & 0 & 0 & 0 & \gamma_{20} \\ 0 & \sqrt{1 - \gamma_{10}} & 0 & 0 & 0 & \sqrt{\gamma_{10}\gamma_{21}} & 0 & 0 & 0 \\ 0 & 0 & \sqrt{1-\gamma_{21}-\gamma_{20}} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \sqrt{1 - \gamma_{10}} & 0 & 0 & 0 & \sqrt{\gamma_{10}\gamma_{21}} & 0 \\ 0 & 0 & 0 & 0 & 1 - \gamma_{10} & 0 & 0 & 0 & \gamma_{21} \\ 0 & 0 & 0 & 0 & 0 & \sqrt{(1 - \gamma_{10})(1-\gamma_{21}-\gamma_{20})} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{1-\gamma_{21}-\gamma_{20}} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{(1 - \gamma_{10})(1-\gamma_{21}-\gamma_{20})} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1-\gamma_{21}-\gamma_{20} \end{pmatrix}$} \end{equation*} is upper triangular, with all diagonal entries non-vanishing precisely when $\gamma_{10}\neq 1$ and $\gamma_{21}+\gamma_{20}\neq 1$; here we ordered the $\rho_{ij}$ `basis' as $\{ \rho_{00}, \rho_{01}, \rho_{02}, \rho_{10}, \rho_{11}, \rho_{12}, \rho_{20}, \rho_{21}, \rho_{22} \}$. So, whenever $\gamma_{10}\neq 1$ and $ \gamma_{21}+ \gamma_{20} \neq 1$, $\hat{\text{M}}_{\Phi_\Gamma}$ is full rank (hence invertible), and if $\hat{\text{M}}_{\mathcal{D}} = \hat{\text{M}}_{\tilde{\Phi}_\Gamma}\hat{\text{M}}_{\Phi_\Gamma}^{-1}$ does not represent an LCPTP map then $\Phi_\Gamma$ is not degradable.\\ 
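As a concrete illustration of this criterion, the following minimal numerical sketch (assuming only the availability of \texttt{numpy}; the function and variable names are ours and purely illustrative) assembles the superoperator matrix displayed above and checks that it is full rank, and hence invertible, away from the singular loci $\gamma_{10}=1$ and $\gamma_{21}+\gamma_{20}=1$:
\begin{verbatim}
import numpy as np

def superoperator(g10, g21, g20):
    # Liouville matrix of the qutrit ReMAD channel in the ordered basis
    # {rho_00, rho_01, rho_02, rho_10, rho_11, rho_12, rho_20, rho_21, rho_22}
    s = np.sqrt
    a, b = 1.0 - g10, 1.0 - g21 - g20
    M = np.zeros((9, 9))
    M[0, 0], M[0, 4], M[0, 8] = 1.0, g10, g20
    M[1, 1], M[1, 5] = s(a), s(g10 * g21)
    M[2, 2] = s(b)
    M[3, 3], M[3, 7] = s(a), s(g10 * g21)
    M[4, 4], M[4, 8] = a, g21
    M[5, 5] = s(a * b)
    M[6, 6] = s(b)
    M[7, 7] = s(a * b)
    M[8, 8] = b
    return M

M = superoperator(0.3, 0.2, 0.1)          # a generic point of the parameter space
assert np.linalg.matrix_rank(M) == 9      # full rank, so M is invertible
M_sing = superoperator(1.0, 0.2, 0.1)     # gamma_10 = 1: singular case
assert np.linalg.matrix_rank(M_sing) < 9
\end{verbatim}
Once $\hat{\text{M}}_{\Phi_\Gamma}^{-1}$ is available, the candidate degrading map of Eq.~(\ref{eq: degr channel matrix}) is obtained by a single matrix multiplication with $\hat{\text{M}}_{\tilde{\Phi}_\Gamma}$, and its LCPTP character can then be tested numerically.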
\noindent Now the cases $ \gamma_{10} = 1$ and $\gamma_{20}+\gamma_{21}=1$ remain, but it is straightforward to show that these channels cannot be degradable. Specifically we just need to show that $\text{ker}\; \Phi_\Gamma \not\subset \text{ker}\; \tilde{\Phi}_\Gamma$, see \cite[Sec. 2.1]{STR_DEG_CH}. Looking at the expressions of the channels and their complementaries \begin{eqnarray} &\Phi^{(\gamma_{10}=1)}_{{\Gamma}}(\hat{\rho})&=\left( \begin{array}{ccc} \rho_{00} + \rho_{11}+\gamma_{20} \rho_{22} & \sqrt{\gamma_{21}}\rho_{12} & \sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02} \\ \sqrt{\gamma_{21}} \rho_{12}^* & \gamma_{21} \rho_{22} & 0 \\ \sqrt{1-\gamma_{21}-\gamma_{20}} \rho_{02}^* & 0 & (1-\gamma_{21}-\gamma_{20}) \rho_{22} \\ \end{array} \right) \; \nonumber\\ &\tilde{\Phi}^{(\gamma_{10}=1)}_{{\Gamma}}(\hat{\rho})&= \left( \begin{array}{ccc} \rho_{00} + (1-\gamma_{21}-\gamma_{20})\rho_{22} & \rho_{01} & \sqrt{\gamma_{20}} \rho_{02} \\ \rho_{01}^* & \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{ \gamma_{20}}\rho_{12}\\ \sqrt{\gamma_{20}} \rho_{02}^* & \sqrt{\gamma_{20}} \rho_{12}^* & \gamma_{20} \rho_{22} \\ \end{array} \right) \; , \end{eqnarray} \begin{align} &\Phi^{(\gamma_{21}+\gamma_{20}=1)}_{{\Gamma}}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00} + \gamma_{10} \rho_{11}+\gamma_{20} \rho_{22} & \sqrt{1-\gamma_{10}} \rho_{01} + \sqrt{\gamma_{10} \gamma_{21}}\rho_{12} & 0 \\ \sqrt{1-\gamma_{10}} \rho_{01}^* + \sqrt{\gamma_{10} \gamma_{21}} \rho_{12}^* & (1-\gamma_{10}) \rho_{11}+\gamma_{21} \rho_{22} & 0 \\ 0 & 0 & 0 \\ \end{array} \right) \; \nonumber \\ & \tilde{\Phi}^{(\gamma_{21}+\gamma_{20}=1)}_{{\Gamma}}(\hat{\rho}) = \scalebox{1}{$ \left( \begin{array}{ccc} \rho_{00}+(1-\gamma_{10}) \rho_{11} & \sqrt{\gamma_{10}} \rho_{01} + \sqrt{(1-\gamma_{10}) \gamma_{21}}\rho_{12} & \sqrt{\gamma_{20}} \rho_{02} \\ \sqrt{\gamma_{10}} \rho_{01}^* +\sqrt{(1-\gamma_{10}) \gamma_{21}} \rho_{12}^*& \gamma_{10} \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{\gamma_{10} \gamma_{20}} \rho_{12}\\ \sqrt{\gamma_{20}} \rho_{02}^* & \sqrt{\gamma_{10} \gamma_{20}} \rho_{12}^* & \gamma_{20} \rho_{22} \\ \end{array} \right) $}\; , \end{align} we see that the element $\ket{0}\!\!\bra{1}$ belongs to $\text{ker}\; \Phi^{(\gamma_{10}=1)}_\Gamma$ but not to $\text{ker}\; \tilde{\Phi}^{(\gamma_{10}=1)}_\Gamma$, and that the element $\ket{0}\!\!\bra{2}$ belongs to $\text{ker}\; \Phi^{(\gamma_{21}+\gamma_{20}=1)}_\Gamma$ but not to $\text{ker}\; \tilde{\Phi}^{(\gamma_{21}+\gamma_{20}=1)}_\Gamma$. Therefore neither can be degradable. \noindent In summary, for ReMAD channels the degrading map can be uniquely found via the inverse matrix of the channel. If this inverse does not exist, or the derived degrading map is not LCPTP, then the channel is not degradable.\\ A similar analysis can be applied to antidegradability, but we typically do not need it in our discussion: in the parameter regions we consider where degradability cannot be proven, ReMAD channels have positive capacity and hence cannot be antidegradable.\\ \section{Entanglement assisted quantum and classical capacities}\label{sec: app ent ass cap} The discovery of protocols such as quantum teleportation \cite{TELEPORT} and superdense coding \cite{SUPERDENSE} showed how entanglement can be leveraged as an additional resource to boost the communication performance between two communicating parties. The formalization of these entanglement-assisted protocols in Shannon-theoretic terms was given in \cite{ENT_ASS, ENT_ASS1}, where the entanglement-assisted classical capacity $C_E$ and the entanglement-assisted quantum capacity $Q_E$ were introduced. The peculiarity and advantage of these capacities is that they are additive quantities and do not require a regularization. 
Specifically, recalling the definition of the quantum mutual information $I(\Phi, \hat{\rho})$ \begin{equation}\label{eq: mutual info} I(\Phi, \hat{\rho})=S(\hat{\rho}) + I_{\text{coh}}(\Phi, \hat{\rho}) \; , \end{equation} we have: \begin{equation} C_E(\Phi)=\max\limits_{\hat{\rho}\in \mathfrak{S}({\cal H}_{\text{S}})} I(\Phi, \hat{\rho}) \; , \qquad Q_E(\Phi)=\frac{1}{2} C_E(\Phi) \; , \end{equation} where the definition of $Q_E(\Phi)$ is justified by the fact that in the presence of entanglement a qudit quantum state can be teleported by `spending' two classical $d$its (quantum teleportation) and, vice versa, two classical $d$its can be communicated by sending a single qudit (superdense coding). To effectively compute these quantities we notice that the von Neumann entropy is always concave w.r.t. the input state but, as we saw before, the coherent information can be proved concave only when the channel is degradable. Nevertheless the quantum mutual information is always concave in the input state \cite[chapter~13.4.2]{WILDEBOOK}. Therefore $I(\Phi, \hat{\rho})$ too can be maximized just over diagonal states when $\Phi$ is a covariant channel, following the same steps as in Eq.~(\ref{eq: coh info concave}). We then report in Fig.~\ref{fig: Ce capacity} the evaluation of $C_E(\Phi_{\Gamma})$ for a qutrit ReMAD channel at varying $\gamma_{20}$ and $\gamma_{21}$, for several values of $\gamma_{10}$. \begin{figure} \caption{Entanglement assisted classical capacity $C_E(\Phi_{\Gamma})$ for different $\gamma_{10},$ $\gamma_{21}$ and $\gamma_{20}$. The grey area represents values of $\gamma_{20}$ and $\gamma_{21}$ s.t. $\gamma_{20}+\gamma_{21}>1$ for which the channel is not defined.} \label{fig: Ce capacity} \end{figure} \section{$Q$ and $C_\text{p}$ in non-degradable regions}\label{app: Qcaps nondeg} \subsection{$Q$ and $C_{\text{p}}$ for $\gamma_{10}=0$}\label{app: Q gamma_10} In the case $\gamma_{10}=0$ the actions of a generic qutrit ReMAD channel $\Phi_{\Gamma}^{(\gamma_{10}=0)}$ and of its complementary channel $\tilde{\Phi}_{\Gamma}^{(\gamma_{10}=0)}$ on a generic density matrix $\hat{\rho}$ reduce to \begin{widetext} \begin{eqnarray} \label{defgamma23} &\Phi_{\Gamma}^{(\gamma_{10}=0)}(\hat{\rho})=\begin{pmatrix} \rho_{00}+\gamma_{20} \rho_{22} & \rho_{01} & \sqrt{1-\gamma_{21}-\gamma_{20}}\rho_{02}\\ \rho_{01}^* & \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{1-\gamma_{21}-\gamma_{20}}\rho_{12}\\ \sqrt{1-\gamma_{21}-\gamma_{20}}\rho_{02}^* & \sqrt{1-\gamma_{21}-\gamma_{20}}\rho_{12}^* & (1-\gamma_{21}-\gamma_{20})\rho_{22} \end{pmatrix},\\ \label{deftildegamma23} & \tilde{\Phi}_{\Gamma}^{(\gamma_{10}=0)}(\hat{\rho})=\left( \begin{array}{ccc} 1-(\gamma_{21}+\gamma_{20}) \rho_{22} & \sqrt{\gamma_{21}} \rho_{12} & \sqrt{\gamma_{20}} \rho_{02} \\ \sqrt{\gamma_{21}} \rho_{12}^* & \gamma_{21} \rho_{22} & 0 \\ \sqrt{\gamma_{20}} \rho_{02}^* & 0 & \gamma_{20} \rho_{22} \\ \end{array} \right). \end{eqnarray} \end{widetext} It is immediate to notice that $\Phi_{\Gamma}^{(\gamma_{10}=0)}$ has a noiseless subspace given by $\{\ket{0},\ket{1}\}$, and consequently one can establish the following lower bound: \begin{eqnarray} C_\text{p}(\Phi_{\Gamma}^{(\gamma_{10}=0)}) \geq Q(\Phi_{\Gamma}^{(\gamma_{10}=0)})\geq \log_2(2)=1\;. \label{CONST23} \end{eqnarray} This implies that $\Phi_{\Gamma}^{(\gamma_{10}=0)}$ cannot be antidegradable (we can reach the same conclusion by noticing that $\tilde{\Phi}_{\Gamma}^{(\gamma_{10}=0)}$ has a kernel that is not included in the kernel of $\Phi_{\Gamma}^{(\gamma_{10}=0)}$ \cite{STR_DEG_CH}. 
For instance the former contains $\ket{0}\!\!\bra{1}$ while the latter doesn't). \\ \noindent To compute the capacities we follow the channel inversion method described in Appendix \ref{sec:channelinversion}. We find that $\Phi_{\Gamma}^{(\gamma_{10}=0)}$ can be inverted when $\gamma_{21}+\gamma_{20}\neq 1$, and that $\tilde{\Phi}_{\Gamma}^{(\gamma_{10}=0)}\circ \Phi_{\Gamma}^{(\gamma_{10}=0)-1}$ is LCPTP when $\gamma_{21}+\gamma_{20}\leq \frac{1}{2}$, thus identifying the degradability region of the channel. There, exploiting the channel covariance described in Sec. \ref{sec: Appendix averaged covariance}, we can compute $Q(\Phi_{\Gamma}^{(\gamma_{10}=0)})$ and $C_\text{p}(\Phi_{\Gamma}^{(\gamma_{10}=0)})$ as \hspace*{-0.6cm}\vbox{\begin{align}\label{eq:IcohD23} &Q(\Phi_{\Gamma}^{(\gamma_{10}=0)})\hspace*{-0.3cm}&&=C_\text{p}(\Phi_{\Gamma}^{(\gamma_{10}=0)})& \nonumber \\ & &&=\max_{p_1,p_2} \Big\{ -(p_1 +\gamma_{21}p_2)\log_2(p_1 +\gamma_{21}p_2) -[1-p_1-(1-\gamma_{20}) p_2]\log_2[1-p_1- (1-\gamma_{20}) p_2] \nonumber \\ & && \qquad\qquad -(1-\gamma_{21}-\gamma_{20})p_2\log_2((1-\gamma_{21}-\gamma_{20})p_2) +(1-(\gamma_{21}+\gamma_{20})p_2)\log_2(1-(\gamma_{21}+\gamma_{20})p_2) \nonumber \\ & && \qquad\qquad +\gamma_{21} p_2\log_2(\gamma_{21} p_2)+\gamma_{20} p_2\log_2( \gamma_{20} p_2) \Big\} \; . \end{align}} We are therefore able to evaluate numerically the value of $Q$ and $C_\text{p}$ on the boundary of the degradability region $\gamma_{21}+\gamma_{20}= \frac{1}{2}$, where we can show that they are equal to the lower bound in Eq.~(\ref{CONST23}). This, together with the closure under composition of the channels $\Phi_{\Gamma}^{(\gamma_{10}=0)}$ and the data processing inequality, allows us to conclude that $Q(\Phi_{\Gamma}^{(\gamma_{10}=0)})=C_{\text{p}}(\Phi_{\Gamma}^{(\gamma_{10}=0)})=1$ in the entire parameter region $\gamma_{21}+\gamma_{20}\geq 1/2$. \subsection{$Q$ and $C_{\text{p}}$ for $\gamma_{21}=0$}\label{app: Q gamma_21} In the case of ReMAD channels with $\gamma_{21}=0$ the actions of the channel and of its complementary reduce to \begin{widetext} \begin{eqnarray}\label{eq:matrix form D13} &\Phi_{\Gamma}^{(\gamma_{21}=0)}(\hat{\rho})=\begin{pmatrix} \rho_{00}+\gamma_{10} \rho_{11}+\gamma_{20} \rho_{22}& \sqrt{1-\gamma_{10}} \rho_{01} & \sqrt{1-\gamma_{20}}\rho_{02}\\ \sqrt{1-\gamma_{10}}\rho_{01}^* & (1-\gamma_{10})\rho_{11} & \sqrt{1-\gamma_{10}}\sqrt{1-\gamma_{20}}\rho_{12}\\ \sqrt{1-\gamma_{20}}\rho_{02}^* & \sqrt{1-\gamma_{10}}\sqrt{1-\gamma_{20}}\rho_{12}^* & (1-\gamma_{20})\rho_{22} \end{pmatrix},\\ \label{eq:matrix form D13 tilde} & \tilde{\Phi}_{\Gamma}^{(\gamma_{21}=0)}(\hat{\rho})=\left( \begin{array}{ccc} 1-\gamma_{10} \rho_{11}-\gamma_{20} \rho_{22} & \sqrt{\gamma_{10}} \rho_{01} & \sqrt{\gamma_{20}} \rho_{02} \\ \sqrt{\gamma_{10}} \rho_{01}^* & \gamma_{10} \rho_{11} & \sqrt{\gamma_{10}} \sqrt{\gamma_{20}} \rho_{12} \\ \sqrt{\gamma_{20}} \rho_{02}^* & \sqrt{\gamma_{10}} \sqrt{\gamma_{20}} \rho_{12}^* & \gamma_{20} \rho_{22} \\ \end{array} \right). \end{eqnarray} \end{widetext} To compute the capacities we follow the channel inversion method described in Appendix \ref{sec:channelinversion}. We find that $\Phi_{\Gamma}^{(\gamma_{21}=0)}$ is invertible when $\gamma_{10},\gamma_{20}<1$, while $\tilde{\Phi}_{\Gamma}^{(\gamma_{21}=0)}\circ\Phi_{\Gamma}^{(\gamma_{21}=0)-1}$ is LCPTP when $\gamma_{10},\gamma_{20}\leq \frac{1}{2}$. From this it follows that in this range of parameters the channels are degradable. 
Comparing Eqs.~(\ref{eq:matrix form D13}) with (\ref{eq:matrix form D13 tilde}) we can also see that \begin{eqnarray} \tilde{\Phi}_\Gamma^{({\gamma_{10},0,\gamma_{20}})}=\Phi_\Gamma^{(1-\gamma_{10},0,1-\gamma_{20})}\;. \end{eqnarray} Therefore, by the same argument as above, $\Phi_{\Gamma}^{(\gamma_{21}=0)}$ is antidegradable for $\gamma_{10},\gamma_{20}\geq \frac{1}{2}$ and $Q(\Phi_{\Gamma}^{(\gamma_{21}=0)})= C_{\text{p}}(\Phi_{\Gamma}^{(\gamma_{21}=0)})=0$ for that range of values. To compute $Q(\Phi_{\Gamma}^{(\gamma_{21}=0)})$ and $C_{\text{p}}(\Phi_{\Gamma}^{(\gamma_{21}=0)})$ in the degradable region we exploit the channel covariance described in Sec.~\ref{sec: Appendix averaged covariance} to maximize only over diagonal inputs, getting \begin{widetext} \begin{align}\label{eq:cohID13} & Q(\Phi_{\Gamma}^{(\gamma_{21}=0)})=C_{\text{p}}(\Phi_{\Gamma}^{(\gamma_{21}=0)}) \nonumber \\ &=\max_{\substack{p_1,p_2\in [0,1] \\ p_1+p_2\leq 1}} \Big\{-[1 - (1-\gamma_{10})p_1-(1-\gamma_{20})p_2]\log_2[1 - (1-\gamma_{10})p_1- (1-\gamma_{20})p_2]\nonumber\\ &\qquad\qquad\qquad -(1-\gamma_{10})p_1\log_2((1-\gamma_{10})p_1) -(1-\gamma_{20})p_2\log_2((1-\gamma_{20})p_2)\nonumber\\ &\qquad\qquad\qquad +(1-\gamma_{10}p_1-\gamma_{20}p_2)\log_2(1-\gamma_{10}p_1-\gamma_{20}p_2) +\gamma_{10} p_1\log_2(\gamma_{10} p_1)+\gamma_{20} p_2\log_2(\gamma_{20} p_2) \Big\} \; . \end{align} \end{widetext} The capacities are also known on the boundaries of the parameter space, since when one of the remaining damping parameters is 0, $\Phi_{\Gamma}^{(\gamma_{21}=0)}$ reduces to a single-decay qutrit MAD channel, for which $Q$ and $C_\text{p}$ are known \cite{MAD}. When one of the damping parameters is instead 1 we reduce to the MAD channel discussed in Appendix~\ref{sec:first}, for which $Q$ is already available. More precisely, in Appendix~\ref{sec:first} we compute $Q(\Phi^{(1,0,\gamma_{20})})$, verifying that it coincides with the capacity of a qubit ADC. \noindent Since the value of $Q$ is known also on the boundaries of the degradable region, we can compare $Q({\Phi}_\Gamma^{(\gamma_{10},0,\gamma_{20})})$ at $\gamma_{20}=\frac{1}{2}$ and $\gamma_{20}=1$, for any fixed $\gamma_{10}$. We find that the two are the same, i.e. $Q({\Phi}_\Gamma^{(\gamma_{10},0,1)}) = Q({\Phi}_\Gamma^{(\gamma_{10},0,1/2)})$. Accordingly, invoking the data-processing inequality and the closure under composition, we can finally conclude that \begin{eqnarray} \label{GOODone} Q({\Phi}_\Gamma^{(\gamma_{10},0,1)}) = Q({\Phi}_\Gamma^{(\gamma_{10},0,\gamma_{20})})\; \qquad \forall \gamma_{20} \geq \frac{1}{2}\;, \end{eqnarray} which allows us to evaluate $Q$ and $C_{\text{p}}$ over the entire parameter region. 
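For completeness, the maximization in Eq.~(\ref{eq:cohID13}) is easily carried out numerically; the following minimal sketch (assuming \texttt{numpy}; the function names are ours and purely illustrative) evaluates it by a simple grid search over the simplex $p_1+p_2\leq 1$:
\begin{verbatim}
import numpy as np

def xlog2x(x):
    # x*log2(x), with the convention 0*log2(0) = 0
    return np.where(x > 0, x * np.log2(np.clip(x, 1e-300, None)), 0.0)

def Q_gamma21_zero(g10, g20, n=400):
    # grid search of Eq. (eq:cohID13) over the simplex p1 + p2 <= 1
    p1, p2 = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    keep = p1 + p2 <= 1.0
    outB = [1 - (1 - g10) * p1 - (1 - g20) * p2,   # spectrum of Phi(tau_diag)
            (1 - g10) * p1, (1 - g20) * p2]
    outE = [1 - g10 * p1 - g20 * p2,               # spectrum of the complementary
            g10 * p1, g20 * p2]
    coh = -sum(xlog2x(l) for l in outB) + sum(xlog2x(l) for l in outE)
    return float(np.max(coh[keep]))

print(Q_gamma21_zero(0.2, 0.3))   # a point of the degradable region g10, g20 <= 1/2
\end{verbatim}
The analogous expression in Eq.~(\ref{eq:IcohD23}) can be evaluated in exactly the same way.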
\subsection{$Q$ and $C_{\text{p}}$ for $\gamma_{10}=1$} \label{sec:first} We work here under the assumption of qutrit ReMAD channels with $\gamma_{10}=1$, for which the expressions of the channel and of its complementary become \begin{widetext} \begin{eqnarray}\label{eq:matr expr 1=123} &&\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}(\hat{\rho})=\begin{pmatrix} 1-(1-\gamma_{20}) \rho_{22}& 0 & \sqrt{1-\gamma_{21}-\gamma_{20}}\rho_{02}\\ 0 & \gamma_{21} \rho_{22} & 0\\ \sqrt{1-\gamma_{21}-\gamma_{20}}\rho_{02}^* & 0 & (1-\gamma_{21}-\gamma_{20})\rho_{22} \end{pmatrix},\\ \nonumber \\ & &\tilde{\Phi}_{{\Gamma}}^{(1,\gamma_{21},\gamma_{20})}(\hat{\rho})=\left( \begin{array}{ccc} \rho_{00}+ (1-\gamma_{21}-\gamma_{20})\rho_{22} & \rho_{01} &\sqrt{\gamma_{20}} \rho_{02} \\ \rho_{01}^* & \rho_{11}+\gamma_{21} \rho_{22} & \sqrt{ \gamma_{20}} \rho_{12}\\ \sqrt{\gamma_{20}} \rho_{02}^* & \sqrt{\gamma_{20}} \rho_{12}^* & \gamma_{20} \rho_{22} \\ \end{array} \right) \; , \label{eq:matr expr 1=123ewe} \end{eqnarray} \end{widetext} for $\gamma_{21},\gamma_{20}\in[0,1]$ such that $\gamma_{21}+ \gamma_{20}\leq 1$. The channel cannot be degradable since $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$ has a kernel that is not included in the kernel of $\tilde{\Phi}_\Gamma^{(1,\gamma_{21},\gamma_{20})}$ \cite{STR_DEG_CH}. We are nonetheless able to express exactly the value of its quantum capacity. Specifically we are able to state that \begin{eqnarray} Q(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) &=&C_{\text{p}}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) = Q^{(1)}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) ={\cal Q}(\gamma_{21},\gamma_{20}) \;, \label{exact} \end{eqnarray} with ${\cal Q}(\gamma_{21},\gamma_{20})$ defined as \begin{eqnarray}\nonumber {\cal Q}(\gamma_{21},\gamma_{20})&\equiv& \max_{\hat{\tau}_{\text{diag}}} \left\{ S(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}})) - S(\tilde{\mathcal{D}}_{(1,\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}}))\right\} \\ &=&\max_{p\in [0,1]} \Big\{ - (1-(1-\gamma_{20}) p) \log_2(1-(1-\gamma_{20}) p) - (1-\gamma_{21}-\gamma_{20})p \log_2 \left((1-\gamma_{21}-\gamma_{20})p\right)\nonumber \\ && \qquad\quad\; + ( 1-(\gamma_{21}+\gamma_{20}) p) \log_2( 1-(\gamma_{21}+\gamma_{20}) p) +\gamma_{20} p \log_2 \left(\gamma_{20} p\right)\Big\} \;, \label{eq:QCapacity12312} \end{eqnarray} where we are able to restrict the maximization to diagonal density matrices of the form $\hat{\tau}_{\text{diag}}=(1-p) |0\rangle\!\langle 0|+ p |2\rangle\!\langle 2|$ of the two-level system $\text{A}'$ associated with the subspace ${\cal H}_{\text{A}'}\equiv \mbox{Span}\{\ket{0},\ket{2}\}$. We plot ${\cal Q}(\gamma_{21},\gamma_{20})$ in Fig.~\ref{fig: Q(21,20)}: we notice that when $\gamma_{20}\geq \frac{1-\gamma_{21}}{2}$ we have ${\cal Q}(\gamma_{21},\gamma_{20}) = 0$, consistently with the fact that in that parameter region the channel $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$ has zero capacity, i.e. \begin{eqnarray} Q(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) &=&C_{\text{p}}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) = 0 \quad \forall \; 1-\gamma_{21}\geq \gamma_{20}\geq \tfrac{1-\gamma_{21}}{2}\;. \label{exact0} \end{eqnarray} \begin{figure} \caption{ Numerical evaluation of ${\cal Q}(\gamma_{21},\gamma_{20})$ obtained by performing the maximization in Eq.~(\ref{eq:QCapacity12312}). 
} \label{fig: Q(21,20)} \end{figure} To prove Eq.~(\ref{exact}) let us start by observing that ${\cal Q}(\gamma_{21},\gamma_{20})$ provides a natural lower bound for $Q(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})})$ and hence for $C_{\text{p}}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})})$: \begin{eqnarray} \label{lower1} Q(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) &\geq & \max_{\hat{\rho}} I_{\text{coh}}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}, \hat{\rho}) \geq \max_{\hat{\tau}_{\text{diag}}} I_{\text{coh}}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}, \hat{\tau}_{\text{diag}}) = {\cal Q}(\gamma_{21},\gamma_{20})\;. \nonumber \end{eqnarray} We now need to show that ${\cal Q}(\gamma_{21},\gamma_{20})$ can also upper bound $Q(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})})$. To do this we construct a new channel $\Phi '^{(\gamma_{21},\gamma_{20})}$ with larger capacity than $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$, i.e. \begin{eqnarray} Q(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) &\leq& Q(\Phi '^{(\gamma_{21},\gamma_{20})}) \;, \label{impo1} \\ C_{\text{p}}(\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}) &\leq& C_{\text{p}}(\Phi '^{(\gamma_{21},\gamma_{20})}) \;, \label{impo1cp} \end{eqnarray} and for which we can show that \begin{eqnarray} Q(\Phi '^{(\gamma_{21},\gamma_{20})})=C_{\text{p}}(\Phi '^{(\gamma_{21},\gamma_{20})}) = {\cal Q}(\gamma_{21},\gamma_{20})\;.\label{impo2} \end{eqnarray} For this purpose notice that, since the population of level $|1\rangle$ is completely depleted, the output produced by $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$ can be reproduced by the channel $\Phi '^{(\gamma_{21},\gamma_{20})}: {\cal L}({\cal H}_{\text{A}'}) \rightarrow {\cal L}({\cal H}_{\text{A}}) $ operating on the two-level Hilbert space ${\cal H}_{\text{A}'}\equiv \mbox{Span}\{\ket{0},\ket{2}\}$, and producing qutrit states of A as outputs. In particular, calling $\hat{\tau}$ a generic density matrix on ${\cal H}_{\text{A}'}$ we have \begin{equation}\label{eq:matrix form Phi1=123} \Phi '^{(\gamma_{21},\gamma_{20})}(\hat{\tau})=\begin{pmatrix} 1-(1-\gamma_{20}) \tau_{22}& 0 & \sqrt{1-\gamma_{21}-\gamma_{20}}\tau_{02}\\ 0 & \gamma_{21} \tau_{22} & 0\\ \sqrt{1-\gamma_{21}-\gamma_{20}}\tau_{02}^* & 0 & (1-\gamma_{21}-\gamma_{20})\tau_{22} \end{pmatrix} \end{equation} with associated complementary channel \begin{equation} \label{eq:matrix form Phi1=123tilde} \tilde{\Phi}'^{(\gamma_{21},\gamma_{20})}(\hat{\tau})=\left( \begin{array}{ccc} 1-(\gamma_{21}+\gamma_{20})\tau_{22} & 0 & \sqrt{\gamma_{20}}\tau_{02}\\ 0 & \gamma_{21} \tau_{22} & 0\\ \sqrt{\gamma_{20}}\tau_{02}^* & 0 & \gamma_{20}\tau_{22} \end{array} \right), \end{equation} where for $i,j=0,2$ we set $\tau_{ij}\equiv \langle i| \hat{\tau} | j\rangle$. \noindent $\Phi '^{(\gamma_{21},\gamma_{20})}$ fulfills the inequality (\ref{impo1}) because $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$, while producing the same output as $\Phi '^{(\gamma_{21},\gamma_{20})}$, is also `wasting' resources in the useless level $|1\rangle$. Explicitly, notice that we can write \begin{eqnarray} \Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})} = \Phi '^{(\gamma_{21},\gamma_{20})} \circ {\cal A} \;, \end{eqnarray} where $\mathcal{A}: {\cal L}({\cal H}_{\text{A}}) \rightarrow {\cal L}({\cal H}_{\text{A}'})$ is the LCPTP map bringing the input state of the qutrit A to the qubit $\text{A}'$ by transferring the population of level $\ket{1}$ to $\ket{0}$ and then erasing $\ket{1}$, i.e. 
\begin{eqnarray}\label{eq:matr expr 1=123adsf} \mathcal{A}(\hat{\rho})&=&\begin{pmatrix} \rho_{00} + \rho_{11} & \rho_{02} \\ \rho_{20} &\rho_{22} \\ \end{pmatrix}, \end{eqnarray} where $\rho_{ij} =\langle i | \hat{\rho} |j\rangle$ with $\hat{\rho} \in \mathfrak{S}({\cal H}_{\text{A}})$. Equation~(\ref{impo1}) can then be derived as a consequence of the data processing inequality applied to $Q$ (and $C_{\text{p}}$). The second part of the argument, i.e. Eq.~(\ref{impo2}), can instead be proved by noticing that, differently from the original channel $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$, which is not degradable, $\Phi '^{(\gamma_{21},\gamma_{20})}$ is degradable if \begin{eqnarray}0\leq \gamma_{20}\leq ({1-\gamma_{21}})/{2}\;, \label{REDGED} \end{eqnarray} and antidegradable otherwise, i.e. for $(1-\gamma_{21})/2\leq \gamma_{20}\leq {1-\gamma_{21}}$. This can be shown by observing that in the region identified by the inequality~(\ref{REDGED}) the quantity \begin{eqnarray} \bar{\gamma}_3\equiv \frac{1-\gamma_{21}-2\gamma_{20}}{1-\gamma_{21}-\gamma_{20}}\;, \end{eqnarray} belongs to $[0,1]$ and can be used to build a single-decay qutrit MAD channel $\Phi_\Gamma^{(0,0,\bar{\gamma}_3)}$. Furthermore, by direct calculation we get \begin{eqnarray}\label{deg} \Phi_\Gamma^{(0,0,\bar{\gamma}_3)} \circ \Phi '^{(\gamma_{21},\gamma_{20})} = \tilde{\Phi} '^{(\gamma_{21},\gamma_{20})}\;, \end{eqnarray} which shows that $\Phi_\Gamma^{(0,0,\bar{\gamma}_3)}$ acts as the degrading channel of $\Phi '^{(\gamma_{21},\gamma_{20})}$. From Eqs.~(\ref{eq:matrix form Phi1=123}) and (\ref{eq:matrix form Phi1=123tilde}) it is also evident that $\Phi '^{(\gamma_{21},\gamma_{20})}$ can be recovered from $\tilde{\Phi}'^{(\gamma_{21},\gamma_{20})}$ by the substitution $\gamma_{20}\rightarrow 1-\gamma_{21}-\gamma_{20}$. Consequently, using the same construction as in Eq.~(\ref{deg}), we can conclude that $\Phi '^{(\gamma_{21},\gamma_{20})}$ is antidegradable when $(1-\gamma_{21})/2\leq \gamma_{20}\leq {1-\gamma_{21}}$. Finally, to derive Eq.~(\ref{impo2}) we observe that, like the original channel $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}$, also $\Phi '^{(\gamma_{21},\gamma_{20})}$ is covariant, and accordingly we can express its capacity as \begin{align} \label{CAPQprime} \hspace*{-0.5cm}Q(\Phi '^{(\gamma_{21},\gamma_{20})})=C_{\text{p}}(\Phi '^{(\gamma_{21},\gamma_{20})})=\max_{\hat{\tau}_{\text{diag}}} \Big\{ S(\Phi '^{(\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}})) - S(\tilde{\Phi}'^{(\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}}))\Big\} = {\cal Q}(\gamma_{21},\gamma_{20}), \end{align} where the last identity follows from the fact that $\Phi '^{(\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}})$ coincides with $\Phi_\Gamma^{(1,\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}})$ and from the fact that the positive part of the spectrum of $\tilde{\Phi}'^{(\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}})$ coincides with that of $\tilde{\Phi}^{(1,\gamma_{21},\gamma_{20})}(\hat{\tau}_{\text{diag}})$ (strictly speaking the above derivation holds true only in the degradable region~(\ref{REDGED}) of $\Phi '^{(\gamma_{21},\gamma_{20})}$; still, since ${\cal Q}(\gamma_{21},\gamma_{20})$ vanishes for $1-\gamma_{21}\geq \gamma_{20}\geq ({1-\gamma_{21}})/{2}$, we can apply~(\ref{CAPQprime}) also in the antidegradability region of the channel, where $Q(\Phi '^{(\gamma_{21},\gamma_{20})})=0$). 
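Equation~(\ref{deg}) can also be checked numerically; the following minimal sketch (assuming \texttt{numpy}; the function and variable names are ours and purely illustrative) verifies it on a generic input state at a point of the region~(\ref{REDGED}):
\begin{verbatim}
import numpy as np

def phi_prime(tau, g21, g20):
    # Phi'^{(g21,g20)} acting on a 2x2 state of A' (basis {|0>,|2>}),
    # cf. Eq. (eq:matrix form Phi1=123)
    c = np.sqrt(1 - g21 - g20)
    t02, t22 = tau[0, 1], tau[1, 1]
    return np.array([[1 - (1 - g20) * t22, 0, c * t02],
                     [0, g21 * t22, 0],
                     [c * np.conj(t02), 0, (1 - g21 - g20) * t22]])

def phi_prime_comp(tau, g21, g20):
    # complementary channel, cf. Eq. (eq:matrix form Phi1=123tilde)
    t02, t22 = tau[0, 1], tau[1, 1]
    return np.array([[1 - (g21 + g20) * t22, 0, np.sqrt(g20) * t02],
                     [0, g21 * t22, 0],
                     [np.sqrt(g20) * np.conj(t02), 0, g20 * t22]])

def mad_single(rho, g3):
    # single-decay qutrit MAD channel Phi_Gamma^{(0,0,g3)},
    # i.e. Eq. (defgamma23) with gamma_21 = 0 and gamma_20 = g3
    s = np.sqrt(1 - g3)
    out = np.array(rho, dtype=complex)
    out[0, 0] = rho[0, 0] + g3 * rho[2, 2]
    out[0, 2], out[2, 0] = s * rho[0, 2], s * rho[2, 0]
    out[1, 2], out[2, 1] = s * rho[1, 2], s * rho[2, 1]
    out[2, 2] = (1 - g3) * rho[2, 2]
    return out

g21, g20 = 0.3, 0.2                          # a point satisfying (REDGED)
gbar3 = (1 - g21 - 2 * g20) / (1 - g21 - g20)
tau = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])          # a generic qubit state on A'
lhs = mad_single(phi_prime(tau, g21, g20), gbar3)
rhs = phi_prime_comp(tau, g21, g20)
assert np.allclose(lhs, rhs)                 # Eq. (deg) holds
\end{verbatim}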
\noindent Finally, we notice that for $\gamma_{21}=0$ the effective channel in Eq.~(\ref{eq:matrix form Phi1=123}) can be replaced by the channel \begin{equation}\label{eq:matrix form Phi1=123gamma2=0} \Phi '^{\gamma_{20}}(\hat{\tau})=\begin{pmatrix} 1-(1-\gamma_{20}) \tau_{22}& \sqrt{1-\gamma_{20}}\tau_{02}\\ \sqrt{1-\gamma_{20}}\tau_{02}^* & (1-\gamma_{20})\tau_{22} \end{pmatrix}\;, \end{equation} mapping the two-level system $\text{A}'$ into itself via a qubit ADC with damping parameter $\gamma_{20}$. Accordingly, repeating the same analysis as above, one finds that $Q(\Phi_\Gamma^{(1,0,\gamma_{20})})$ and $C_{\text{p}}(\Phi_\Gamma^{(1,0,\gamma_{20})})$ coincide with the corresponding capacities of a qubit ADC, computed in Ref.~\cite{QUBIT_ADC}. \end{document}
\begin{document} \setcounter{page}{1} \thispagestyle{empty} \title{Classification of Solvable Lie algebras whose non-trivial Coadjoint Orbits of simply connected Lie groups are all of Codimension 2\thanks{Received by the editors on Month/Day/Year. Accepted for publication on Month/Day/Year. Handling Editor: Name of Handling Editor. Corresponding Author: Vu Anh Le}} \author{ Hieu Van Ha\thanks{University of Economics and Law, Ho Chi Minh City, Vietnam and Vietnam National University, Ho Chi Minh City, Vietnam ([email protected]) } \and Vu Anh Le\thanks{University of Economics and Law, Ho Chi Minh City, Vietnam and Vietnam National University, Ho Chi Minh City, Vietnam ([email protected]) } \and Tu Thi Cam Nguyen\thanks{University of Science - Vietnam National University Ho Chi Minh City, Vietnam and Can Tho University, Can Tho, Vietnam ([email protected])} \and Hoa Duong Quang\thanks{Hoa Sen University, Vietnam ([email protected]).} } \markboth{Hieu V. Ha, Vu A. Le, Tu T. C. Nguyen and Hoa D. Quang}{Classification of Solvable Lie algebras whose non-trivial Coadjoint Orbits of simply connected Lie groups are all of Codimension 2} \maketitle \begin{abstract} We give a classification of real solvable Lie algebras whose non-trivial coadjoint orbits of corresponding simply connected Lie groups are all of codimension 2. These Lie algebras belong to a well-known class, called the class of MD-algebras. \end{abstract} \begin{keywords} Solvable, Lie groups, Classification of Lie algebras, MD-algebras. \end{keywords} \begin{AMS} 17B08,17B30. \end{AMS} \section{Introduction} The problem of the classification of Lie algebras (as well as Lie groups) has received much attention since the early 20$^\textnormal{th}$ century. However, it is still an open problem. By Levi's decomposition and Cartan's theorem, we know that the problem of classification of Lie algebras over any field of characteristic zero is reduced to the problem of classification of solvable ones. However, until now, there is no complete classification of $n$-dimensional solvable Lie algebras if $n\geq 7$. This classification problem seems to be impossible to solve, unless there is a suitable change in the definition of the term ``classification'' or a completely new method to classify those Lie algebras \cite{Boza13}. As we know, the Lie algebra of a (simply connected) Lie group is commutative if and only if all of its coadjoint orbits are trivial (or of dimension 0). However, Lie groups which have a non-trivial coadjoint orbit are much more complicated. In 1980, while searching for the class of Lie groups whose $C^*$-algebra can be characterized by BDF $K$-functions, Do Ngoc Diep proposed to study a class of Lie groups whose non-trivial coadjoint orbits have the same dimension \cite{Do99}. He named this class the MD-class. Any Lie group belonging to this class is called an MD-group, and the Lie algebra of any MD-group is called an MD-algebra. Vuong Manh Son and Ho Huu Viet can be regarded as the first authors to face the problem of classifying MD-algebras (as well as MD-groups). In 1984, they gave not only the classification of MD-groups whose non-trivial coadjoint orbits are of the same dimension as the group but also some important characteristics of this class. For example, they showed that any non-commutative MD-algebra is either 1-step solvable or 2-step solvable, i.e. its second derived algebra is commutative \cite{Son-Viet}. Afterward, from 1990, Vu A. L. and Hieu V. H. 
(the authors of this paper) gave the classification (up to isomorphism) of some subclasses, including all MD-algebras of dimension 4 \cite{Le90-2}, all MD-algebras of dimension 5 \cite{Vu-Shum, LHT11}, and all MD-algebras whose first derived ideal is of dimension 1 or codimension 1 \cite{Vu-MD}. Besides, a list of all simply connected Lie groups whose coadjoint orbits are of dimension up to 2 was given by D. Arnal et al. in 1995 \cite{Arnal}. In 2019, Michel Goze and Elisabeth Remm used the Cartan class to give the classification of all Lie algebras for which all non-trivial coadjoint orbits of the corresponding Lie groups are of dimension 4 \cite{GR19}. Remark that the Lie algebras classified in \cite{Arnal} and \cite{GR19} are all MD-algebras in the sense of Diep. Moreover, Goze and Remm also gave some characteristics of the class of MD-algebras whose non-trivial coadjoint orbits are of codimension 1. In an earlier article \cite{HHV21}, we classified all real solvable Lie algebras whose non-trivial coadjoint orbits are of codimension 1. Now, we will give the complete classification of real solvable Lie algebras whose non-trivial coadjoint orbits are of codimension 2. The paper is organized into 6 sections, including this introduction. In Section \ref{section-pre}, we will recall some basic preliminary concepts, notations and properties which will be used throughout the paper. In Section \ref{section-3} and Section \ref{section-4}, we will give the classification of 1-step solvable Lie algebras whose non-trivial coadjoint orbits are of codimension 2 [Theorem \ref{classification-commutative3}, Theorem \ref{theorem62}]. In Section \ref{section-5}, we will study the case of such 2-step solvable Lie algebras [Theorem \ref{main-noncommutative}], and complete the results in Sections \ref{section-3}, \ref{section-4}. Tables containing a list of results are provided in the last section. \section{Preliminaries}\label{section-pre} We now introduce some key definitions, notations and terminologies. For more details, we refer the reader to \cite{Kiri}. \begin{itemize} \item Throughout this paper, the underlying field is always the field $\mathbb{R}$ of real numbers and $n$ is an integer $\geq 2$ unless otherwise stated. \item For any Lie algebra $\mathcal G$ and $0 < k \in \mathbb N$, the direct sum $\mathcal G \oplus \mathbb{R} ^k$ is called a trivial extension of $\mathcal G$. \item A Lie algebra $(\mathcal G,[\cdot,\cdot])$ is said to be $i$-step solvable or solvable of degree $i$ if its $i$-th derived algebra $\mathcal G^i:=[\mathcal G^{i-1},\mathcal G^{i-1}]$ is commutative and non-trivial (i.e. $\neq \{0\}$), where $\mathcal G^0:=\mathcal G$ and $0<i\in \mathbb N$. \item An $n\times n$ matrix whose $(i,j)$-entry is $a_{ij}$ will be written as $(a_{ij})_{n\times n}$, while the $(i,j)$-entry of a matrix $A$ will be denoted by $(A)_{ij}$. The transpose of $A$ will be denoted by $A^t$. For an endomorphism $f$ on a vector space $V$ of dimension $n$, the matrix of $f$ with respect to a basis $\mathfrak{b}:=\{x_1,\dots,x_n\}$ of $V$ will be denoted by $[f]_{\mathfrak{b}}$. For short, if $U:=\langle x_k, \dots,x_n\rangle$ is the subspace of $V$ spanned by $\{x_k, \dots, x_n\}$ and if $g:U \rightarrow U$ is a linear endomorphism on $U$ then the notation $[g]_\mathfrak{b}$ will be used to denote the matrix of $g$ with respect to the basis $\{x_k, \dots,x_n\}$ of $U$. \item As usual, the dual space of $V$ will be denoted by $V^*$. 
It is well-known that if $\{x_1,x_2,\dots,x_n\}$ is a basis of $V$ then $\{x_1^*,\dots,x_n^*\}$ is a basis of $V^*$, where each $x_i^*$ is defined by $x_i^*(x_j)=\delta_{ij}$ (the Kronecker delta symbol) for $1 \leq i, j \leq n$. \item For any $x \in \mathcal G$, we will denote by $\ad{x}$ the adjoint action of $x$ on $\mathcal G$, i.e. $\ad{x}$ is the endomorphism on $\mathcal G$ defined by $\ad{x}(y)= [x, y]$ for every $y \in \mathcal G$. By $\adg{x}$ and $\adgg{x}$, we mean the restricted maps of $\ad{x}$ on $\mathcal G^1$ and $\mathcal G^2$, respectively. Since $\mathcal G^1$ and $\mathcal G^2$ are ideals of $\mathcal G$, $\adg{x}$ and $\adgg{x}$ will be treated as endomorphisms on $\mathcal G^1$ and $\mathcal G^2$, respectively. \item In this paper, we will use the symbol $I$ to denote the $2\times 2$ identity matrix, and use $J$ to denote the following $2\times 2$ matrix $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$ We shall denote by $\boldsymbol{0}$ the zero matrix of suitable size. \end{itemize} \begin{definition}{\rm Let $G$ be a Lie group and let $\mathcal G$ be its Lie algebra. Let $\text{Ad}:G \rightarrow \text{Aut}(\mathcal G)$ denote the {\em adjoint representation} of $G$. Then the action \begin{align*} K: & \quad G \rightarrow \text{Aut}(\mathcal G^*) \\ & \quad g \mapsto K_g \end{align*} defined by \begin{equation*} K_g(F)(x)=F(\text{Ad}(g^{-1})(x)) \text{ for } F\in\mathcal G^*, x\in\mathcal G, \end{equation*} is called the {\em coadjoint representation} of $G$ in $\mathcal G^*$. Each orbit of the coadjoint representation of $G$ is called a {\em coadjoint orbit}, or a {\em K-orbit} of $G$.} \end{definition} For each $F\in\mathcal G^*$, the coadjoint orbit for $F$ is denoted by $\Omega_F$, i.e. \begin{equation*} \Omega_F=\{K_g(F): g\in G\}. \end{equation*} The dimension of each coadjoint orbit is determined via the following proposition. \begin{proposition}\cite{Kiri} \label{rankbf} Let $F$ be any element in $\mathcal G^*$. If $\{x_1,x_2,\dots,x_n\}$ is a basis of $\mathcal G$ then \begin{equation*} \dim \Omega_F=\rank{\bigl(F([x_i,x_j])\bigr)_{n\times n}}. \end{equation*} \end{proposition} \begin{remark}\label{remark23} The dimension of each K-orbit $\Omega_F$ is always even for every $F\in\mathcal G^*$. Moreover, $\dim\Omega_F>0$ if and only if $F|_{\mathcal G^1}\neq 0$. \end{remark} As mentioned in the previous section, this paper is concerned with Lie algebras whose non-trivial coadjoint orbits are all of the same dimension. \begin{definition}\cite{Do99,Son-Viet} {\rm An {\em MD-group} is a finite-dimensional, simply connected and solvable Lie group whose non-trivial coadjoint orbits are of the same dimension. The Lie algebra of an MD-group is called an {\em MD-algebra}. An MD-algebra $\mathcal G$ is called an MD$_k(n)$-algebra if $\dim\mathcal G=n$ and the common dimension of its non-trivial coadjoint orbits is equal to $k$.} \end{definition} One of the most interesting characteristics of this class concerns the degree of solvability, which was proven by Son \& Viet \cite{Son-Viet}. \begin{proposition}\cite{Son-Viet}\label{solvability2} If $\mathcal G$ is an MD-algebra then the degree of solvability is at most 2, i.e. $\mathcal G^3=\{0\}$. \end{proposition} Therefore, the problem of classification of MD-algebras falls naturally into two parts: (1) the classification of 1-step solvable ones, and (2) the classification of 2-step solvable ones. 
However, if $\mathcal G$ is a 2-step solvable MD-algebra then $\mathcal G/\mathcal G^2$ is a 1-step solvable MD-algebra \cite[Theorem 3.5]{HHV21}. Hence, we should first study some interesting properties of 1-step solvable MD-algebras. \begin{proposition}\cite{HHV21}\label{pro211} Let $\mathcal G$ be a 1-step solvable Lie algebra of dimension $n$ such that its non-trivial coadjoint orbits are all of codimension $k$. If $\dim \mathcal G^1\geq n-k+1$ then $\mathcal G$ is isomorphic to the semi-direct product $\mathcal{L}\oplus_{\rho} \mathcal G^1$ where $\mathcal{L}$ is a commutative sub-algebra of $\mathcal G$ and $\rho$ is defined by \begin{equation}\label{def-rho} \begin{array}{llll} \rho: & \mathcal{L}\times\mathcal G^1 & \rightarrow & \mathcal G^1 \\ & (x,y) & \mapsto & [x,y]. \end{array} \end{equation} \end{proposition} Moreover, if $\mathcal G$ is 1-step solvable then $[[x,y],z]=0$ for every $x,y\in\mathcal G, z\in\mathcal G^1$. It follows immediately from the Jacobi identity that $\adg{x}\adg{y}=\adg{y}\adg{x}$ for every $x,y\in\mathcal G$. \begin{lemma}\label{1-stepcomtu} If $\mathcal G$ is 1-step solvable then $\{\adg{x}:x\in\mathcal G\}$ is a family of commuting endomorphisms. \end{lemma} It is well-known that an arbitrary set of commuting matrices over an algebraically closed field may be simultaneously brought to triangular form by a unitary similarity \cite{Morris,Heydar}. A similar version for the case of the real field is given in the following proposition. \begin{proposition}\label{triangular} Let $\mathcal{S}$ be a set of commuting real matrices of the same size. Then $\mathcal{S}$ is block simultaneously triangularizable in which the maximal size of each block is 2. In other words, there is a non-singular real matrix $T$ so that \begin{equation*} T\mathcal{S}T^{-1} =\left[\begin{array}{llllllll} *_{2\times 2} & \\ & \ddots & & & \textnormal{\huge *}\\ & & *_{2\times 2} \\ & & & * \\ &\textnormal{\huge 0}& & & \ddots \\ & & & & & * \end{array}\right] \end{equation*} where each block $*_{2\times 2}$ is of the form $\begin{bmatrix} a & b\\ -b & a \end{bmatrix}$ for some $a,b\in\mathbb{R}$ ($b$ is not necessarily non-zero). \end{proposition} The following lemma is a straightforward but useful consequence of Propositions \ref{pro211}, \ref{triangular} and Lemma \ref{1-stepcomtu}. \begin{lemma}\label{cor2} Let $\mathcal G$ be a 1-step solvable $\MD{n}{n-2}$-algebra such that $m:=\dim\mathcal G^1$ is strictly greater than 2. Then there is a basis $\mathfrak{b}:=\{x_1,\ldots,x_n\}$ of $\mathcal G$ so that \begin{itemize} \item $\mathcal G^1=\langle x_{n-m+1},\ldots, x_n\rangle$ is commutative, \item $[x_i,x_j]=0$ for every $1\leq i,j\leq n-m$, \item The matrices $[\adg{x_1}]_\mathfrak{b}, [\adg{x_2}]_\mathfrak{b},\ldots, [\adg{x_{n-m}}]_\mathfrak{b}$ are of the block triangular form in the sense of Proposition \ref{triangular}. \end{itemize} \end{lemma} \begin{remark}\label{remark210} In the above lemma, we can choose $\mathfrak{b}$ so that the space $\mathcal{L}$ in the semi-direct sum $\mathcal{L}\oplus_\rho\mathcal G^1$ of $\mathcal G$ is spanned by $\{x_1, \dots, x_{n-m}\}$. If so, for each $F\in\mathcal G^*$, \begin{equation*} \left(F\left([x_i,x_j]\right)\right)_{n\times n} = \begin{bmatrix} \boldsymbol 0 & P_F \\ -P_F^t & \boldsymbol 0 \end{bmatrix}, \end{equation*} where $P_F$ is an $(n-m)\times m$ matrix which is defined by: \begin{equation*} (P_F)_{ij}:= F\left([x_i,x_{n-m+j}]\right). 
\end{equation*} By Proposition \ref{rankbf}, \begin{equation*} \dim \Omega_F = 2 \rank{(P_F)} \quad \text{for every } F\in\mathcal G^*. \end{equation*} \end{remark} Finally, if $\mathcal G$ is an $\MD{n}{n-2}$-algebra then $\mathcal G/\mathcal G^2$ is an $\MD{n-\dim\mathcal G^2}{n-2}$-algebra \cite[Theorem 3.5]{HHV21}. Hence, we should recall here the classifications of $\MD{n}{n-1}$-algebras and $\MD{n}{n}$-algebras, which were obtained by Hieu et al. \cite{HHV21} and Son \& Viet \cite{Son-Viet}, respectively. \begin{proposition}\cite{HHV21}\label{pro210} Let $\mathcal G$ be a real MD$_{n-1}(n)$-algebra with $n\geq 5$. Then $\mathcal G$ is isomorphic to one of the following: \begin{enumerate} \item A trivial extension of $\textnormal{aff}(\mathbb C)$, namely $\mathbb R \oplus \textnormal{aff}(\mathbb C)$, where $\textnormal{aff}(\mathbb C):=\langle x_1,x_2,y_1,y_2\rangle$ is the complex affine algebra defined by \begin{equation*} [x_1,y_1]=y_1, [x_1,y_2]=y_2, [x_2,y_1]=-y_2,[x_2,y_2]=y_1. \end{equation*} \item The real Heisenberg Lie algebra \[\mathfrak{h}_{2m+1}:=\langle x_i,y_i,z:i=1,\dots,m\rangle, \] with $[x_i,y_i]=z$ for every $1\leq i\leq m$. \item The Lie algebra \[\mathfrak{s}_{5,45}:=\langle x_1,x_2,y_1,y_2,z\rangle,\] with \[ [x_1,y_1]=y_1, [x_1,y_2]=y_2, [x_1,z]=2z, [x_2,y_1]=y_2, [x_2,y_2]=-y_1, [y_1,y_2]=z. \] \end{enumerate} \end{proposition} \begin{proposition}\cite{Son-Viet}\label{sonviet} Let $\mathcal{G}$ be a real $\MD{n}{n}$-algebra. Then $\mathcal{G}$ is isomorphic to one of the following forms: \begin{enumerate} \item The real affine algebra $\textnormal{aff}(\mathbb{R}):=\langle x,y\rangle$ with $ [x,y]=y, $ \item The complex affine algebra $\textnormal{aff}(\mathbb{C})$ defined in Proposition \ref{pro210}. \end{enumerate} \end{proposition} \begin{remark}\label{remark213} Note that the dimension of any coadjoint orbit is even [Remark \ref{remark23}], therefore if $\mathcal G$ is an $\MD{n}{n-2}$-algebra then $n$ must be even. The case $n=2$ is trivial. The case $n=4$ is solved completely in \cite{Le90-2}. Namely, up to isomorphism, in the $\MD{4}{2}$-class there are 5 decomposable algebras and 8 indecomposable ones as follows: \begin{enumerate} \item[(1)] The decomposable case: \begin{itemize} \item[(i)] $\textnormal{aff}(\mathbb{R})\oplus \mathbb{R}^2$. \item[(ii)] $\mathfrak{s}_{3}\oplus \mathbb{R}$ where $\mathfrak{s}_3 \in \{\mathfrak{n}_{3,1},\, \mathfrak{s}_{3,1},\, \mathfrak{s}_{3,2},\, \mathfrak{s}_{3,3}\}$, i.e. $\mathfrak{s}_{3}$ is a non-commutative solvable Lie algebra of dimension 3 according to the notation of \cite{Snob}. \end{itemize} \item[(2)] The indecomposable case: $\mathfrak{n}_{4,1}$, $\mathfrak{s}_{4,1}$, $\mathfrak{s}_{4,2}$, $\mathfrak{s}_{4,3}$, $\mathfrak{s}_{4,4}$, $\mathfrak{s}_{4,5}$, $\mathfrak{s}_{4,6}$, $\mathfrak{s}_{4,7}$ according to the notation of \cite{Snob}. \end{enumerate} \end{remark} Hence, to completely classify the $\MD{n}{n-2}$-class, we only have to consider the remaining case when $n\geq 6$. \section[One step solvable MD{n-2}{n}-algebras]{One-step solvable $\boldsymbol{\MD{n}{n-2}}$-algebras} \label{section-3} According to Proposition \ref{solvability2} and Lemma \ref{cor2}, the classification of $\MD{n}{n-2}$-algebras falls naturally into three problems: \begin{itemize} \item The problem of classifying those 1-step solvable algebras whose derived algebra is of dimension at least 3. \item The problem of classifying those 1-step solvable algebras whose derived algebra is of dimension at most 2. 
\item The problem of classifying the 2-step solvable algebras. \end{itemize} We will solve the first item in this section. The remaining items will be solved in the next sections. \begin{theorem}\label{classification-commutative3} Let $\mathcal G$ be a 1-step solvable $\MD{n}{n-2}$-algebra of dimension $n\geq 6$ and $\dim\mathcal G^1\geq 3$. Then $n$ must be 6 and $\mathcal G$ is isomorphic to one of the following families: $\mathfrak{s}_{6,211}$, $\mathfrak{s}_{6,225}$, $\mathfrak{s}_{6,226}$, $\mathfrak{s}_{6,228}$\footnote{Some algebras contained in the families listed in \cite{Snob} are not MD-algebras; we will give the detailed Lie brackets of those Lie algebras (which are MD-algebras) in the final section} listed in \cite{Snob}. \end{theorem} \begin{remark} If $\mathcal G$ is a decomposable $\MD{6}{4}$-algebra then $\mathcal G$ is a trivial extension of either an indecomposable $\MD{5}{4}$-algebra or an indecomposable $\MD{4}{4}$-algebra \cite[Theorem 3.1]{HHV21}. These indecomposable MD-algebras are classified in \cite{Son-Viet,LHT11,Vu-Shum}. Based on their classification, there is exactly one indecomposable $\MD{4}{4}$-algebra, $\textnormal{aff}(\mathbb{C})$, and exactly one indecomposable $\MD{5}{4}$-algebra, $\mathfrak{s}_{5,45}$, in Proposition \ref{pro210}. Hence, if $\mathcal G$ is a decomposable $\MD{6}{4}$-algebra then $\mathcal G$ is either isomorphic to $\mathbb{R}^2\oplus\textnormal{aff}(\mathbb{C})$ or isomorphic to $\mathbb{R}\oplus\mathfrak{s}_{5,45}$. \end{remark} In order to prove Theorem \ref{classification-commutative3}, we will need the following lemma. \begin{lemma}\label{lemma35} Let $f, g$ be two commuting endomorphisms on $\mathbb{R}^4$, i.e. $f\circ g=g\circ f$. Assume that the matrices of $f$ and $g$ with respect to a basis $\mathfrak{b}$ are equal to \begin{equation*} [f]_{\mathfrak{b}}=\begin{bmatrix} A_1 & A_2 \\ \boldsymbol{0} & I \end{bmatrix}, [g]_{\mathfrak{b}}=\begin{bmatrix} B_1 & B_2 \\ \boldsymbol{0} & J \end{bmatrix}; \end{equation*} where $A_1,A_2, B_1,B_2$ are $2\times 2$ matrices. If either $\det(B_1^2+I)\neq 0$ or $\det(A_1-I)\neq 0$ then there is a basis $\mathfrak{b}'$ of $\mathbb{R}^4$ so that \begin{equation*} [f]_{\mathfrak{b}'}=\begin{bmatrix} A_1 & \boldsymbol{0} \\ \boldsymbol{0} & I \end{bmatrix}, [g]_{\mathfrak{b}'}=\begin{bmatrix} B_1 & \boldsymbol{0} \\ \boldsymbol{0} & J \end{bmatrix}. \end{equation*} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma35}] Let us denote the vectors in the basis $\mathfrak{b}$ by $\{y_1,y_2,y_3,y_4\}$. \begin{itemize} \item If $\det(B_1^2+I)\neq 0$, then we first claim that there are $\alpha,\beta,\gamma,\delta \in\mathbb{R}$ so that \begin{equation*} \begin{bmatrix} -\gamma & \alpha \\ -\delta & \beta \end{bmatrix} = B_2+B_1\begin{bmatrix} \alpha & \gamma \\ \beta & \delta \end{bmatrix}. 
\end{equation*} Indeed, the above system is equivalent to \begin{equation*} \left\{ \begin{array}{rl} \begin{bmatrix} -\gamma \\ -\delta \end{bmatrix} = & \begin{bmatrix} (B_2)_{11} \\ (B_2)_{21} \end{bmatrix} + B_1\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = & \begin{bmatrix} (B_2)_{12} \\ (B_2)_{22} \end{bmatrix} + B_1\begin{bmatrix} \gamma \\ \delta \end{bmatrix} \end{array} \right., \end{equation*} or \begin{equation*}\label{eq425} \left\{ \begin{array}{rl} \begin{bmatrix} -\gamma \\ -\delta \end{bmatrix} = & \begin{bmatrix} (B_2)_{11} \\ (B_2)_{21} \end{bmatrix} + B_1\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ (B_1^2+I)\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = & \begin{bmatrix} (B_2)_{12} \\ (B_2)_{22} \end{bmatrix} - B_1\begin{bmatrix} (B_2)_{11} \\ (B_2)_{21} \end{bmatrix} \end{array} \right.. \end{equation*} The existence of $\alpha,\beta,\gamma,\delta$ follows from the non-singularity of $B_1^2+I.$ Let $\mathfrak{b}':=\{y_1',y_2',y_3',y_4'\}$ be a basis of $\mathbb{R}^4$ defined by: \begin{equation*} \left\{ \begin{array}{ll} y_1'=y_1, y_2'=y_2\\ y_3'=y_3+\alpha y_1+\beta y_2\\ y_4'=y_4+\gamma y_1+\delta y_2. \end{array} \right. \end{equation*} Then the matrices of $f$ and $g$ with respect to $\mathfrak{b}'$ are determined as \begin{equation*} [f]_{\mathfrak{b}'}=\begin{bmatrix} A_1 & A_2' \\ \boldsymbol{0} & I \end{bmatrix}, \quad [g]_{\mathfrak{b}'}=\begin{bmatrix} B_1 & \boldsymbol{0} \\ \boldsymbol{0} & J \end{bmatrix}, \end{equation*} for some $2\times 2$ matrix $A_2'$. Moreover, \begin{equation*} f\circ g=g\circ f \Longleftrightarrow A_2' \times J=B_1\times A_2' \Longleftrightarrow \left\{ \begin{array}{rl} -\begin{bmatrix} (A'_2)_{12} \\ (A'_2)_{22} \end{bmatrix}= B_1 \begin{bmatrix} (A'_2)_{11} \\ (A'_2)_{21} \end{bmatrix} \\ \begin{bmatrix} (A'_2)_{11} \\ (A'_2)_{21} \end{bmatrix}= B_1 \begin{bmatrix} (A'_2)_{12} \\ (A'_2)_{22} \end{bmatrix} \end{array} \right.. \end{equation*} Hence, \begin{equation*} \left\{ \begin{array}{rl} -\begin{bmatrix} (A'_2)_{12} \\ (A'_2)_{22} \end{bmatrix}= B_1 \begin{bmatrix} (A'_2)_{11} \\ (A'_2)_{21} \end{bmatrix} \\ (B_1^2+I)\begin{bmatrix} (A'_2)_{11} \\ (A'_2)_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \end{array} \right. \end{equation*} which implies, from $\det(B_1^2+I)\neq 0$, that $A'_2=\boldsymbol{0}$. \item In the same manner as in the previous item, if $\det(A_1-I)\neq 0$ then there exist $\alpha,\beta,\gamma,\delta\in \mathbb{R}$ so that \begin{equation*} (A_1-I)\begin{bmatrix} \alpha & \gamma\\ \beta & \delta \end{bmatrix} = -A_2. \end{equation*} Equivalently, the matrix of $f$ with respect to the basis $\mathfrak{b}':=\{y_1,y_2,y_3+\alpha y_1+\beta y_2, y_4+\gamma y_1+\delta y_2\}$ is equal to $ \begin{bmatrix} A_1 & \boldsymbol{0}\\ \boldsymbol{0} & I \end{bmatrix}. $ Once again, the commutation of $f$ and $g$ implies that the matrix of $g$ with respect to $\mathfrak{b}'$ is equal to $ \begin{bmatrix} B_1 & \boldsymbol{0}\\ \boldsymbol{0} & J \end{bmatrix}. $ This completes the proof of the Lemma. \end{itemize} \end{proof} Now, we begin to prove Theorem \ref{classification-commutative3}. The proof falls into three parts. Firstly, we will prove that $\dim \mathcal G = 6$, and $\dim \mathcal G^1 \leq 4$. Secondly, we will prove that there is no $\MD{6}{4}$-algebra with $\dim\mathcal G^1=3$. Thirdly, we will classify $\MD{6}{4}$-algebras with $\dim\mathcal G^1=4$. 
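Before entering the proof, let us illustrate the rank computation of Remark \ref{remark210} on the first of the two decomposable algebras mentioned in the remark above; the computations in the proof below are of exactly this type. Consider $\mathcal G=\mathbb{R}^2\oplus\textnormal{aff}(\mathbb{C})$ with basis $\{z_1,z_2,x_1,x_2,y_1,y_2\}$, where $z_1,z_2$ are central and $x_1,x_2,y_1,y_2$ satisfy the brackets of $\textnormal{aff}(\mathbb{C})$ in Proposition \ref{pro210}, so that $\mathcal G^1=\langle y_1,y_2\rangle$. Writing $c:=F(y_1)$ and $d:=F(y_2)$ for $F\in\mathcal G^*$, one gets, in $2\times 2$ blocks, \begin{equation*} \bigl(F([e_i,e_j])\bigr)_{6\times 6}= \begin{bmatrix} \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{0} & cI+dJ\\ \boldsymbol{0} & -(cI+dJ)^t & \boldsymbol{0} \end{bmatrix}, \end{equation*} whose rank is 4 whenever $(c,d)\neq(0,0)$, since $\det(cI+dJ)=c^2+d^2$, and 0 otherwise. By Proposition \ref{rankbf}, every non-trivial K-orbit of $\mathcal G$ has dimension $4=n-2$, confirming that $\mathbb{R}^2\oplus\textnormal{aff}(\mathbb{C})$ is indeed an $\MD{6}{4}$-algebra.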
\begin{proof}[Proof of Theorem \ref{classification-commutative3}] Let us denote by $m$ the dimension of $\mathcal G^1$ ($m\geq 3$) and let $\mathfrak{b}$ be a basis of $\mathcal G$ which satisfies all conditions in Lemma \ref{cor2}. If so, \begin{equation*} P_{x_n^*}=\begin{bmatrix} x_n^*([x_1,x_{n-m+1}]) & x_n^*([x_1,x_{n-m+2}]) & \cdots & x_n^*([x_1,x_n]) \\ x_n^*([x_2,x_{n-m+1}]) & x_n^*([x_2,x_{n-m+2}]) & \cdots & x_n^*([x_2,x_n]) \\ \vdots & \vdots & & \vdots \\ x_n^*([x_{n-m},x_{n-m+1}]) & x_n^*([x_{n-m},x_{n-m+2}]) & \cdots & x_n^*([x_{n-m},x_n]) \\ \end{bmatrix}. \end{equation*} Because the matrices $[\adg{x_1}]_\mathfrak{b}, \ldots,[\adg{x_{n-m}}]_\mathfrak{b}$ are of block triangular form in the sense of Proposition \ref{triangular}, the first $(m-2)$ columns of $P_{x_n^*}$ are equal to zero. Hence, \begin{equation*} \rank{(P_{x_n^*})} \leq 2. \end{equation*} By Remark \ref{remark210}, we obtain $\dim\Omega_{x_n^*}\leq 4$. Since each non-trivial coadjoint orbit of $\mathcal G$ is of dimension $n-2$, we get $n-2\leq 4$, i.e. $n\leq 6$. By the assumption, $n\geq 6$. Therefore, $n$ must be 6. In particular, $m = \dim \mathcal G^1 < \dim \mathcal G = 6$. Now, we will prove that $m \leq 4$. Assume, to the contrary, that $m=5$; then all but the first row of $P_{x_n^*}$ are zero. This implies that $\dim\Omega_{x_n^*} \leq 2$, a contradiction to the fact that every non-trivial coadjoint orbit of an $\MD{n}{n-2}$-algebra is of dimension $n-2$. Hence, $3\leq m\leq 4$. However, if $m=3$ then there is at least one block of size 1 in the triangular form of the matrices $\left\{[\adg{x_i}]_\mathfrak{b}: i=1,2,3\right\}$. In other words, we may assume that \begin{equation*} [\adg{x_1}]_\mathfrak{b}=\begin{bmatrix} *_{2\times 2} & *\\ 0 & a_1 \end{bmatrix}, [\adg{x_2}]_\mathfrak{b}=\begin{bmatrix} *_{2\times 2} & *\\ 0 & a_2 \end{bmatrix}, [\adg{x_3}]_\mathfrak{b}=\begin{bmatrix} *_{2\times 2} & *\\ 0 & a_3 \end{bmatrix}, \end{equation*} for some $a_1,a_2,a_3\in\mathbb{R}$. If so, \begin{equation*} P_{x_6^*}=\begin{bmatrix} 0 & 0 & a_{1}\\ 0 & 0 & a_{2}\\ 0 & 0 & a_{3} \end{bmatrix} \end{equation*} which has rank at most 1, so that $\dim\Omega_{x_6^*}\leq 2$, a contradiction. Therefore, $m=4$. Finally, let us classify $\MD{6}{4}$-algebras. By rewriting \begin{equation*} [\adg{x_1}]_\mathfrak{b}=\begin{bmatrix} A_1 & A_2\\ \boldsymbol{0} & A_3 \end{bmatrix}, \text{ and } [\adg{x_2}]_\mathfrak{b}=\begin{bmatrix} B_1 & B_2 \\ \boldsymbol{0} & B_3 \end{bmatrix}, \end{equation*} we have four possibilities for the $2\times 2$ matrices $A_3, B_3$ as follows: \begin{itemize} \item $A_3$ and $B_3$ are both of triangular form, i.e. $(A_3)_{21}=(B_3)_{21}=0$. \item $A_3=\lambda I_2$ and $B_3 = \begin{bmatrix} \mu & \zeta\\ -\zeta & \mu \end{bmatrix}$ for some $\lambda, \mu\in\mathbb{R}, 0\neq \zeta \in \mathbb{R}$. \item $A_3=\begin{bmatrix} \mu & \zeta\\ -\zeta & \mu \end{bmatrix}$ and $B_3 = \lambda I_2$ for some $\lambda,\mu\in\mathbb{R}, 0\neq \zeta \in \mathbb{R}$. \item $A_3=\begin{bmatrix} \lambda & \eta\\ -\eta & \lambda \end{bmatrix}$ and $B_3=\begin{bmatrix} \mu & \zeta\\ -\zeta & \mu \end{bmatrix}$ for some $\lambda,\eta,\mu,\zeta\in \mathbb{R}$ with $\eta \neq 0,\zeta \neq 0$. \end{itemize} Remark that the change of basis $x_1\rightarrow x_1-\dfrac{\eta}{\zeta}x_2$ and the change of basis $x_1\leftrightarrow x_2$ bring respectively the fourth item and the third item to the second item. Hence, it is sufficient to consider only the first two possibilities. 
However, if $A_3$ and $B_3$ are both of triangular form, then \begin{equation*} x_6^*([x_i,x_j])=0 \quad \forall 1\leq i,j\leq 5, \end{equation*} and hence $\rank{(P_{x^*_6})}\leq 1$, i.e. $\dim\Omega_{x_6^*}\leq 2$, a contradiction again. Therefore, it suffices to consider the second item only: \begin{equation*} A_3=\lambda I \text{ and } B_3 = \mu I + \zeta J \quad (\zeta \neq 0). \end{equation*} If so, in the same manner, we obviously obtain $\lambda \neq 0$. Now, by the following change of basis: \begin{equation*} \left\{ \begin{array}{ll} x_1 & \rightarrow \frac{1}{\lambda} x_1\\ x_2 & \rightarrow\frac{1}{\zeta}(x_2-\mu x_1), \end{array} \right. \end{equation*} we may assume $\lambda=1,\mu=0$ and $\zeta=1$. Hence, without loss of generality, we may assume from the beginning that \begin{equation*} [\adg{x_1}]_\mathfrak{b}=\begin{bmatrix} A_1 & A_2\\ \boldsymbol{0} & I \end{bmatrix}, [\adg{x_2}]_\mathfrak{b}=\begin{bmatrix} B_1 & B_2 \\ \boldsymbol{0} & J \end{bmatrix}. \end{equation*} Similarly, we have two possibilities for the forms of $A_1$ and $B_1$ as follows: \begin{itemize} \item $A_1$ and $B_1$ are both of triangular form, i.e. $(A_1)_{21}=(B_1)_{21}=0$. \item $A_1=\begin{bmatrix} \lambda & \eta\\ -\eta & \lambda \end{bmatrix}$ and $B_1 = \begin{bmatrix} \mu & \zeta\\ -\zeta & \mu \end{bmatrix}$ with $\eta^2+\zeta^2 \neq 0$. \end{itemize} However, if $A_1$ and $B_1$ are both of triangular form then $\det(B_1^2+I)\neq 0$. It follows from Lemma \ref{lemma35} that we may assume $A_2=B_2=\boldsymbol{0}$. If so, it is elementary to check that \begin{equation*} \left\{ \begin{array}{ll} x_4^*([x_1,x_5]) = x_4^*([x_1,x_6]) = x_4^*([x_2,x_5]) = x_4^*([x_2,x_6]) = 0, \\ x_4^*([x_1,x_3])=x_4^*([x_2,x_3])=0. \end{array} \right. \end{equation*} Therefore, \begin{equation*} P_{x_4^*}=\begin{bmatrix} 0 & * & 0 & 0 \\ 0 & * & 0 & 0 \end{bmatrix} \end{equation*} which has rank at most 1. Hence, $\dim \Omega_{x_4^*}\leq 2$, a contradiction again. In summary, we may assume that \begin{equation*} [\adg{x_1}]_{\mathfrak{b}}=\begin{bmatrix} \lambda I+\eta J & A_2\\ \boldsymbol{0} & I \end{bmatrix}, [\adg{x_2}]_{\mathfrak{b}}=\begin{bmatrix} \mu I+\zeta J & B_2 \\ \boldsymbol{0} & J \end{bmatrix} \text{ with } \eta^2+\zeta^2\neq 0. \end{equation*} Besides, it is elementary to check that \begin{equation*} \left\{ \begin{array}{ll} \det(\lambda I+\eta J-I) = 0 \Longleftrightarrow (\lambda,\eta)=(1,0) \\ \det\left((\mu I+\zeta J)^2+I\right) = 0 \Longleftrightarrow (\mu,\zeta)= (0,\pm 1). \end{array} \right. \end{equation*} Hence, in light of Lemma \ref{lemma35}, we shall split the rest of the proof into two cases as follows: \begin{enumerate} \item \textbf{Case 1: $\boldsymbol{A_1=I}$ and $\boldsymbol{B_1=\pm J}$.} If so, by the change of basis $x_4\rightarrow -x_4$ if necessary, we can assume that $B_1=J$. In other words, \begin{equation*} [\adg{x_1}]_{\mathfrak{b}}=\begin{bmatrix} I & A_2 \\ \boldsymbol{0} & I \end{bmatrix}, [\adg{x_2}]_{\mathfrak{b}}=\begin{bmatrix} J & B_2\\ \boldsymbol{0} & J \end{bmatrix}. \end{equation*} By the change of basis $x_5\rightarrow x_5+(B_2)_{12}x_3+(B_2)_{22}x_4$, we can assume $(B_2)_{12}=(B_2)_{22}=0$. If so, the commutation of $\adg{x_1}$ and $\adg{x_2}$ implies that \begin{equation*} (A_2)_{11}=(A_2)_{22}, (A_2)_{12}=-(A_2)_{21}. 
\end{equation*} In other words, we can assume that \begin{equation*} [\adg{x_1}]_{\mathfrak{b}}=\begin{bmatrix} 1 & 0 & \nu & \theta\\ 0 & 1 & -\theta & \nu\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}, [\adg{x_2}]_{\mathfrak{b}}=\begin{bmatrix} 0 & 1 & \chi & 0\\ -1 & 0 & \omega & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & -1 & 0 \end{bmatrix}. \end{equation*} Let us denote this Lie algebra by $L(\nu,\theta,\chi,\omega)$. Then, via the following change of basis: \begin{equation*} \left\{ \begin{array}{ll} x_3\rightarrow (\chi+\omega)x_3-(\chi-\omega)x_4 \\ x_4\rightarrow (\chi -\omega) x_3+(\chi+\omega)x_4 \\ x_5\rightarrow x_5-x_6+\chi x_3+\omega x_4 \\ x_6\rightarrow x_5+x_6 \end{array} \right. \quad \text{ (if } \chi^2+\omega^2\neq 0), \end{equation*} we easily see that \begin{equation}\label{1} L(\nu,\theta,\chi,\omega) \cong L(\nu,\theta,1,0) \quad \text{ (if } \chi^2+\omega^2\neq 0). \end{equation} Note that, by the change of basis $x_3 \rightarrow -x_3$ if necessary, we can assume that $\nu \geq 0$. Similarly, via the following change of basis: \begin{equation*} \left\{\begin{array}{ll} x_3\rightarrow \nu x_3-\theta x_4 \\ x_4\rightarrow \theta x_3+\nu x_4, \end{array}\right. \quad \text{ (if } \nu^2+\theta^2\neq 0), \end{equation*} we easily see that \begin{equation}\label{3} L(\nu,\theta,0,0) \cong L(1,0,0,0) \quad \text{ (if } \nu^2+\theta^2\neq 0). \end{equation} In summary, we conclude from the equations (\ref{1}) and (\ref{3}) that \begin{equation*} L(\nu,\theta,\chi,\omega) \cong \left\{\begin{array}{ll} L(0,0,0,0) & \text{if } \nu^2+\theta^2=\chi^2+\omega^2=0 \\ L(1,0,0,0) & \text{if } \nu^2+\theta^2\neq 0, \text{ and } \chi^2+\omega^2= 0 \\ L(\nu,\theta,1,0) \quad (\text{with } \nu\geq 0) & \text{if } \chi^2+\omega^2 \neq 0 \\ \end{array}\right. \end{equation*} Note that $L(1,0,0,0)$ and $L(\nu,\theta,1,0)$ (with $\nu\geq 0$) are isomorphic to $\mathfrak{s}_{6,211}$ and $\mathfrak{s}_{6,225}$ listed in \cite{Snob}, respectively, while $L(0,0,0,0)$ belongs to the family $\mathfrak{s}_{6,226}$ listed in \cite{Snob}. \item \textbf{Case 2: Either $\boldsymbol{A_1\neq I}$ or $\boldsymbol{B_1\neq \pm J}$.} If so, by Lemma \ref{lemma35}, we can assume that $A_2=B_2=\boldsymbol{0}$, that is, \begin{equation*} [\adg{x_1}]_\mathfrak{b}=\begin{bmatrix} \lambda I + \eta J & \boldsymbol{0}\\ \boldsymbol{0} & I \end{bmatrix}, [\adg{x_2}]_\mathfrak{b}=\begin{bmatrix} \mu I +\zeta J & \boldsymbol{0} \\ \boldsymbol{0} & J \end{bmatrix} \text{ with } \eta^2+\zeta^2\neq 0. \end{equation*} Let us denote the corresponding Lie algebra by $L(\lambda,\eta,\mu,\zeta)$. Then for any $F=a_1x_1^*+\cdots+a_6x_6^* \in\mathcal G^*$, we have \begin{equation*} P_{F}=\begin{bmatrix} \lambda a_3-\eta a_4 & \eta a_3+\lambda a_4 & a_5 & a_6 \\ \mu a_3 - \zeta a_4 & \zeta a_3+\mu a_4 & -a_6 & a_5 \end{bmatrix}. \end{equation*} Therefore, $\rank{(P_F)}=2$ for any $F\in\mathcal G^*$ with $F|_{\mathcal G^1}\neq 0$ if and only if $\lambda\zeta-\mu\eta \neq 0$. In other words, $L(\lambda,\eta,\mu,\zeta)$ is an $\MD{6}{4}$-algebra if and only if \begin{equation}\label{lambda} \lambda\zeta-\mu\eta \neq 0. \end{equation} Furthermore, by the following change of basis: \begin{equation*} \left\{\begin{array}{ll} x_1 \rightarrow & \frac{1}{\lambda\zeta-\mu\eta}(\zeta x_1-\eta x_2) \\ x_2 \rightarrow & \frac{1}{\lambda\zeta-\mu\eta}(-\mu x_1+\lambda x_2) \\ x_3 \leftrightarrow & x_5 \\ x_4 \leftrightarrow & x_6, \end{array}\right.
\end{equation*} we can see that \begin{equation}\label{condition} L(\lambda,\eta,\mu,\zeta)\cong L(\dfrac{\zeta}{\lambda\zeta-\mu\eta},-\dfrac{\eta}{\lambda\zeta-\mu\eta},-\dfrac{\mu}{\lambda\zeta-\mu\eta},\dfrac{\lambda}{\lambda\zeta-\mu\eta}). \end{equation} Similarly, by the change of basis $x_4\rightarrow -x_4$, we get \begin{equation}\label{eq413} L(\lambda,\eta,\mu,\zeta) \cong L(\lambda,-\eta,\mu,-\zeta); \end{equation} and by the following change of basis: \begin{equation*} \left\{ \begin{array}{lr} x_2 \rightarrow & -x_2\\ x_4 \rightarrow & - x_4\\ x_5 \rightarrow & x_6\\ x_6 \rightarrow & x_5 \end{array} \right., \end{equation*} we get \begin{equation}\label{eq411} L(\lambda,\eta,\mu,\zeta) \cong L(\lambda,-\eta,-\mu,\zeta). \end{equation} \begin{itemize} \item If $\eta=0$ then it follows from the equation (\ref{lambda}) that $\lambda\zeta \neq 0$. Hence, the equation (\ref{condition}) becomes \begin{equation}\label{s226} L(\lambda,0,\mu,\zeta) \cong L(\dfrac{1}{\lambda},0,\dfrac{-\mu}{\lambda\zeta},\dfrac{1}{\zeta}). \end{equation} By combining the equations (\ref{eq413}), (\ref{eq411}) and (\ref{s226}), we obtain \begin{equation*} L(\lambda,0,\mu,\zeta) \cong L(\lambda',0,\mu',\zeta') \end{equation*} where $0<\zeta'\leq 1$, $\mu'\geq 0$, $\lambda' \neq 0$; and if $\zeta'=1$ then $|\lambda'|\leq 1$. This class of MD-algebras coincides with the family $\mathfrak{s}_{6,226}$ in \cite{Snob}, except for some cases which are not MD-algebras. Hence, we also use the notation $\mathfrak{s}_{6,226}$ to denote this class. \item If $\eta\neq 0$ then, in the same manner, we obtain \begin{equation*} L(\lambda,\eta,\mu,\zeta) \cong L(\lambda',\eta',\mu',\zeta') \end{equation*} where $\lambda'\zeta'-\mu'\eta' >0$ and $\mu'\geq 0$. This class of MD-algebras coincides with the family $\mathfrak{s}_{6,228}$ in \cite{Snob}, except for some cases which are not MD-algebras. Hence, we also denote this class by $\mathfrak{s}_{6,228}$. The proof is completed. \end{itemize} \end{enumerate} \end{proof} \section[One-step solvable MD{n}{n-2}-algebras with low-dimensional derived algebras]{One-step solvable $\boldsymbol{\MD{n}{n-2}}$-algebras which have low-dimensional derived algebras}\label{section-4} In order to obtain a complete classification of 1-step solvable $\MD{n}{n-2}$-algebras, we need to solve the problem for $\dim \mathcal G^1 \leq 2$. The classification of Lie algebras which have low-dimensional derived algebras has been studied by T. Janisse \cite{Janisse10}, C. Sch\"obel \cite{Schobel93}, Vu A. L. et al. \cite{Vuthieu20}, F. Levstein \& A. L. Tiraboschi \cite{Levstein99}, and C.~Bartolone et al. \cite{Bartolone2011}. \begin{proposition}[\cite{Janisse10,Schobel93,Vuthieu20}] Let $\mathcal G$ be a real $n$-dimensional Lie algebra with $n\geq 5$. \begin{itemize} \item If $\dim\mathcal G^1\leq 2$ then $\mathcal G^1$ is commutative. \item If $\dim\mathcal G^1=1$ then $\mathcal G$ is a trivial extension of either $\textnormal{aff}(\mathbb{R})$ or $\mathfrak{h}_{2m+1}$ ($n\geq 2m+1, m \geq 1$). \item If $\dim\mathcal G^1=2$ and $\mathcal G^1$ is not completely contained in the centre $C(\mathcal G)$ of $\mathcal G$, then $\mathcal G$ is isomorphic to one of the following forms: \begin{itemize} \item[(i)] $\mathcal G_{5+2k}:=\langle x_1, x_2, \dots,x_{5+2k}\rangle \ (n=5+2k,k\in \mathbb{N})$ with $[x_3,x_4]=x_1$ and \begin{equation*} [x_3,x_1]=[x_4,x_5]=\cdots=[x_{4+2k},x_{5+2k}]=x_2.
\end{equation*} \item[(ii)] $\mathcal G_{6+2k,1}:=\langle x_1,x_2,\dots,x_{6+2k}\rangle \ (n=6+2k,k\in \mathbb{N})$ with $[x_3,x_1]=x_1$ and \begin{equation*} [x_3,x_4]=[x_5,x_6]=\cdots=[x_{5+2k},x_{6+2k}]=x_2. \end{equation*} \item[(iii)] $\mathcal G_{6+2k,2}:=\langle x_1,x_2,\dots,x_{6+2k}\rangle \ (n=6+2k,k\in \mathbb{N})$ with $[x_3,x_4]=x_1$ and \begin{equation*} [x_3,x_1]=[x_5,x_6]=\cdots=[x_{5+2k},x_{6+2k}]=x_2. \end{equation*} \item[(iv)] $\textnormal{aff}(\mathbb{R})\oplus\mathfrak{h}_{2m+1}$ ($m \geq 1$). \item[(v)] A trivial extension of one of Lie algebras listed above in (i), (ii), (iii) and (iv). \item[(vi)] A trivial extension of $\textnormal{aff}(\mathbb{R})\oplus\textnormal{aff}(\mathbb{R})$. \item[(vii)] A trivial extension of a Lie algebra $\mathcal H$ of dimension less than 5 such that $\dim\mathcal H^1=2$ and $\mathcal H^1$ is not contained in the centre of $\mathcal H$. \end{itemize} \end{itemize} \end{proposition} It is easy to see that $\mathcal G_{5+2k}, \mathcal G_{6+2k,1}$, $\mathcal G_{6+2k,2}$, $\textnormal{aff}(\mathbb{R})\oplus \mathfrak{h}_{2m+1}$ and any trivial extension of $\textnormal{aff}(\mathbb{R})\oplus \textnormal{aff}(\mathbb{R})$ listed above are not MD-algebras for every $k$. For example, $\mathcal G_{5+2k}$ has a coadjoint orbit of dimension 2 and a coadjoint orbit of dimension $4+2k$: \begin{equation*} \dim \Omega_{x_1^*} = 2, \dim \Omega_{x_2^*} = 4+2k. \end{equation*} \begin{corollary}\label{lemma-non2step} Let $\mathcal G$ be an $\MD{n}{n-2}$-algebra with $n\geq 6$. \begin{itemize} \item If $\dim\mathcal G^1=1$ then $\mathcal G$ is isomorphic to $\mathfrak{h}_{2m+1}\oplus \mathbb{R}$ where $m=\frac{n-2}{2}$. \item If $\left\{\begin{array}{ll} \dim\mathcal G^1=2 \\ \mathcal G^1\nsubseteq C(\mathcal G) \end{array}\right.$ then $\mathcal G$ is isomorphic to $\textnormal{aff}(\mathbb{C})\oplus \mathbb{R}^2$. \end{itemize} \end{corollary} Now, we will investigate the remaining case: \begin{equation*} \left\{\begin{array}{ll} \dim\mathcal G^1=2 \\ \mathcal G^1\subseteq C(\mathcal G) . \end{array}\right. \end{equation*} Firstly, it is easy to check that $\mathcal G^1\subseteq C(\mathcal G)$ if and only if $\mathcal G$ is 2-step nilpotent, i.e. $\mathcal G_2:=[[\mathcal G,\mathcal G],\mathcal G]$ is trivial (a 2-step nilpotent Lie algebra is also called a metabelian Lie algebra). Because $\mathcal G$ is 2-step nilpotent with $\dim\mathcal G^1=2$, there is a basis $\mathfrak{b}:=\{x_1,\dots,x_n\}$ of $\mathcal G$ such that $\mathcal G^1=\langle x_{n-1},x_n\rangle$ and $[x_i,x_{n-1}]=[x_i,x_n]=0$ for all $i$. Therefore, $\mathcal G$ determines a pair of $(n-2)\times (n-2)$ skew-symmetric matrices $(M,N)$ defined by \begin{equation}\label{def-MN} (M)_{ij}: = x_{n-1}^*([x_i,x_j]); (N)_{ij}: = x_{n}^*([x_i,x_j]). \end{equation} Since $\dim\mathcal G^1=2$, $M$ and $N$ are linearly independent in the sense that there is no $(0,0) \neq (\alpha,\beta)$ such that $\alpha M+\beta N=\boldsymbol 0$. The matrices $(M,N)$ are called the {\em associated matrices} of $\mathcal G$ with respect to the basis $\mathfrak{b}$ (we also say that $\mathcal G$ is associated by the matrices $(M,N)$ with respect to $\mathfrak{b}$). Conversely, Let $(M,N)$ be any pair of skew-symmetric matrices of size $(n-2)\times (n-2)$ which are linearly independent. 
Then we can define a Lie algebra $\mathcal G$ of dimension $n$ as follows: $\mathcal G$ is spanned by a basis $\{x_1, \dots, x_n\}$, and the Lie brackets are defined via that basis by \begin{equation*} \left\{\begin{array}{ll} [x_i,x_{n-1}] = [x_i,x_n] = 0 & 1 \leq i \leq n \\ {}[x_i,x_j] = (M)_{ij} x_{n-1}+(N)_{ij}x_n & 1 \leq i,j\leq n-2. \end{array}\right. \end{equation*} In 1999, F. Levstein \& A. L. Tiraboschi \cite{Levstein99} proved that two such 2-step nilpotent Lie algebras are isomorphic if and only if the vector spaces spanned by their associated matrices are congruent, as stated in the following proposition. \begin{proposition}\cite{Levstein99} Let $\mathcal G$ and $\mathcal G'$ be two 2-step nilpotent Lie algebras which have $\dim\mathcal G^1=\dim\mathcal G'^1=2$. Suppose that $\mathcal G$ and $\mathcal G'$ are associated (with respect to some bases) with $(M,N)$ and $(M',N')$ respectively. Then $\mathcal G$ is isomorphic to $\mathcal G'$ if and only if there is a nonsingular matrix $T$ so that \begin{equation*} T \cdot \langle M, N\rangle\cdot T^t = \langle M', N'\rangle. \end{equation*} \end{proposition} In particular, if the pencils $M-\rho N$ and $M'-\rho N'$ are strictly congruent, i.e. there is a nonsingular matrix $T$ (which does not depend on $\rho$) so that $T(M-\rho N)T^t=M'-\rho N'$, then their associated Lie algebras are isomorphic. Although the converse of the latter statement is not true in general, it is still useful for classifying Lie algebras in this paper. The classification (up to strict congruence) of pencils of complex/real matrices which are either symmetric or skew-symmetric was solved by R. C. Thompson \cite{Thompson91} (the skew-symmetric case was classified in \cite{Scharlau76}). Because we are concerned with real skew-symmetric matrices, we state his theorem only for pencils of real skew-symmetric matrices. \begin{proposition}\cite[Theorem 2]{Thompson91}\label{pencil} Let $A$ and $B$ be real skew-symmetric matrices. Then a simultaneous (real) congruence of $A$ and $B$ exists reducing $A-\rho B$ to a direct sum of types $m, \infty,\alpha$, and $\beta$, where \begin{equation*} \begin{array}{cc} m:=\begin{bmatrix} \boldsymbol 0 & L_{e}(\rho) \\ -L_e(\rho)^t & \boldsymbol 0 \end{bmatrix}, \infty: = \begin{bmatrix} \boldsymbol 0 & \Delta_f -\rho \Lambda_f \\ -\Delta_f+\rho\Lambda_f & \boldsymbol 0 \end{bmatrix}, \\ \alpha: =\begin{bmatrix} \boldsymbol 0 & (a -\rho) \Delta_g + \Lambda_g \\ (a+\rho)\Delta_g-\Lambda_g & \boldsymbol 0 \end{bmatrix}, \beta:=\begin{bmatrix} \boldsymbol 0 & \Gamma_g(\rho) \\ -\Gamma_g(\rho) & \boldsymbol 0 \end{bmatrix} \end{array} \end{equation*} with \begin{equation*} L_e(\rho):= \begin{bmatrix} \rho & \\ 1 & \ddots & \\ & \ddots & -\rho \\ & & 1 \end{bmatrix}_{(e+1)\times e}, \Delta_f: = \begin{bmatrix} & & 1 \\ & \iddots \\ 1 \end{bmatrix}_{f\times f},\Lambda_f: = \begin{bmatrix} & & & 0\\ & & \iddots & 1\\ &\iddots & \iddots\\ 0 & 1 \end{bmatrix}_{f\times f}, \end{equation*} and \begin{equation*} \Gamma_g(\rho) := \begin{bmatrix} \boldsymbol 0 & \begin{bmatrix} & & & R \\ & &\iddots& S \\ & \iddots & \iddots \\ R & S \end{bmatrix} \\ \begin{bmatrix} & & & R \\ & &\iddots& S \\ & \iddots & \iddots \\ R & S \end{bmatrix} & \boldsymbol 0 \end{bmatrix}_{g\times g}, R: = \begin{bmatrix} c & d-\rho \\ d-\rho & -c \end{bmatrix}, S: = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \end{equation*} for some $a,c,d\in\mathbb R$ with $c\neq 0$.
\end{proposition} We can now return to the problem of classification of such 2-step nilpotent MD-algebras. According to Proposition \ref{rankbf}, $\dim\Omega_{F}=\rank \left(F([x_i,x_j])\right)_{n\times n}$ for every $0\neq F:=\lambda x_{n-1}^*+\mu x_n^* \in\mathcal G^*$. Hence, $\mathcal G$ is an $\MD{n}{k}$-algebra if and only if $\rank{(\lambda M+\mu N)} = k$ for every $(0,0)\neq (\lambda,\mu)\in\mathbb R^2$. Moreover, the type $\beta$ is the unique nonsingular type among the types $m,\infty,\alpha,\beta$ in the sense that every non-zero matrix of the type $\beta$ is nonsingular. This proves the following proposition. \begin{proposition} Let $\mathcal G$ be a 2-step nilpotent $\MD{n}{n-2}$-algebra such that $\dim\mathcal G^1=2$. Then there is a basis $\mathfrak{b}:=\{x_1, \dots, x_n\}$ of $\mathcal G$ so that $[x_i,x_{n-1}]=[x_i,x_n]=0$ for every $i$ and the associated pencil of $\mathcal G$ with respect to $\mathfrak{b}$ is equal to a direct sum of matrices of the form $\beta$ defined in Proposition \ref{pencil}. \end{proposition} \begin{corollary} If $\mathcal G$ is a 2-step nilpotent $\MD{n}{n-2}$-algebra which has $\dim\mathcal G^1=2$ then $n-2$ is divisible by $4$. \end{corollary} \begin{proof} It is straightforward from the fact that the type $\beta$ is of the size $(2g)\times (2g)$ where 2 divides $g$. \end{proof} Now, we will give illustrations for $n=6$ and $n=10$. \begin{itemize} \item Let $n=6$. Then there is a basis $\{x_1,x_2,\dots,x_6\}$ of $\mathcal G_6$ such that $\mathcal G_6^1=\langle x_5,x_6\rangle$ and \begin{equation*} \left(x_5^*([x_i,x_j])\right)_{4\times 4} = \begin{bmatrix} 0 & 0 & b & a \\ 0 & 0 & a & -b\\ -b & -a & 0 & 0 \\ -a & b & 0 & 0 \end{bmatrix}, \quad \left( x_6^*([x_i,x_j])\right)_{4\times 4} = \begin{bmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0\\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \end{equation*} for some non-zero $b\in\mathbb{R}$. By applying the change of basis: \begin{equation*} \left\{ \begin{array}{ll} x_5 \rightarrow bx_5 \\ x_6\rightarrow ax_5-x_6, \end{array} \right. \end{equation*} we can assume $a=0$ and $b=1$. This Lie algebra is denoted as $\mathfrak{n}_{6,3}$ in \cite{Snob}. \item Let $n=10$. Then there is a basis $\{x_1,x_2,\dots,x_{10}\}$ of $\mathcal G$ such that $\mathcal G^1=\langle x_9,x_{10}\rangle$ and the associated pencil $M-\rho N:=\left(x_9^*([x_i,x_j])\right)_{8\times 8} - \rho\left(x_{10}^*([x_i,x_j])\right)_{8\times 8}$ is either a direct sum of two $4\times 4$ blocks of the type $\beta$ or just an $8\times 8$ matrix of the type $\beta$. Hence, we have either \begin{equation*} M -\rho N = \begin{bmatrix} 0 & 0 & b_1 & a_1 - \rho\\ 0 & 0 & a_1 -\rho & -b_1\\ -b_1 & -a_1+\rho & 0 & 0 \\ -a_1+\rho & b_1 & 0 & 0 \\ &&&& 0 & 0 & b_2 & a_2 - \rho \\ &&&& 0 & 0 & a_2 -\rho & -b_2\\ &&&&-b_2 & -a_2-\rho & 0 & 0 \\ &&&& -a_2-\rho & b_2 & 0 & 0 \end{bmatrix} \end{equation*} or \begin{equation*} M -\rho N = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & b_1 & a_1 - \rho\\ 0 & 0 & 0 & 0 & 0 & 0 & a_1 -\rho & -b_1\\ 0 & 0 & 0 & 0 & b_1 & a_1-\rho & 0 & 1 \\ 0 & 0 & 0 & 0 & a_1-\rho & -b_1 & 1 & 0 \\ 0 & 0 & -b_1 & -a_1 - \rho & 0 & 0 & 0 & 0\\ 0 & 0 & -a_1 -\rho & b_1 & 0 & 0 & 0 & 0\\ -b_1 & -a_1-\rho & 0 & -1 & 0 & 0 & 0 & 0\\ -a_1-\rho & b_1 & -1 & 0 & 0 & 0 & 0 & 0\\ \end{bmatrix} \end{equation*} for some non-zero $b_1,b_2\in\mathbb{R}$. 
Equivalently, $\mathcal G$ is isomorphic to one of the following forms: \begin{itemize} \item[(i)] $\mathcal G_{10,1}(a_1,b_1,a_2,b_2) := \langle x_1, x_2, \dots,x_{10}\rangle $ with $[x_i,x_9]=[x_i,x_{10}]=0$ for all $i$ and \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_7$ & $x_8$\\ \hline $x_1$ & 0 & $b_1x_9$ & $a_1x_9-x_{10}$ & 0 & 0 & 0 & 0\\ \hline $x_2$ & & $a_1x_9-x_{10}$ & $-b_1x_9$ & 0 & 0 & 0 & 0\\ \hline $x_3$ & & & 0 & 0 & 0 & 0 & 0\\ \hline $x_4$ & & & & 0 & 0 & 0 & 0\\ \hline $x_5$ & & & & & 0 & $b_2x_9$ & $a_2x_9-x_{10}$\\ \hline $x_6$ & & & & & & $a_2x_9-x_{10}$ & $-b_2x_9$\\ \hline $x_7$ & & & & & & & 0\\ \hline \end{tabular} \\ $(b_1b_2\neq 0)$ \end{center} If so, by the change of basis: \begin{equation*} x_i\leftrightarrow x_{i+4}: i \in \{1,2,3,4\}, \end{equation*} we easily see that \begin{equation}\label{first} \mathcal G_{10,1}(a_1,b_1,a_2,b_2) \cong \mathcal G_{10,1}(a_2,b_2,a_1,b_1). \end{equation} Similarly, by the following change of basis: \begin{equation*} \left\{\begin{array}{ll} x_{10} & \rightarrow -a_1x_9+x_{10}\\ x_9 & \rightarrow b_1x_9, \end{array}\right. \end{equation*} we obtain \begin{equation}\label{second} \mathcal G_{10,1}(a_1,b_1,a_2,b_2) \cong \mathcal G_{10,1}(0,1,\dfrac{a_2-a_1}{b_1},\dfrac{b_2}{b_1}). \end{equation} We conclude from the isomorphism (\ref{first}) that we always can assume $0<|b_2|\leq |b_1|$, and from the isomorphism (\ref{second}) that $a_1=0, b_1=1$, i.e., \begin{equation*} \mathcal G_{10,1}(a_1,b_1,a_2,b_2) \cong \mathcal G_{10,1}(0,1,\mu,\lambda) \quad (0<|\lambda|\leq 1) \end{equation*} \item[(ii)] $\mathcal G_{10,2}(a_1,b_1):= \langle x_1, x_2, \dots,x_{10}\rangle $ with $[x_i,x_9]=[x_i,x_{10}]=0$ for all $i$ and \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_7$ & $x_8$\\ \hline $x_1$ & 0 & 0 & 0 & 0 & 0 & $b_1x_9$ & $a_1x_9-x_{10}$\\ \hline $x_2$ & & 0 & 0 & 0 & 0 & $a_1x_9-x_{10}$ & $-b_1x_9$\\ \hline $x_3$ & & & 0 & $b_1x_9$ & $a_1x_9-x_{10}$ & 0 & $x_9$\\ \hline $x_4$ & & & & $a_1x_9-x_{10}$ & $-b_1x_9$ & $x_9$ & 0\\ \hline $x_5$ & & & & & 0 & 0 & 0\\ \hline $x_6$ & & & & & & 0 & 0\\ \hline $x_7$ & & & & & & & 0\\ \hline \end{tabular} \\ $(b_1\neq 0)$ \end{center} \end{itemize} By the change of basis: $x_{10} \rightarrow a_1x_9-x_{10}$, we easily see that \begin{equation*} \mathcal G_{10,2}(a_1,b_1) \cong \mathcal G_{10,2}(0,\lambda) \quad (\lambda \neq 0). \end{equation*} \end{itemize} In summary, we have proven the following theorem. \begin{theorem}\label{theorem62} Let $\mathcal G$ be an $\MD{n}{n-2}$-algebra of dimension $n\geq 6$ with $\dim\mathcal G^1\leq 2$. \begin{enumerate} \item If $\mathcal G$ is not 2-step nilpotent, i.e. $[\mathcal G,\mathcal G^1]\neq 0$, then $\mathcal G$ is either isomorphic to $\mathbb{R}^2\oplus\textnormal{aff}(\mathbb{C})$ or isomorphic to $\mathbb{R}\oplus \mathfrak{h}_{2m+1}$ where $2m=n-2$. \item If $\mathcal G$ is a 2-step nilpotent Lie algebra then $n=4k+2$ for some $k\in\mathbb N$, and the associated pencil of $\mathcal G$ is a direct sum of type $\beta$. \item If $n=6$ then $\mathcal G$ is isomorphic to $\mathfrak{n}_{6,1}$ defined in \cite{Snob}. \item If $n=10$ then $\mathcal G$ is isomorphic to one of the following families: $\mathcal G_{10,1}(0,1,\mu,\lambda) \ (0<|\lambda|\leq 1)$ and $\mathcal G_{10,2}(0,\lambda) \ (\lambda \neq 0)$. 
\end{enumerate} \end{theorem} \section[Two-step solvable MD{n}{n-2}-algebras]{Two-step solvable $\boldsymbol{\MD{n}{n-2}}$-algebras}\label{section-5} Finally, to complete the classification of $\MD{n}{n-2}$-algebras, we only need to classify 2-step solvable $\MD{n}{n-2}$-algebras. Surprisingly, such a Lie algebra is decomposable and has dimension exactly 6. \begin{theorem}\label{main-noncommutative} Let $\mathcal G$ be a 2-step solvable real Lie algebra whose non-trivial coadjoint orbits are all of codimension 2. Then $\mathcal G$ is isomorphic to $\mathbb{R}\oplus \mathfrak{s}_{5,45}$. \end{theorem} \begin{proof} Recall that for every $x,y,z\in\mathcal G$, we have \begin{equation*} \left[[x,y],z\right] = \left[x,[y,z]\right]-\left[y,[x,z]\right]. \end{equation*} It follows that \begin{equation*} \ad{x}\ad{y}-\ad{y}\ad{x}=\ad{[x,y]}. \end{equation*} Hence, for every $x\in \mathcal G^1$, we have \begin{equation}\label{commutative} \text{trace}(\ad{x})=\text{trace}(\adg{x})=\text{trace}(\adgg{x})=0. \end{equation} According to Theorem 3.5 in \cite{HHV21}, $1\leq \dim\mathcal G^2\leq 2$. Therefore, we will divide the proof into two cases: \begin{itemize} \item \textbf{Case 1: $\boldsymbol{\dim\mathcal G^2=2}$.} If so, $\mathcal H:=\mathcal G/\mathcal G^2$ is a 1-step solvable Lie algebra whose non-trivial coadjoint orbits are all of the same dimension as $\mathcal H$ \cite[Theorem 3.5]{HHV21}. In other words, $\mathcal{H}$ is an $\MD{n}{n}$-algebra. According to Proposition \ref{sonviet}, $\mathcal H$ is isomorphic to either $\textnormal{aff}(\mathbb{R})$ or $\textnormal{aff}(\mathbb{C})$. Since $\dim\mathcal G\geq 6$, $\mathcal H \cong \textnormal{aff}(\mathbb{C})$. This implies the existence of a basis $\mathfrak{b}:=\{x_1,x_2,y_1,y_2,z_1,z_2\}$ of $\mathcal G$ such that: \begin{align*} \mathcal G^1= & \langle y_1,y_2,z_1,z_2\rangle, \quad \mathcal G^2= \langle z_1,z_2 \rangle\\ \mathcal H = & \langle \overline{x_1}, \overline{x_2},\overline{y_1},\overline{y_2}\rangle \cong \text{aff}(\mathbb{C}), \end{align*} where \begin{align*} [\overline{x_1},\overline{y_1}]=\overline{y_1}, [\overline{x_1},\overline{y_2}]=\overline{y_2} \text{ and } [\overline{x_2},\overline{y_1}]=\overline{y_2}, [\overline{x_2},\overline{y_2}]=-\overline{y_1}. \end{align*} Since $\mathcal G^1$ and $\mathcal G^2$ are both ideals of $\mathcal G$, the Lie brackets in $\mathcal G$ can be determined as follows: \begin{equation*} \begin{array}{|c|c|c|c|c|c|c|} \hline & x_1 & x_2 & y_1 & y_2 & z_1 & z_2\\ \hline x_1 & 0 & \lambda_1z_1+\lambda_2z_2 & y_1+\lambda_3z_1+\lambda_4z_2 & y_2+\lambda_5z_1+\lambda_6z_2 & \lambda_7 z_1+\lambda_8z_2 & \lambda_9z_1+\lambda_{10}z_2\\ \hline x_2 & & 0 & y_2+\lambda_{11}z_1+\lambda_{12}z_2 & -y_1+\lambda_{13}z_1+\lambda_{14}z_2 & \lambda_{15} z_1+\lambda_{16}z_2 & \lambda_{17}z_1+\lambda_{18}z_2\\ \hline y_1 & & & 0 & \lambda_{19}z_1+\lambda_{20}z_2 & \lambda_{21} z_1+\lambda_{22}z_2 & \lambda_{23}z_1+\lambda_{24}z_2\\ \hline y_2 & & & & 0 & \lambda_{25} z_1+\lambda_{26}z_2 & \lambda_{27}z_1+\lambda_{28}z_2\\ \hline z_1 & & & & & 0 & 0\\ \hline \end{array} \end{equation*} Since $\mathcal G^2$ is commutative, it follows directly from the Jacobi identity that $\adgg{y_1}\adgg{y_2}=\adgg{y_2}\adgg{y_1}$. By Proposition \ref{triangular}, we can assume that $[\adgg{y_1}]_{\mathfrak{b}}$ and $[\adgg{y_2}]_{\mathfrak{b}}$ are both either of diagonal form or of the form $aI+bJ$.
Without loss of generality, we can assume that \begin{equation*} \text{either} \quad \left\{ \begin{array}{ll} [\adgg{y_1}]_\mathfrak{b} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} \\ {}[\adgg{y_2}]_\mathfrak{b} = \begin{bmatrix} c & 0 \\ 0 & d \end{bmatrix} \end{array} \right. \quad \text{or} \quad \left\{ \begin{array}{ll} [\adgg{y_1}]_\mathfrak{b} = \begin{bmatrix} a & b \\ -b & a \end{bmatrix} \\ {}[\adgg{y_2}]_\mathfrak{b} = \begin{bmatrix} c & d \\ -d & c \end{bmatrix}. \end{array} \right. \end{equation*} Moreover, it follows from the equation (\ref{commutative}) that \begin{equation*} \textnormal{trace}(\adgg{y_1})=\textnormal{trace}(\adgg{y_2}) =0. \end{equation*} It turns out that \begin{equation*} \text{either} \quad \left\{ \begin{array}{ll} [\adgg{y_1}]_\mathfrak{b} = \begin{bmatrix} a & 0 \\ 0 & -a \end{bmatrix} \\ {}[\adgg{y_2}]_\mathfrak{b} = \begin{bmatrix} c & 0 \\ 0 & -c \end{bmatrix} \end{array} \right. \quad \text{or} \quad \left\{ \begin{array}{ll} [\adgg{y_1}]_\mathfrak{b} = \begin{bmatrix} 0 & b \\ -b & 0 \end{bmatrix} \\ {}[\adgg{y_2}]_\mathfrak{b} = \begin{bmatrix} 0 & d \\ -d & 0 \end{bmatrix}. \end{array} \right. \end{equation*} In both cases, there is $(0,0) \neq (\lambda,\mu)\in\mathbb{R}^2$ so that $\lambda \adgg{y_1}+\mu\adgg{y_2}=\boldsymbol 0$. Now, by applying the Jacobi identity to $(x_2,\lambda y_1+\mu y_2,z)$ for any $z\in\mathcal G^2$, we easily see that \begin{equation*} \boldsymbol 0=\adgg{x_2}\adgg{\lambda y_1+\mu y_2}-\adgg{\lambda y_1+\mu y_2}\adgg{x_2}=\adgg{[x_2,\lambda y_1+\mu y_2]}= -\mu\adgg{y_1}+\lambda \adgg{y_2}. \end{equation*} Therefore, \begin{equation*} \lambda \adgg{y_1}+\mu\adgg{y_2} = -\mu\adgg{y_1}+\lambda \adgg{y_2}=\boldsymbol 0. \end{equation*} This clearly forces $\adgg{y_1}=\adgg{y_2}=\boldsymbol 0$, and consequently $\mathcal G^2$ is spanned by $\{[y_1,y_2]\}$, a contradiction to $\dim\mathcal G^2=2$. Hence, this case is excluded. \item \textbf{Case 2: $\boldsymbol{\dim\mathcal G^2=1}$.} If so, $\mathcal H:=\mathcal G/\mathcal G^2$ is a 1-step solvable Lie-algebra whose non-zero coadjoint orbits are of codimension 1. It follows from Proposition \ref{pro210} that $\mathcal H$ is isomorphic to one of the followings: $\mathfrak{h}_{2m+1}$, $\mathbb{R}\oplus \textnormal{aff}(\mathbb{C})$. Furthermore, if $\mathcal{H}\cong \mathfrak{h}_{2m+1}$ then $\dim\mathcal G^1=2$ and $\dim\mathcal G^2=1$. This is impossible because $\mathcal G^1$ is nilpotent. Hence, $\mathcal{H}\cong \mathbb{R}\oplus \textnormal{aff}(\mathbb{C})$. Equivalently, we can fix a basis $\{x_1,x_2,y_1,y_2,y_3,z\}$ of $\mathcal G$ so that \begin{equation*} \left\{ \begin{array}{ll} \mathcal G^1= & \langle y_1,y_2,y_3\rangle, \quad \mathcal G^2= \langle z \rangle, \\ \mathcal H = & \langle\overline{y_3}\rangle\oplus\langle \overline{x_1}, \overline{x_2},\overline{y_1},\overline{y_2}\rangle \end{array} \right. \end{equation*} where the Lie brackets in $\mathcal H$ are the same as those in $\mathbb{R}\oplus \textnormal{aff}(\mathbb{C})$, i.e. \begin{align*} [\overline{x_1},\overline{y_1}]=\overline{y_1}, [\overline{x_1},\overline{y_2}]=\overline{y_2} \text{ and } [\overline{x_2},\overline{y_1}]=\overline{y_2}, [\overline{x_2},\overline{y_2}]=-\overline{y_1}. 
\end{align*} This implies that the Lie brackets in $\mathcal G$ must have the form \[\begin{array}{|c|c|c|c|c|c|c|c|} \hline & x_1 & x_2 & y_1 & y_2 & y_3 & z\\ \hline x_1 & & \lambda_1z & y_1+\lambda_2z & y_2+\lambda_3z & \lambda_4 z & \lambda_5z\\ \hline x_2 & & & y_2+\lambda_6z & -y_1+\lambda_7z & \lambda_8 z & \lambda_9z\\ \hline y_1 & & & & \lambda_{10}z & \lambda_{11} z & \lambda_{12}z\\ \hline y_2 & & & & & \lambda_{13} z & \lambda_{14}z\\ \hline y_3 & & & & & & \lambda_{15}z\\ \hline \end{array}\] If so, it follows from the equation (\ref{commutative}) that \begin{equation*} \lambda_{12}=\lambda_{14}=0. \end{equation*} This means $[y_1,z]=[y_2,z]=0$. Because $\mathcal G^2\neq \{0\}$, we must have $\lambda_{10}\neq 0$. By the change of basis $z\rightarrow \dfrac{1}{\lambda_{10}}z$, we may assume $\lambda_{10}=1$. Now, by applying the Jacobi identity to the triples $(x_1,y_1,y_2)$; $(x_2,y_1,y_2)$; $(y_1,y_2,y_3)$; $(x_1,x_2,y_3)$; $(x_1,y_1,y_3)$; and $(x_1,y_2,y_3)$; we obtain \begin{equation*} \lambda_{5}=2, \lambda_{9}=\lambda_{15}=2\lambda_8+\lambda_1\lambda_{15}=\lambda_{11}+\lambda_2\lambda_{15}=\lambda_{13}+\lambda_3\lambda_{15}=0. \end{equation*} Hence, \begin{equation*} \lambda_5=2, \lambda_{8}=\lambda_9=\lambda_{11}=\lambda_{12}=\lambda_{13}=\lambda_{14}=\lambda_{15}=0. \end{equation*} By the change of basis $y_3\rightarrow y_3-\dfrac{\lambda_4}{2}z$ if necessary, we see that $\mathcal G$ is decomposable. In other words, $\mathcal G$ is isomorphic to a direct sum of $\mathbb{R}$ with a Lie algebra $\mathcal G'$. Since $\mathcal G$ is 2-step solvable, so is $\mathcal G'$. Furthermore, non-zero coadjoint orbits of $\mathcal G'$ and $\mathcal G$ have the same dimension \cite[Theorem 3.1]{HHV21}. In other words, $\mathcal G'$ is a 2-step solvable MD-algebra whose non-trivial coadjoint orbits are all of codimension 1. According to Proposition \ref{pro210}, $\mathcal G'$ must be isomorphic to $\mathfrak{s}_{5,45}$. Equivalently, $\mathcal G$ is isomorphic to $\mathbb{R}\oplus\mathfrak{s}_{5,45}$. This completes the proof. \end{itemize} \end{proof} \section{Concluding Remarks} In summary, this paper has presented the classification of the class of $\MD{n}{n-2}$-algebras with $2 \leq n \in \mathbb{N}$. There are 14 different $\MD{n}{n-2}$-algebras (up to isomorphism) of dimension $n < 5$; they are listed in Table \ref{table:1}. The subclass of all 2-step nilpotent $\MD{n}{n-2}$-algebras with $n \geq 6$ is classified by canonical forms of the associated pencils of matrices; those of dimension $n \leq 10$ are listed in Table \ref{table:3}. The remaining subclass of $\MD{n}{n-2}$-algebras is classified (up to isomorphism) and listed in Table \ref{table:2}. In the following tables, $\{x_1,x_2, \dots, x_n \}$ is used to denote a basis of the corresponding $\MD{n}{n-2}$-algebra $\mathcal G$.
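As an illustrative sanity check (not part of the classification proofs), the MD property of any algebra listed below can be tested directly from its structure constants, since $\dim\Omega_{F}=\rank \left(F([x_i,x_j])\right)_{n\times n}$ for every $F\in\mathcal G^*$. The following minimal sketch, which assumes the Python library SymPy, carries this out for $\textnormal{aff}(\mathbb{C})\oplus\mathbb{R}^2$ of Table \ref{table:2}; the same recipe applies to the other entries once their brackets are entered.
\begin{verbatim}
# Illustrative check (SymPy): for aff(C) + R^2 with the brackets of
# Table 2, namely [x3,x1]=-x2, [x3,x2]=x1, [x4,x1]=x1, [x4,x2]=x2,
# every F with F|_{G^1} != 0 satisfies rank(F([x_i,x_j])) = 4 = n-2.
import sympy as sp

n = 6
bracket = {}            # bracket[(i,j)] = [(c,k)] meaning [x_i,x_j] = sum c*x_k
def setbr(i, j, terms):
    bracket[(i, j)] = terms
    bracket[(j, i)] = [(-c, k) for (c, k) in terms]

setbr(3, 1, [(-1, 2)]); setbr(3, 2, [(1, 1)])
setbr(4, 1, [(1, 1)]);  setbr(4, 2, [(1, 2)])

a = sp.symbols('a1:7')  # F = a1*x1^* + ... + a6*x6^*
def PF(vals):           # the skew-symmetric matrix (F([x_i,x_j]))_{n x n}
    M = sp.zeros(n, n)
    for (i, j), terms in bracket.items():
        M[i - 1, j - 1] = sum(c * vals[k - 1] for (c, k) in terms)
    return M

print(sp.factor(PF(a)[:4, :4].det()))  # (a1**2 + a2**2)**2
print(PF((1, 2, 3, 4, 5, 6)).rank())   # 4, i.e. dim(Omega_F) = n - 2
print(PF((0, 0, 3, 4, 5, 6)).rank())   # 0, i.e. the trivial orbit
\end{verbatim}
Together with the fact that only the first four rows and columns of $\left(F([x_i,x_j])\right)$ can be non-zero for this algebra, the printed factorization confirms that every non-trivial coadjoint orbit has dimension $4=n-2$.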
\begin{table}[!ht] \centering {\small \caption{List of all $\MD{n}{n-2}$-algebras with $n=2,4$.} \label{table:1} \begin{tabular}{l l l l} \hline $n$ & Algebras & Non-trivial Lie brackets & Notes \\ \hline 2 & $\mathbb{R}^2$ & - & \\ \hline 4 & $\mathfrak{n}_{4,1}$ & $[x_2,x_4]=x_1, [x_3,x_4]=x_2$ & \\ & $\mathfrak{s}_{4,1}$ & $[x_4,x_2]=x_1, [x_4,x_3]=x_3$ & \\ & $\mathfrak{s}_{4,2}$ & $[x_4,x_1]=x_1, [x_4,x_2]=x_1+x_2, [x_4,x_3]=x_2+x_3$ & \\ & $\mathfrak{s}_{4,3}$ & $[x_4,x_1]=x_1, [x_4,x_2]=\alpha x_2, [x_4,x_3]=\beta x_3$ & $0<|\beta|\leq |\alpha| \leq 1$, $(\alpha,\beta) \neq (-1,-1)$ \\ & $\mathfrak{s}_{4,4}$ & $[x_4,x_1]=x_1, [x_4,x_2]=x_1+x_2, [x_4,x_3]=\alpha x_3$ & $\alpha \neq 0$ \\ & $\mathfrak{s}_{4,5}$ & $[x_4,x_1]=\alpha x_1, [x_4,x_2]=\beta x_2-x_3, [x_4,x_3]=x_2+\beta x_3$ & $\alpha > 0$ \\ & $\mathfrak{s}_{4,6}$ & $[x_4,x_2]=x_2, [x_4,x_3]=-x_3$ & \\ & $\mathfrak{s}_{4,7}$ & $[x_4,x_2]=-x_3, [x_4,x_3]=x_2$ & \\ & $\textnormal{aff}(\mathbb{R}) \oplus \mathbb{R}^2$ &$[x_1,x_2]=x_2$ & \\ & $\mathfrak{n}_{3,1} \oplus \mathbb{R}$ &$[x_2,x_3]=x_1$ & \\ & $\mathfrak{s}_{3,1} \oplus \mathbb{R}$ &$[x_3,x_1]=x_1, [x_3,x_2]=\alpha x_2$ & $0< |\alpha| \leq 1$\\ & $\mathfrak{s}_{3,2} \oplus \mathbb{R}$ &$[x_3,x_1]=x_1, [x_3,x_2]=x_1+x_2$ & \\ & $\mathfrak{s}_{3,3} \oplus \mathbb{R}$ &$[x_3,x_1]=\alpha x_1-x_2, [x_3,x_2]=x_1+\alpha x_2$ & $\alpha \geq 0$ \\ \hline \end{tabular} } \end{table} \begin{table}[!ht] \centering {\small \caption{List of all $\MD{n}{n-2}$-algebras with $n \geq 6$ which are not 2-step nilpotent.} \label{table:2} \begin{tabular}{l l l l} \hline $\dim \mathcal G^1$ & Algebras & Non-trivial Lie brackets & Notes \\ \hline 1& $ \mathfrak{h}_{2m+1} \oplus \mathbb{R}$ & $[x_i,x_{m+i}]=x_{2m+1}$ $\forall i=1,\dots,m$ & $2m=n-2$ \\ \hline 2& $ \textnormal{aff}(\mathbb{C}) \oplus \mathbb{R}^2$ & $[x_3,x_1]=-x_2, [x_3,x_2]=[x_4,x_1]=x_1, [x_4,x_2]=x_2$ & \\ \hline $\geq 3$ & $\mathfrak{s}_{6,211}$ & \begin{tabular}{|c|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_3$ & $x_4$ & $x_5$ & $x_6$\\ \hline $x_1$ & $x_3$ & $x_4$ & $x_5+x_3$ & $x_6+x_4$\\ \hline $x_2$ & $-x_4$ & $x_3$ & $-x_6$ & $x_5$ \\ \hline \end{tabular} & \\ & $\mathfrak{s}_{6,225}(\nu,\theta)$ & \begin{tabular}{|c|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_3$ & $x_4$ & $x_5$ & $x_6$\\ \hline $x_1$ & $x_3$ & $x_4$ & $x_5+\nu x_3-\theta x_4$ & $x_6+\theta x_3+\nu x_4$\\ \hline $x_2$ & $-x_4$ & $x_3$ & $-x_6+x_3$ & $x_5$ \\ \hline \end{tabular} & $\nu\geq 0$ \\ & $\mathfrak{s}_{6,226}(\lambda,\mu,\zeta)$ & \begin{tabular}{|c|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_3$ & $x_4$ & $x_5$ & $x_6$\\ \hline $x_1$ & $\lambda x_3$ & $\lambda x_4$ & $x_5$ & $x_6$\\ \hline $x_2$ & $\mu x_3-\zeta x_4$ & $\zeta x_3+\mu x_4$ & $-x_6$ & $x_5$\\ \hline \end{tabular} & $\begin{cases} \lambda \neq 0,\mu\geq 0, 0<\zeta \leq 1 \\ \mbox{if } \zeta=1 \mbox{ then } |\lambda| \leq 1 \end{cases} $ \\ & $\mathfrak{s}_{6,228}(\lambda,\mu,\eta,\zeta)$ & \begin{tabular}{|c|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_3$ & $x_4$ & $x_5$ & $x_6$\\ \hline $x_1$ & $\lambda x_3-\eta x_4$ & $\eta x_3+\lambda x_4$ & $x_5$ & $x_6$\\ \hline $x_2$ & $\mu x_3-\zeta x_4$ & $\zeta x_3+\mu x_4$ & $-x_6$ & $x_5$ \\ \hline \end{tabular} & $\lambda\zeta-\mu\eta>0, \mu\geq 0$ \\ & $\mathfrak{s}_{5,45} \oplus \mathbb{R}$ & \begin{tabular}{|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_1$ & $x_2$ & $x_3$\\ \hline $x_2$ & 0 & 0 & $x_1$\\ \hline $x_4$ & $2x_1$ & $x_2$ & $x_3$\\ \hline $x_5$ & 0 & $x_3$ & $-x_2$ \\ \hline \end{tabular} & \\ \hline \end{tabular} } \end{table} \begin{table}[!ht] 
\centering {\small \caption{List of all 2-step nilpotent $\MD{n}{n-2}$-algebras with $6 \leq n \leq 10$.} \label{table:3} \begin{tabular}{l l l l} \hline $n$ & Algebras & Non-trivial Lie brackets & Notes \\ \hline 6 & $\mathfrak{n}_{6,1}$ & $[x_4,x_5]=x_2, [x_4,x_6]=x_3, [x_5,x_6]=x_1$ & \\ \hline 8 & There is no $\MD{8}{6}$-algebra \\ \hline 10 & $\mathcal G_{10,1}(0,1,\mu,\lambda)$ & \begin{tabular}{|c|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_3$ & $x_4$ & $x_7$ & $x_8$\\ \hline $x_1$ & $x_9$ & $-x_{10}$ & 0 & 0\\ \hline $x_2$ & $-x_{10}$ & $-x_9$ & 0 & 0 \\ \hline $x_5$ & 0 & 0 & $\lambda x_9$ & $\mu x_9-x_{10}$ \\ \hline $x_6$ & 0 & 0 & $\mu x_9-x_{10}$ & $-\lambda x_9$ \\ \hline \end{tabular} & $0<|\lambda|\leq 1$\\ & $\mathcal G_{10,2}(0,\lambda)$ & \begin{tabular}{|c|c|c|c|c|} \hline $[\cdot,\cdot]$ & $x_5$ & $x_6$ & $x_7$ & $x_8$\\ \hline $x_1$ & 0 & 0 & $\lambda x_9$ & $-x_{10}$\\ \hline $x_2$ & 0 & 0 & $-x_{10}$ & $-\lambda x_9$ \\ \hline $x_3$ & $\lambda x_9$ & $-x_{10}$& 0 & $x_9$ \\ \hline $x_4$ & $-x_{10}$ & $-\lambda x_9$ & $x_9$ & 0 \\ \hline \end{tabular} & $\lambda \neq 0$\\ \hline \end{tabular} } \end{table} {\bf Acknowledgment.} This research is funded by University of Economics and Law, Vietnam National University Ho Chi Minh City / VNU-HCM. A part of this paper was done during the visit of Hieu V. Ha and Vu A. Le to Vietnam Institute for Advanced Study in Mathematics (VIASM) in summer 2022. They are very grateful to VIASM for the support and hospitality. \end{document}
\begin{document} \begin{abstract} We introduce a new family of noncommutative analogues of the Hall-Littlewood symmetric functions. Our construction relies upon Tevlin's bases and simple $q$-deformations of the classical combinatorial Hopf algebras. We connect our new Hall-Littlewood functions to permutation tableaux, and also give an exact formula for the $q$-enumeration of permutation tableaux of a fixed shape. This gives an explicit formula for: the steady state probability of each state in the partially asymmetric exclusion process (PASEP); the polynomial enumerating permutations with a fixed set of {\it weak excedances} according to {\it crossings}; the polynomial enumerating permutations with a fixed set of {\it descent bottoms} according to occurrences of the {\it generalized pattern} $2-31$. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} The combinatorics of Hall-Littlewood functions is one of the most interesting aspects of the modern theory of symmetric functions~\cite{Mcd}. These are bases of symmetric functions, depending on a parameter $q$ which was originally regarded as the cardinality of a finite field. They are named after Littlewood's explicit realization of the Hall algebra in terms of symmetric functions, which gave meaning to arbitrary complex values of $q$~\cite{Li1}. Combinatorics entered the scene with the observation by Foulkes~\cite{Fo1} that the transition matrices between Schur functions and Hall-Littlewood $P$-functions seemed to be given by polynomials, which were nonnegative $q$-analogues of the well-known Kostka numbers, counting Young tableaux according to shape and weight. The conjecture of Foulkes was established by Lascoux and Sch\"ut\-zen\-berger~\cite{LS2}, who introduced the charge statistic on Young tableaux to explain the powers of $q$. Almost simultaneously, Lusztig~\cite{Lu2} obtained an interpretation in terms of the intersection homology of nilpotent orbits, and it is now known that these Kostka-Foulkes polynomials are particular Kazhdan-Lusztig polynomials associated with the affine Weyl groups of type $A$~\cite{Lus}. But this was not the end of the story. Some ten years later, Kirillov and Reshetikhin~\cite{KR} discovered an interpretation of Kostka-Foulkes polynomials in statistical physics, as generating functions of Bethe ansatz configurations for some generalizations of Heisenberg's $XXX$-magnet model, and obtained a closed expression in the form of a sum of products of $q$-binomial coefficients. All of these results have been generalized in many directions. Generalized Hall algebras (associated with quivers) have been introduced~\cite{Ring}. Ribbon tableaux~\cite{LLT2} and $k$-Schur functions~\cite{LLM} give rise to generalizations of the charge polynomials, sometimes interpretable as Kazhdan-Lusztig polynomials~\cite{LT}. Intersection homology has been computed for other varieties. The Kirillov-Reshetikhin formula is now included in a vast corpus of fermionic formulas, available for a large number of models~\cite{HKOTZ}. However, the relations -if any- between these theories are generally unknown. The present article is devoted to a different kind of generalization of the Hall-Littlewood theory. 
It is by now well-known that many aspects of the theory of symmetric functions can be lifted to Noncommutative symmetric functions, or quasi-symmetric functions, and that those points which do not have a good analogue at this level can sometimes be explained by lifting them to more complicated combinatorial Hopf algebras\footnote{ There is no general agreement on the precise definition of a combinatorial Hopf algebra, see \cite{ABS} and \cite{LR-cha} for attempts at making this concept precise, and \cite{NT, NT07, NTqthooks} for more examples.}. The paradigm here is the Littlewood-Richardson rule, which becomes trivial in the algebra of Free symmetric functions, all the difficulty having been diluted in the definition of the algebra~\cite{NCSF6}. A theory of noncommutative and quasi-symmetric Hall-Littlewood functions has been worked out by Hivert~\cite{Hiv-adv}. Since there is no Hall algebra to use as a starting point, Hivert's choice was to imitate Littlewood's definition, which can be reformulated in terms of an action of the affine Hecke algebra on polynomials. By replacing the usual action by a quasi-symmetrizing one, Hivert obtained interesting bases, behaving in much the same way as the original ones, and were easily deformable with a second parameter, so that analogues of Macdonald's functions could also be defined \cite{HLT}. See also the interesting work of Bergeron and Zabrocki \cite{BZ}. However, Hivert's analogues of the Kostka-Foulkes polynomials are just Kostka-Foulkes {\em monomials}, i.e., powers of $q$, given moreover by a simple explicit formula. So the combinatorial connections to tableaux, geometry and statistical physics do not show up in this theory. More recently, new possibilities arose with Tevlin's~\cite{Tev} discovery of a plausible analogue of monomial symmetric functions on the noncommutative side. Tevlin's constructions are incompatible with the Hopf structure (his monomial functions are not dual to products of complete functions in any reasonable sense), so it seemed unlikely that they could lead to interesting combinatorics. Nevertheless, Tevlin computed analogues of the Kostka matrices in his setting, and conjectured that they had nonnegative integer coefficients. This conjecture was proved in~\cite{HNTT}, and turned out to be more interesting than expected. The proof required the use of larger combinatorial Hopf algebras, and led to a vast generalization of the Gennocchi numbers. In this paper we give a new generalization of Hall-Littlewood functions, starting from Tevlin's bases. We define $q$-analogues $S^I(q)$ of the products of complete homogeneous functions by embedding ${\bf Sym}$ in an associative deformation of ${\bf WQSym}$ and projecting back to ${\bf Sym}$ by the map introduced in \cite{HNTT}. This defines a nonassociative $q$-product $\star_q$ on ${\bf Sym}$, and our Hall-Littlewood functions are equal to the products (see Section~\ref{sec-SL}) \begin{equation} S^I(q) = S^{i_1} \star_q (S^{i_2}\star_q(\dots (S^{i_{r-1}}\star_q S^{i_r}))). \end{equation} \noindent These functions can be regarded as interpolating between the $S^I$ (at $q=1$) and a new kind of noncommutative Schur functions (at $q=0$), have nonnegative coefficients, which can be expressed in closed form as products of $q$-binomial coefficients, and have a transparent combinatorial interpretation. As a consequence, the basis $R_I(q)$, defined by Moebius inversion on the composition lattice, is also nonnegative on the same basis. 
The really interesting phenomenon occurs with Tevlin's second basis (denoted here and in~\cite{HNTT} by $L_I$), an analogue of Gessel's fundamental basis $F_I$. One can observe that the last column of the matrix $M(S,L)$ (which expresses $S^{1^n}$ in terms of the $L_I$'s) gives the enumeration of \emph{permutation tableaux} \cite{SW} by shape. This observation is easy to prove, and one may wonder whether the expansion of $S^{1^n}(q)$ on $L_I$ gives rise to interesting $q$-analogues. This is clearly not the case (there are negative coefficients), but it turns out that, by introducing a simple $q$-analogue $L_I(q)$ of $L_I$, we again obtain nonnegative polynomials in the matrix $M(S(q),L(q))$. Finally, the matrix $M(R(q),L(q))$ gives the $q$-enumeration of permutation tableaux according to shape and rank, and another (yet unknown) statistic. Other (conjectural) combinatorial interpretations in terms of permutations or packed words are also proposed\footnote{Since many of our results and conjectures are stated in terms of permutations, one might be tempted to work only with ${\bf FQSym}$, bypassing ${\bf WQSym}$ entirely. However, as was already clear in~\cite{HNTT}, one cannot make sense of the definition of Tevlin's monomial basis $\Psi$ using ${\bf FQSym}$.}. Permutation tableaux occur in geometry: they are a distinguished subset of Postnikov's {\em $\hbox{\rotatedown{$\Gamma$}}$-diagrams}, which parameterize cells in the totally non-negative part of the Grassmannian \cite{Postnikov}, and the $q$-enumeration of permutation tableaux (up to a shift) counts these cells according to dimension. Additionally, permutation tableaux occur in physics -- Corteel and Williams~\cite{CW} found a close connection to a well-known model from statistical physics called the partially asymmetric exclusion process, which in turn is related to the Hamiltonian of the XXZ quantum spin chain \cite{ER}. Therefore we may say that our new Hall-Littlewood functions have some of the features which were absent from Hivert's theory. However, we do not have the algebraic side coming from affine Hecke algebras, and it is an open question whether both points of view can be unified. We conclude this paper with exact formulas for the $q$-enumeration of permutation tableaux of types A and B, according to shape. In the type A case, by the result of Corteel and Williams \cite{CW}, this gives an exact formula for the steady state probability of each state of the partially asymmetric exclusion process (with arbitrary $q$ and $\alpha=\beta=\gamma=\delta=1$). Applying results of \cite{SW}, this also gives an exact formula for the number of permutations with a fixed {\it weak excedance set} enumerated according to {\it crossings}, and for the number of permutations with a fixed set of {\it descent bottoms}, enumerated according to occurrences of the pattern $2-31$. {\footnotesize {\it Acknowledgments.-} This work has been partially supported by Agence Nationale de la Recherche, grant ANR-06-BLAN-0380. The authors would also like to thank the contributors of the MuPAD project, and especially those of the combinat package, for providing the development environment for this research (see~\cite{HT} for an introduction to MuPAD-Combinat). } \section{Notations and background} \subsection{Words, permutations, and compositions} We assume that the reader is familiar with the standard notations of the theory of noncommutative symmetric functions~\cite{NCSF1,NCSF6}.
We shall need an infinite totally ordered alphabet $A=\{a_1<a_2<\cdots<a_n<\cdots\}$, generally assumed to be the set of positive integers. We denote by ${\mathbb K}$ a field of characteristic $0$, and by ${\mathbb K}\<A\rangle$ the free associative algebra over $A$ when $A$ is finite, and the projective limit ${\rm proj\,lim}_B {\mathbb K}\<B\rangle$, where $B$ runs over finite subsets of $A$, when $A$ is infinite. The \emph{evaluation} ${\rm ev}(w)$ of a word $w$ is the sequence whose $i$-th term is the number of times the letter $a_i$ occurs in $w$. The \emph{standardized word} ${\rm Std}(w)$ of a word $w\in A^*$ is the permutation obtained by iteratively scanning $w$ from left to right, and labelling $1,2,\ldots$ the occurrences of its smallest letter, then numbering the occurrences of the next one, and so on. For example, ${\rm Std}(bbacab)=341625$. For a word $w$ on the alphabet $\{1,2,\ldots\}$, we denote by $w[k]$ the word obtained by replacing each letter $i$ by the integer $i+k$. If $u$ and $v$ are two words, with $u$ of length $k$, one defines the \emph{shifted concatenation} $u\bullet v = u\cdot (v[k])$ and the \emph{shifted shuffle} $ u\Cup v= u\, \shuffl \, (v[k])$, where $\, \shuffl \,$ is the usual shuffle product. Recall that a permutation $\sigma$ admits a {\it descent} at position $i$ if $\sigma(i)> \sigma(i+1)$. Symmetrically, $\sigma$ admits a {\it recoil} at $i$ if $\sigma^{-1}(i)>\sigma^{-1}(i+1)$. The descent and recoil sets of $\sigma$ are the positions of the descents and recoils, respectively. A \emph{composition} of an integer $n$ is a sequence $I=(i_1,\dots,i_r)$ of positive integers of sum $n$. In this case we write $I \models n$. The integer $r$ is called the \emph{length} of the composition. The \emph{descent set} of $I$ is $\operatorname{Des}(I) = \{ i_1,\ i_1+i_2, \ldots , i_1+\dots+i_{r-1}\}$. The \emph{reverse refinement order}, denoted by $\succeq$, on compositions is such that $I=(i_1,\ldots,i_k)\succeq J=(j_1,\ldots,j_l)$ iff $\operatorname{Des}(I)\supseteq\operatorname{Des}(J)$, or equivalently, $\{i_1,i_1+i_2,\ldots,i_1+\cdots+i_k\}$ contains $\{j_1,j_1+j_2,\ldots,j_1+\cdots+j_l\}$. In this case, we say that $I$ is finer than $J$. For example, $(2,1,2,3,1,2)\succeq (3,2,6)$. The \emph{descent composition} $\operatorname{DC}(\sigma)$ of a permutation $\sigma\in{\mathfrak S}_n$ is the composition $I$ of $n$ whose descent set is the descent set of $\sigma$. Similarly we can define recoil compositions. If $I=(i_1,\dots,i_q)$ and $J=(j_1,\dots,j_p)$ are two compositions, then $I \cdot J$ refers to their concatenation $(i_1,\dots,i_q,j_1,\dots,j_p)$, and $I {\,\triangleright} J$ is equal to $(i_1,\dots,i_q+j_1, j_2,\dots,j_p)$. The {\it major index} ${\rm maj}(K)$ of a composition $K = (k_1,\dots,k_r)$ is equal to the dot product of $(k_1,\dots,k_r)$ with $(r-1,r-2,\dots,2,1,0)$, i.e. $\sum_{i=1}^r (r-i) k_i$. \subsection{Word Quasi-symmetric functions: ${\bf WQSym}$} Let $w\in A^*$. The \emph{packed word} $u={\rm pack}(w)$ associated with $w$ is obtained by the following process. If $b_1<b_2<\ldots <b_r$ are the letters occuring in $w$, $u$ is the image of $w$ by the homomorphism $b_i\mapsto a_i$. A word $u$ is said to be \emph{packed} if ${\rm pack}(u)=u$. We denote by ${\rm PW}$ the set of packed words. With such a word, we associate the polynomial \begin{equation} {\bf M}_u :=\sum_{{\rm pack}(w)=u}w\,. 
\end{equation} {\footnotesize For example, restricting $A$ to the first five integers, \begin{equation} {\bf M}_{13132}= 13132 + 14142 + 14143 + 24243 + 15152 + 15153 + 25253 + 15154 + 25254 + 35354. \end{equation} } Under the abelianization $\chi:\ {\mathbb K}\langle A\rangle\rightarrow{\mathbb K}[X]$, the ${\bf M}_u$ are mapped to the monomial quasi-symmetric functions $M_I$ ($I=(|u|_a)_{a\in A}$ being the evaluation vector of $u$). These polynomials span a subalgebra of ${\mathbb K}\langle A\rangle$, called ${\bf WQSym}$ for Word Quasi-Symmetric functions~\cite{Hiv-adv}. These are the invariants of the noncommutative version of Hivert's quasi-symmetrizing action \cite{Hiv-adv}, which is defined by $\sigma\cdot w = w'$ where $w'$ is such that ${\rm Std}(w')={\rm Std}(w)$ and $\chi(w')=\sigma\cdot\chi(w)$. Thus, two words are in the same ${\mathfrak S}(A)$-orbit iff they have the same packed word. The graded dimension of ${\bf WQSym}$ is the sequence of ordered Bell numbers (\cite[A000670]{Slo}) $1, 1, 3, 13, 75, 541, 4683, 47293, 545835,\dots$. Hence, ${\bf WQSym}$ is much larger than ${\bf Sym}$, which can be embedded in it in various ways~\cite{NT4,NT07}. The product of the ${\bf M}_u$ of ${\bf WQSym}$ is given by \begin{equation} \label{prodG-wq} {\bf M}_{u'} {\bf M}_{u''} = \sum_{u \in u'{*_W} u''} {\bf M}_u\,, \end{equation} where the \emph{convolution} $u'{*_W} u''$ of two packed words is defined as \begin{equation} u'{*_W} u'' = \sum_{v,w ; u=v\cdot w\,\in\,{\rm PW}, {\rm pack}(v)=u', {\rm pack}(w)=u''} u\,. \end{equation} {\footnotesize For example, \begin{equation} \label{M1121} {\bf M}_{11} {\bf M}_{21} = {\bf M}_{1121} + {\bf M}_{1132} + {\bf M}_{2221} + {\bf M}_{2231} + {\bf M}_{3321}. \end{equation} \begin{equation} \begin{split} {\bf M}_{21} {\bf M}_{121} =& \ \ \ {\bf M}_{12121} + {\bf M}_{12131} + {\bf M}_{12232} + {\bf M}_{12343} + {\bf M}_{13121} + {\bf M}_{13232} + {\bf M}_{13242}\\ &+ {\bf M}_{14232} + {\bf M}_{23121} + {\bf M}_{23131} + {\bf M}_{23141} + {\bf M}_{24131} + {\bf M}_{34121}. \end{split} \end{equation} } \subsection{Matrix quasi-symmetric functions: ${\bf MQSym}$} This algebra is introduced in \cite{Hiv-adv, NCSF6}. We start from a totally ordered set of commutative variables $X=\{x_1<\cdots<x_n\}$ and consider the ideal ${\mathbb K}[X]^+$ of polynomials without constant term. We denote by ${\mathbb K}\{X\}=T({\mathbb K}[X]^+)$ its tensor algebra. We will also consider tensor products of elements of this algebra. To avoid confusion, we denote by ``$\operatorname{\cdot}$'' the product of the tensor algebra and call it the dot product. We reserve the notation $\otimes$ for the external tensor product. A natural basis of ${\mathbb K}\{X\}$ is formed by dot products of nonconstant monomials (called \emph{multiwords} in the sequel), which can be represented by nonnegative integer matrices $M=(m_{ij})$, where $m_{ij}$ is the exponent of the variable $x_i$ in the $j$th factor of the tensor product. Since constant monomials are not allowed, such matrices have no zero column. We say that they are \emph{horizontally packed}. A multiword $\operatorname{\bf m}$ can be encoded in the following way. Let $V$ be the \emph{support} of $\operatorname{\bf m}$, that is, the set of those variables $x_i$ such that the $i$th row of $M$ is non zero, and let $P$ be the matrix obtained from $M$ by removing the null rows. We set $\operatorname{\bf m}=V^P$. A matrix such as $P$, without zero rows or columns, is said to be \emph{packed}. 
{\footnotesize For example the multiword $\operatorname{\bf m} = a\operatorname{\cdot} ab^3e^5\operatorname{\cdot} a^2d$ is encoded by $\indexmat\SMat{1&1&2\\0&3&0\\0&0&0\\0&0&1\\0&5&0}$. Its support is the set $\{a,b,d,e\}$, and the associated packed matrix is $\SMat{1&1&2\\0&3&0\\0&0&1\\0&5&0}$. } Let ${\bf MQSym}(X)$ be the linear subspace of ${\mathbb K}\{X\}$ spanned by the elements \begin{equation} \mathbf{MS}_M = \sum_{V\in{\mathcal P}_k(X)}V^M \end{equation} where ${\mathcal P}_k(X)$ is the set of $k$-element subsets of $X$, and $M$ runs over packed matrices of height $h(m)<n$. {\footnotesize For example, on the alphabet $\{a<b<c<d\}$ \renewcommand{\indexmat} {\smallmatrice{\vrule height \Hackl width 0pt a\\\vrule height \Hackl width 0pt b\\\vrule height \Hackl width 0pt c\\\vrule height \Hackl width 0pt d\\}} $$ \mathbf{MS}_\SMat{1&1&2\\0&3&0\\0&0&1}= \indexmat\SMat{1&1&2\\0&3&0\\0&0&1\\0&0&0}+ \indexmat\SMat{1&1&2\\0&3&0\\0&0&0\\0&0&1}+ \indexmat\SMat{1&1&2\\0&0&0\\0&3&0\\0&0&1}+ \indexmat\SMat{0&0&0\\1&1&2\\0&3&0\\0&0&1} $$ } One can show that ${\bf MQSym}$ is a subalgebra of ${\mathbb K}\{X\}$. Actually, $$ \mathbf{MS}_P\mathbf{MS}_Q =\sum_{R\in\, \underline{\shuffl} \, (P,Q)} \mathbf{MS}_R $$ where the {\em augmented shuffle} of $P$ and $Q$, $\, \underline{\shuffl} \, (P,Q)$ is defined as follows: let $r$ be an integer between $\max(p,q)$ and $p+q$, where $p=h(P)$ and $q=h(Q)$. Insert null rows in the matrices $P$ and $Q$ so as to form matrices $\tilde P$ and $\tilde Q$ of height $r$. Let $R$ be the matrix $(\tilde P,\tilde Q)$. The set $\, \underline{\shuffl} \, (P,Q)$ is formed by all the matrices without null rows obtained in this way. {\footnotesize For example : $$ \begin{array}{l} \mathbf{MS}{\SMat{2&1\\1&0}}\mathbf{MS}_{\SMat{3&1}} = \\[3mm] \qquad\ \mathbf{MS}{\SMat{2&1&0&0\\1&0&0&0\\0&0&3&1}}+\mathbf{MS}{\SMat{2&1&0&0\\1&0&3&1}}+ \mathbf{MS}{\SMat{2&1&0&0\\0&0&3&1\\1&0&0&0}}+\mathbf{MS}{\SMat{2&1&3&1\\1&0&0&0}}+ \mathbf{MS}{\SMat{0&0&3&1\\2&1&0&0\\1&0&0&0}} \end{array} $$ } \subsection{Free quasi-symmetric functions: ${\bf FQSym}$} The Hopf algebra ${\bf FQSym}$ is the subalgebra of ${\bf WQSym}$ spanned by the polynomials~\cite{NCSF6} \begin{equation} {\bf G}_\sigma := \sum_{{\rm Std}(u)=\sigma} {\bf M}_u = \sum_{{\rm Std}(w)=\sigma} w. \end{equation} The multiplication rule is, for $\alpha\in{\mathfrak S}_k$ and $\beta\in{\mathfrak S}_l$, \begin{equation}\label{multG} {\bf G}_\alpha {\bf G}_\beta = \sum_{\genfrac{}{}{0pt}{}{\gamma\in{\mathfrak S}_{k+l};\,\gamma=u\cdot v} {{\rm Std}(u)=\alpha,{\rm Std}(v)=\beta}}{\bf G}_\gamma\,. \end{equation} As a Hopf algebra, ${\bf FQSym}$ is self-dual. The scalar product materializing this duality is the one for which $({\bf G}_\sigma\,,\,{\bf G}_\tau)=\delta_{\sigma,\tau^{-1}}$ (Kronecker symbol). Hence, ${\bf F}_\sigma:={\bf G}_{\sigma^{-1}}$ is the dual basis of ${\bf G}$. Their product is given by \begin{equation} \label{multF} {\bf F}_\alpha {\bf F}_\beta = \sum_{\gamma\in\alpha\Cup\beta} {\bf F}_\gamma. \end{equation} \subsection{Embeddings} \subsubsection{${\bf Sym}$ into ${\bf MQSym}$} Recall that the algebra of {\it noncommutative symmetric functions} is the free associative algebra ${\bf Sym}={\mathbb C} \langle S_1,S_2,\dots \rangle$ generated by an infinite sequence of non-commutative indeterminates $S_k$, called \emph{complete} symmetric functions. For a composition $I=(i_1,\dots,i_r)$, one sets $S^I = S_{i_1} \dots S_{i_r}$. The family $(S^I)$ is a linear basis of ${\bf Sym}$. 
A useful realization, denoted by ${\bf Sym}(A)$, can be obtained by taking an infinite alphabet $A=\{a_1,a_2,\dots \}$ and defining its complete homogeneous symmetric functions by the generating function \begin{equation*} \sum_{n \geq 0} t^n S_n(A) = (1-ta_1)^{-1}(1-ta_2)^{-1} \dots . \end{equation*} Given a packed matrix $P$, the vector of its \emph{column sums} will be denoted by $\text{Col}(P)$. The algebra morphism defined on generators by \begin {equation} \beta:\ S_n\mapsto \sum_{{\rm Col}(P)=(n)}\mathbf{MS}_P \end{equation} is an embedding of Hopf algebras \cite{NCSF6}. By definition of ${\bf MQSym}$, for an arbitrary composition, we have \begin{equation} \beta(S^I) = \sum_{{\rm Col}(P)=I}\mathbf{MS}_P . \end{equation} \subsubsection{${\bf Sym}$ into ${\bf WQSym}$} The algebra morphism defined on generators by \begin {equation} \alpha:\ S_n\mapsto \sum_{{\rm Std}(u)=12\cdots n}{\bf M}_u \end{equation} is also an embedding of Hopf algebras. For an arbitrary composition, \begin{equation} \alpha(S^I) = \sum_{\operatorname{DC}(u)\raffi I}{\bf M}_u\,. \end{equation} Indeed, when ${\bf Sym}$ is realized as ${\bf Sym}(A)$, the latter sum is equal to $S^I(A)$. \subsection{Epimorphisms} We shall also need to project back from the algebras ${\bf MQSym}$ and ${\bf WQSym}$ to ${\bf Sym}$. The crucial projection is the one associated with the (non-Hopf) quotient of ${\bf WQSym}$ introduced in \cite{HNTT}. \subsubsection{${\bf MQSym}$ to ${\bf WQSym}$} To a packed matrix $M$, one associates a packed word $w(M)$ as follows. Read the entries of $M$ columnwise, from top to bottom and left to right. The word $w(M)$ is obtained by repeating $m_{ij}$ times each row index $i$. Let ${\mathcal J}$ be the ideal of ${\bf MQSym}$ generated by the differences \begin{equation} \{\mathbf{MS}_P-\mathbf{MS}_Q |w(P)=w(Q)\}. \end{equation} Then the quotient ${\bf MQSym}/{\mathcal J}$ is isomorphic as an algebra to ${\bf WQSym}$, via the identification $\overline{\mathbf{MS}}_M={\bf M}_{w(M)}$. More precisely, $\eta:\ \overline{\mathbf{MS}}_M\mapsto {\bf M}_{w(M)}$ is a morphism of algebras. \subsubsection{${\bf WQSym}$ to ${\bf Sym}$} Let $w$ be a packed word. The \emph{Word composition} (W-comp\-os\-ition) of $w$ is the composition whose descent set is given by the positions of the last occurrences of each letter in $w$. {\footnotesize For example, \begin{equation} \operatorname{WC}(1543421323) = (2,3,2,2,1). \end{equation} Indeed, the descent set is $\{2,5,7,9,10\}$ since the last $5$ is in position $2$, the last $4$ is in position $5$, the last $1$ is in position $7$, the last $2$ is in position $9$, and the last $3$ is in position $10$. The following tables group the packed words in ${\rm PW}_2$ and ${\rm PW}_3$ according to their W-composition. \begin{equation} \begin{array}{|c|c|} \hline 2 & 11 \\ \hline \hline 11 & 12 \\ & 21 \\ \hline \end{array} \hskip2cm \begin{array}{|c|c|c|c|} \hline 3 & 21 & 12 & 111\\ \hline \hline 111 & 112 & 122 & 123\\ & 121 & 211 & 132\\ & 212 & & 213\\ & 221 & & 231\\ & & & 312\\ & & & 321\\ \hline \end{array} \end{equation} } Let $\sim$ be the equivalence relation on packed words defined by $u\sim v$ iff $\operatorname{WC}(u)=\operatorname{WC}(v)$. Let ${\mathcal J'}$ be the subspace of ${\bf WQSym}$ spanned by the differences \begin{equation} \{ {\bf M}_u - {\bf M}_v\, |\, u\sim v\}. 
\end{equation} Then, it has been shown \cite{HNTT} that ${\mathcal J'}$ is a two-sided ideal of ${\bf WQSym}$, and that the quotient ${\bf T'}$ defined by ${\bf T'}={\bf WQSym}/{\mathcal J'}$ is isomorphic to ${\bf Sym}$ as an algebra. More precisely, recall that $\Psi_n$ is a {\it noncommutative power sum of the first kind}~\cite{NCSF1}. Tevlin defined the {\it noncommutative monomial symmetric functions} $\Psi_I$ \cite{Tev} as quasideterminants in the $\Psi_n$'s. We do not need the precise definition of $\Psi_I$ here, only the following result. \begin{proposition}\cite{HNTT}\label{HNTTMorphism} $\zeta:\ \overline{{\bf M}}_u\mapsto \Psi_{\operatorname{WC}(u)}$ is a morphism of algebras. \end{proposition} \section{Quantizations and noncommutative Hall-Littlewood functions} In this section, we introduce a new $q$-analogue $S^I(q)$ of the basis $S^I$ of ${\bf Sym}$, giving two different but equivalent definitions. When we examine the transition matrices between this new basis and other bases, we will see a connection to permutation tableaux and hence to the asymmetric exclusion process. The new basis elements $S^I(q)$ play the role of the classical Hall-Littlewood $Q'_\mu$~\cite[Ex. 7.(a) p. 234]{Mcd}, and of Hivert's $H_I(q)$. \subsection{The special inversion statistic} Let $u=u_1\cdots u_n$ be a packed word. We say that an inversion $u_i=b>u_j=a$ (where $i<j$ and $a<b$) is {\em special} if $u_j$ is the {\em rightmost} occurence of $a$ in $u$. Let ${\rm sinv} (u)$ denote the number of special inversions in $u$. Note that if $u$ is a permutation, this coincides with its ordinary inversion number. \subsection{Quantizing ${\bf WQSym}$} Let ${\bf M}'_u=q^{{\rm sinv}(u)}{\bf M}_u$ and define a linear map $\phi_q$ by $\phi_q({\bf M}_u)={\bf M}'_u$. We define a new associative product $\star_q$ on ${\bf WQSym}$ by requiring that \begin{equation} {\bf M}'_u\star_q {\bf M}'_v=\phi_q({\bf M}_u{\bf M}_v)\,. \end{equation} {\footnotesize For example, by~(\ref{M1121}), one has \begin{equation} \begin{split} {\bf M}'_{11} \star_q {\bf M}'_{21} &= {\bf M}'_{1121} + {\bf M}'_{1132} + {\bf M}'_{2221} + {\bf M}'_{2231} + {\bf M}'_{3321}\\ &= q{\bf M}_{1121} + q{\bf M}_{1132} + q^3{\bf M}_{2221} + q^3{\bf M}_{2231} + q^5{\bf M}_{3321}. \end{split} \end{equation} } This algebra structure on the vector space ${\bf WQSym}$ will be denoted by ${\bf WQSym}_q$. \subsection{Quantizing ${\bf MQSym}$} Similarly, the $q$-product $\star_q$ can be defined on ${\bf MQSym}$, by requiring that the $\mathbf{MS}'_M=q^{{\rm sinv}(w(M))}\mathbf{MS}_M$ multiply as the $\mathbf{MS}_M$. \subsection{Two equivalent definitions of $S^I(q)$} Embedding ${\bf Sym}$ into ${\bf MQSym}_q$ and projecting back to ${\bf Sym}$, we define $q$-analogues of the products $S^I$ by \begin{equation} S^I(q)=\zeta\circ\eta ( \beta(S_{i_1})\star_q\cdots\star_q \beta(S_{i_r}))\,. \end{equation} \noindent Equivalently, since under the above embeddings, the image of ${\bf Sym}$ in ${\bf MQSym}$ is contained in the image of ${\bf WQSym}$, one can embed ${\bf Sym}$ into ${\bf WQSym}_q$ and project back to ${\bf Sym}$, which yields \begin{equation} S^I(q)=\zeta (\alpha(S_{i_1})\star_q\cdots\star_q \alpha(S_{i_r}))\,. \end{equation} \subsection{The transition matrix $M(S(q),\Psi)$} For any two bases $F$, $G$ of ${\bf Sym}$, we denote by $M_n(F,G)$ the matrix indexed by compositions of $n$, whose entry in row $I$ and column $J$ is the coefficient of $G_I$ in the $G$-expansion of $F_J$. 
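{\footnotesize As a small worked instance of the special inversion statistic (using only the definition above), take $u=3321$: the rightmost occurrences of $1$, $2$, $3$ are in positions $4$, $3$, $2$, and the special inversions are the pairs of positions $(1,3)$, $(2,3)$, $(1,4)$, $(2,4)$, $(3,4)$, so that ${\rm sinv}(3321)=5$; this accounts for the coefficient $q^5$ of ${\bf M}_{3321}$ in the example above. Similarly, ${\rm sinv}(1121)=1$, coming from the single special inversion in positions $(3,4)$. }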
We will give two combinatorial formulas (Propositions \ref{SP1} and \ref{SP2}) and one recursive formula (Theorem \ref{recursive}) for the elements of the transition matrix $M(S(q),\Psi)$, where the $\Psi_I$'s are Tevlin's noncommutative monomial symmetric functions. \subsubsection{First examples} Let $[n]$ denote the $q$-analogue $1+q+\dots+q^{n-1}$ of $n$. The first transition matrices $SP_n=M_n(S(q),\Psi)$ are \begin{equation*} SP_3 = M_3(S(q),\Psi) = \left( \begin{matrix} [1] & [1] & [1] & [1] \\ [1] & [3] & [2] & [2][2] \\ [1] & [1] & [2] & [2] \\ [1] & [3] & [3] & [2][3] \end{matrix} \right) \end{equation*} \begin{equation*} SP_4 = \left( \begin{matrix} [1] & [1] & [1] & [1] & [1] & [1] & [1] & [1] \\ [1] & [4] & [3] & [2][3] & [2] & [2][3] & [2][2] & [2][2][2] \\ [1] & [1] & [3] & [3] & [2] & [2] & [2][2] & [2][2] \\ [1] & [4] & [4][3]/[2] & [3][4] & [3] & [3][3] & [3][3] & [2][3][3] \\ [1] & [1] & [1] & [1] & [2] & [2] & [2] & [2] \\ [1] & [4] & [3] & [2][3] & [3] & [3][3] & [2][3] & [2][2][3] \\ [1] & [1] & [3] & [3] & [3] & [3] & [2][3] & [2][3] \\ [1] & [4] & [4][3]/[2] & [3][4] & [4] & [3][4] & [3][4] & [2][3][4] \end{matrix} \right) \end{equation*} The coefficient of $\Psi_I$ in $S^J(q)$ will be denoted by $C_I^J(q)$. \subsubsection{Combinatorial interpretations} Recall that by Proposition \ref{HNTTMorphism}, $\zeta\circ\eta$ is a morphism of algebras sending $\mathbf{MS}_M$ to $\Psi_{\operatorname{WC}(w(M))}$. Hence, our first definition of $S^J(q)$ gives the following: \begin{proposition}\label{SP1} Let $I=(i_1,\dots,i_k)$ and $J=(j_1,\dots,j_l)$ be two compositions. Let $M(I,J)$ be the set of integer matrices $M=(m_{p,q})_{1\leq p\leq l;1\leq q\leq k}$ without null rows such that \begin{equation} \operatorname{WC}(w(M))=I \qquad\text{and}\qquad {\rm Col}(M)=J. \end{equation} Then \begin{equation} C_I^J(q) = \sum_{M\in M(I,J)} q^{{\rm sinv}(w(M))}. \end{equation} \end{proposition} {\footnotesize For example, the six matrices corresponding to the coefficient $[4][3]/[2]$ of $M_4$ in row $(2,1,1)$ and column $(2,2)$ are \begin{equation} \label{mats211} \SMat{2&.\\.&1\\.&1} \quad \SMat{1&1\\1&.\\.&1} \quad \SMat{1&1\\.&1\\1&.} \quad \SMat{.&1\\2&.\\.&1} \quad \SMat{.&1\\1&1\\1&.} \quad \SMat{.&1\\.&1\\2&.} \end{equation} The corresponding statistics are \begin{equation} \{ 0,\ 1,\ 2,\ 2,\ 3,\ 4 \}. \end{equation} } The second definition of $S^J(q)$ yields a different combinatorial description: \begin{proposition} \label{SP2} \label{cijW} Let $I$ and $J$ be two compositions and let $W(I,J)$ be the set of packed words $w$ such that \begin{equation} \operatorname{WC}(w)=I \qquad\text{and}\qquad \operatorname{DC}(w)\raffi J. \end{equation} Then \begin{equation} C_I^J(q) = \sum_{w\in W(I,J)} q^{{\rm sinv}(w)}. \end{equation} \end{proposition} {\footnotesize For example, the six packed words corresponding to the coefficient $[4][3]/[2]$ of $M_4$ in row $I=(2,1,1)$ and column $J=(2,2)$ are \begin{equation} 1123,\ 1213,\ 1312,\ 2213,\ 2312,\ 3312. \end{equation} These words are the column readings of the six matrices from (\ref{mats211}). The first word has descent composition $(4)$ and the others have descent composition $(2,2)$. } \subsubsection{The $q$-product on ${\bf Sym}$} To explain the factorization of the coefficients of the transition matrix $M(S(q),\Psi)$, we need a recursive formula for $S^J(q)$. This will be given in Theorem \ref{recursive}. 
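{\footnotesize Before turning to the recursion, here is a quick consistency check of Proposition~\ref{SP2} against $SP_3$: for $I=(2,1)$ and $J=(1,2)$, the packed words $w$ with $\operatorname{WC}(w)=(2,1)$ whose descent composition is coarser than or equal to $(1,2)$ are $112$ and $212$, with ${\rm sinv}$ equal to $0$ and $1$ respectively, so that $C_{(2,1)}^{(1,2)}(q)=1+q=[2]$, in agreement with the entry of $SP_3$ in row $(2,1)$ and column $(1,2)$. }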
In ${\bf WQSym}$, let
\begin{equation}
\tilde S_n=\alpha(S_n)=\sum_{u;u\uparrow n}{\bf M}_u
\end{equation}
where $u\uparrow n$ means that $u$ is a nondecreasing packed word of length $n$, and define
\begin{equation}
\tilde S^J=\tilde S_{j_1}\star_q\dots\star_q \tilde S_{j_r}
\end{equation}
so that $S^J(q)=\zeta(\tilde S^J)$. Let $J=(j_1,\dots,j_r)$ and set $J'=(j_2,\dots,j_r)$. Since $\star_q$ is associative in ${\bf WQSym}_q$, we have
\begin{equation}
S^{j_1,J'}(q) = \zeta( \tilde S^{j_1,J'}) = \sum_{\genfrac{}{}{0pt}{}{u,v;u\uparrow j_1}{\operatorname{DC}(v)\raffi J'}} q^{{\rm sinv}(v)} \zeta( {\bf M}_u \star_q {\bf M}_v).
\end{equation}
This expression can be simplified by means of the following Lemma.
\begin{lemma} \label{lem-uvuvp}
Let $u$ be a nondecreasing packed word. Then
\begin{equation}
\zeta({\bf M}_u\star_q{\bf M}_v)=\zeta({\bf M}_u\star_q{\bf M}_{v'})
\end{equation}
for all $v'$ such that $\operatorname{WC}(v')=\operatorname{WC}(v)$.
\end{lemma}
\noindent \it Proof -- \rm Since $u$ is nondecreasing, each packed word $z=x\cdot y$ appearing in the expansion of ${\bf M}_u\star_q{\bf M}_v$ is completely determined by the letters used in $x$ and the letters used in $y$. Looking at the packed words $z$ and $z'$ occurring in ${\bf M}_u\star_q{\bf M}_v$ and in ${\bf M}_u\star_q{\bf M}_{v'}$ with given letters used for their prefixes and suffixes, we have ${\rm sinv}(z')={\rm sinv}(z)+{\rm sinv}(v')-{\rm sinv}(v)$, whence the result. \qed
{\footnotesize For example,
\begin{equation}
{\bf M}_{11}\star_q {\bf M}_{12} = {\bf M}_{1112} + {\bf M}_{1123} + q^2{\bf M}_{2212} + q^2{\bf M}_{2213} + q^4{\bf M}_{3312}.
\end{equation}
\begin{equation}
{\bf M}_{11}\star_q q{\bf M}_{21} = q{\bf M}_{1121} + q{\bf M}_{1132} + q^3{\bf M}_{2221} + q^3{\bf M}_{2231} + q^5{\bf M}_{3321}.
\end{equation}
}
Let now $\sigma:{\bf Sym}\to{\bf WQSym}$ be the section of the projection $\zeta$ defined by
\begin{equation}
\sigma(\Psi_I) = {\bf M}_{1^{i_1}2^{i_2}\dots r^{i_r}}.
\end{equation}
We can define a (non-associative!) $q$-product on ${\bf Sym}$ by
\begin{equation}\label{non-assoc}
f \star_q g = \zeta( \sigma(f) \star_q \sigma(g)).
\end{equation}
Then Lemma~\ref{lem-uvuvp} implies that
\begin{equation}
S^I(q) = S^{i_1} \star_q (S^{i_2}\star_q(\dots (S^{i_{r-1}}\star_q S^{i_r}))).
\end{equation}
\subsubsection{Closed form for the coefficients}
From Lemma~\ref{lem-uvuvp}, we now have
\begin{equation}
S^{j_1,J'}(q) = \zeta( \tilde S^{j_1,J'}) = \sum_{\genfrac{}{}{0pt}{}{u,v;u\uparrow j_1}{v\uparrow j_2+\dots+j_r}} C^{J'}_{\operatorname{WC}(v)}(q) \zeta( {\bf M}_u \star_q {\bf M}_v).
\end{equation}
Note that $\zeta({\bf M}_u\star_q{\bf M}_v)$ and $\zeta({\bf M}_{u'}\star_q{\bf M}_{v'})$ are linear combinations of disjoint sets of $\Psi_K$ as soon as the nondecreasing words $v$ and $v'$ are different. So the computation of the coefficient $C_I^J$ boils down to the evaluation of
\begin{equation}
\sum_{u;u\uparrow j_1} \zeta( {\bf M}_u \star_q {\bf M}_v) = \zeta(\tilde S^{j_1}\star_q{\bf M}_v),
\end{equation}
where $v$ is a nondecreasing word. Let us first characterize the terms of the product yielding a given $\Psi_I$.
\begin{lemma} \label{QR}
Let $u$ be a nondecreasing word of length $k$ over $[1,r]$. Given a composition $I=(i_1,\dots,i_r)$ of length $r$, there exists at most one nondecreasing word $v$ over $[1,r]$ such that $uv$ is packed and $\operatorname{WC}(uv)=I$. Such a $v$ exists precisely when $u=u_1\cdots u_k$ satisfies $u_i<u_{i+1}$ for $i\in\operatorname{Des}(I)$.
In this case, let $y=1^{i_1}2^{i_2}\dots r^{i_r}$. Then ${\rm sinv}(uv)$ is equal to \begin{equation} \sum_{1\leq i\leq k} (u_i-y_i). \end{equation} This sum is also equal to \begin{equation} \sum_{1\leq i\leq k} u_i - (k+{\rm maj}(\overline K)), \end{equation} where $K$ is the composition of $k$ such that $\operatorname{Des}(K)=\operatorname{Des}(I)\cap[1,k-1]$. \end{lemma} \noindent \it Proof -- \rm The construction of $v$ was already given in the proof of Theorem 6.1 of~\cite{HNTT}. It comes essentially from the facts that the letters which should be used in $v$ are determined by the letters used in $u$, and that a word is uniquely determined by its packed word and its alphabet. Now, for each letter $x$ of $u$, its contribution to ${\rm sinv}(uv)$ is given by the number of different letters strictly smaller than $x$ appearing in $v$. This is equal to $u_i-y_i$. The sum of the $y_i$ is $k+{\rm maj}(\overline K)$. \qed {\footnotesize For example, given $I=1221$, there are 10 nondecreasing words $u$ of $[1,4]$ of length $3$ satisfying the conditions of the lemma. The following table gives the corresponding~$v$ and the ${\rm sinv}$ statistics of the products $uv$. \begin{equation} \label{tab1221} \begin{array}{|c|c|c|} \hline u & v & {\rm sinv} \\ \hline \hline 122 & 334 & 0 \\ 123 & 224 & 1 \\ 124 & 223 & 2 \\ 133 & 224 & 2 \\ 134 & 223 & 3 \\ 144 & 223 & 4 \\ 233 & 114 & 3 \\ 234 & 113 & 4 \\ 244 & 113 & 5 \\ 344 & 112 & 6 \\ \hline \end{array} \end{equation} } We are now in a position to compute \begin{equation} \zeta(\tilde S^{j_1}\star_q{\bf M}_v) \end{equation} when $v$ is a nondecreasing word. \begin{lemma} \label{qprod-init} Let $I$ be a composition of $k+n$ and let $I'$ be the composition of $n$ such that $\operatorname{Des}(I')=\{a_1,\dots,a_s\}$ satisfies $\{k+a_1,\dots,k+a_s\} = \operatorname{Des}(I)\cap[k+1,k+n]$. Let $v$ be the nondecreasing word of evaluation $I'$. The coefficient of $\Psi_I$ in $\zeta(\tilde S_k\star_q{\bf M}_v)$ is the $q$-binomial coefficient \begin{equation} \qbin{k+r-s}{r-s} \end{equation} where $r=l(I)$, $K$ is the composition of $k$ such that $\operatorname{Des}(K)=\operatorname{Des}(I)\cap[1,k-1]$, and $s=l(K)$. \end{lemma} \noindent \it Proof -- \rm To start with, write \begin{equation} \tilde S_k\star_q{\bf M}_v=\sum_w q^{{\rm sinv}(w)}{\bf M}_w, \end{equation} where $w$ runs over packed words of the form $w=u'v'$, with $u'$ nondecreasing and ${\rm pack}(v')=v$. From Lemma \ref{QR}, we see that in order to have $\operatorname{WC}(u'v')=I$, $u'=x_1\cdots x_k$ must be a word over the interval $[1,r]$ with equalities $x_i=x_j$ allowed precisely when cells $i$ and $j$ are in the same row of the diagram of $K$. The commutative image of the formal sum of such words, which are the nondecreasing reorderings of the quasi-ribbons of shape $K$ \cite{NCSF4}, is the quasi-symmetric {\it quasi-ribbon } polynomial $F_K(t_1,\ldots,t_r)$, introduced in \cite{Gessel}. Hence, the coefficient of $\Psi_I$ is \begin{equation} q^{-{\rm maj}(\bar K)}\qbin{k+r-s}{r-s} \end{equation} given the generating function~\cite{GR} \begin{equation} \sum_{m\ge 0}t^mF_K(1,q,\ldots,q^{m-1}) =\frac{t^{l(K)} q^{{\rm maj}(\bar K)}}{(t;q)_{k+1}}\,. \end{equation} \qed {\footnotesize The example presented in~(\ref{tab1221}) corresponds to the case $I=(1,2,2,1)$ and $i=3$, so that $K=(1,2)$. We then find $\qbin{3+4-2}{4-2}=\qbin{5}{2}$, which indeed corresponds to the statistic in the last column of~(\ref{tab1221}). 
} Summarizing the above discussion, we can now state the main result of this section: \begin{theorem} \label{recursive} Let $I=(i_1,\dots,i_k)$ and $J=(j_1,\dots,j_l)$ be compositions of $n$. Then the coefficient $C_I^J(q)$ of $\Psi_I$ in $S^J(q)$ is given by the following rule:\\ (i) if $i_1<j_1$, then $C_I^J(q)=C_{(i_1+i_2,i_3,\dots,i_k)}^J(q)$,\\ (ii) otherwise, \begin{equation} C_I^J(q) = \qbin{k+j_1-1}{j_1} C_{I'}^{(j_2,\dots,j_l)}(q) \end{equation} where the diagram of $I'$ is obtained by removing the first $j_1$ cells of the diagram of $I$. \end{theorem} \qed \subsection{The transition matrix $M(R(q),\Psi)$} \subsubsection{$q$-deformed ribbons} We now define a $q$-ribbon basis $R_I(q)$ in terms of the $S^J(q)$'s by analogy to the relationship between the ordinary $R_I$'s and $S^J$'s: \begin{equation} R_I(q) := \sum_{J\raffi I} (-1)^{l(J)-l(I)} S^J(q). \end{equation} The coefficient of $\Psi_I$ in the expansion of $R^J(q)$ will be denoted by $D_I^J(q)$. \subsubsection{First examples} We get the following transition matrices between $R(q)$ and $\Psi$ for $n=3,4$: \begin{equation*} RP_3 = M_3(R,\Psi) = \left( \begin{matrix} 1 & . & . & . \\ 1 & q+q^2 & q & . \\ 1 & . & q & . \\ 1 & q+q^2 & q+q^2 & q^3 \end{matrix} \right) \end{equation*} \begin{equation*} RP_4 = \left( \begin{matrix} 1 & . & . & . & . & . & . & . \\ 1 & q[3] & q\!+\!q^2 & . & q & q^2 & . & . \\ 1 & . & q\!+\!q^2 & . & q & . & . & . \\ 1 & q[3] & q\!+\!2q^2\!+\!q^3\!+\!q^4 & q^3[3] & q\!+\!q^2 & q^2\!+\!q^3\!+\!q^4 & q^3 & . \\ 1 & . & . & . & q & . & . & . \\ 1 & q[3] & q\!+\!q^2 & . & q\!+\!q^2 & q^2\!+\!q^3\!+\!q^4 & q^3 & . \\ 1 & . & q\!+\!q^2 & . & q\!+\!q^2 & . & q^3 & . \\ 1 & q[3] & q\!+\!2q^2\!+\!q^3\!+\!q^4 & q^3[3] & q[3] & q^2\!+\!q^3\!+\!2q^4\!+\!q^5 & q^3[3] & q^6 \end{matrix} \right) \end{equation*} \subsubsection{Combinatorial interpretations} By definition of the transition matrix from $S(q)$ to $R(q)$, the matrices $M(R(q),\Psi)$ can be described as follows: \begin{proposition} \label{dijW} Let $I$ and $J$ be compositions of $n$, and let $W'(I,J)$ be the set of packed words $w$ such that \begin{equation} \operatorname{WC}(w)=I \qquad\text{and}\qquad \operatorname{DC}(w)=J. \end{equation} Then \begin{equation} D_I^J(q) = \sum_{w\in W'(I,J)} q^{{\rm sinv}(w)}. \end{equation} \end{proposition} \noindent \it Proof -- \rm This follows directly from the combinatorial interpretation of $C_I^J$ in terms of packed words (see Proposition~\ref{cijW}). \qed In terms of ${\bf MQSym}$, this can be rewritten as follows: \begin{corollary} Let $I$ and $J$ be compositions of $n$. Then $D_I^J(q)$ is given by the statistic ${\rm sinv}(w(M))$ applied to the elements $M$ of the subset of $M(I,J)$ where in each pair of consecutive columns, the bottommost nonzero entry of the left one is strictly below the top-most nonzero entry of the right one. \end{corollary} \subsection{The transition matrix $M(L(q),\Psi)$} \subsubsection{A new $q$-analogue of the $L$ basis of ${\bf Sym}$} Let ${\rm st}(I,J)$ be the statistic on pairs of compositions of the same weight defined by \begin{equation} {\rm st}(I,J):= \left\{ \begin{array}{ll} \#\{(i,j)\in \operatorname{Des}(I)\times\operatorname{Des}(J) | i\geq j\} & \text{if } I\raff J,\\ -\infty & \text{otherwise} \end{array} \right. \end{equation} We define a new basis $L(q)$ by \begin{equation} L_J(q) := \sum_{I\models |J|} q^{st(I,J)} \Psi_I = \sum_{I\raff J} q^{st(I,J)} \Psi_I. \end{equation} For $q=1$, this reduces to Tevlin's basis $L_I$ (in the notation of \cite{HNTT}). 
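{\footnotesize For instance, for $I=(1,1,1)$ and $J=(1,2)$ one has $\operatorname{Des}(I)=\{1,2\}$ and $\operatorname{Des}(J)=\{1\}$, so that ${\rm st}(I,J)=\#\{(1,1),(2,1)\}=2$, and $L_{(1,2)}(q)$ contains the term $q^2\Psi_{111}$; this can be read off the matrices displayed below. }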
Since $M(L_J(q),\Psi)$ is unitriangular, $L_I(q)$ is a basis of ${\bf Sym}$. \subsubsection{First examples} Here are the first transition matrices from $L(q)$ to $\Psi$: \begin{equation} MLP_3 = M_3(L(q),\Psi) = \left( \begin{matrix} 1 & . & . & . \\ 1 & q & . & . \\ 1 & . & q & . \\ 1 & q & q^2 & q^3 \end{matrix} \right) \end{equation} \begin{equation} MLP_4 = \left( \begin{matrix} 1 & . & . & . & . & . & . & . \\ 1 & q & . & . & . & . & . & . \\ 1 & . & q & . & . & . & . & . \\ 1 & q & q^2 & q^3 & . & . & . & . \\ 1 & . & . & . & q & . & . & . \\ 1 & q & . & . & q^2 & q^3 & . & . \\ 1 & . & q & . & q^2 & . & q^3 & . \\ 1 & q & q^2 & q^3 & q^3 & q^4 & q^5 & q^6 \end{matrix} \right) \end{equation} Note that up to some minor changes (conjugation w.r.t. mirror image of compositions), these are the matrices expressing Hivert's Hall-Littlewood $\tilde{H}_J$ on the basis $R_I$~\cite{Hiv-adv}. This allows us to derive the expression of their inverse, that is, transition matrices from $\Psi$ to $L(q)$ (see~\cite{Hiv-adv}, Theorem 6.6): \begin{equation} \label{PsiL} \Psi_J = \sum_{I\raff J} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} L_I(q), \end{equation} where ${\rm st'}(I,J)$ is \begin{equation} {\rm st'}(I,J):= \left\{ \begin{array}{ll} \#\{(i,j)\in \operatorname{Des}(I)\times\operatorname{Des}(J) | i\leq j\} & \text{if } I\raff J,\\ -\infty & \text{otherwise} \end{array} \right. \end{equation} \subsection{The transition matrix $M(S(q),L(q))$} \label{sec-SL} The coefficient of $L_I(q)$ in $S^J(q)$ will be denoted by $E_I^J(q)$. In this section, we will see a connection to permutation tableaux and hence to the asymmetric exclusion process. \subsubsection{First examples} \label{FirstExamples} Here are the first transition matrices from $S(q)$ to $L(q)$: \begin{equation*} SL_3 = M_3(S(q),L(q)) = \left( \begin{matrix} 1 & 1 & 1 & 1 \\ . & 1+q & 1 & 2+q \\ . & . & 1 & 1 \\ . & . & . & 1 \end{matrix} \right) \end{equation*} \begin{equation*} SL_4= \left( \begin{matrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ . & 1\!+\!q\!+\!q^2 & 1+q & 2+2q+q^2 & 1 & 2\!+\!2q\!+\!q^2 & 2+q & 3+3q+q^2 \\ . & . & 1+q & 1+q & 1 & 1 & 2+q & 2+q \\ . & . & q & 1+2q+q^2 & . & 1+q & 1+q & 3+3q+q^2 \\ . & . & . & . & 1 & 1 & 1 & 1 \\ . & . & . & . & . & 1+q & 1 & 2+q \\ . & . & . & . & . & . & 1 & 1 \\ . & . & . & . & . & . & . & 1 \end{matrix} \right) \end{equation*} In fact the right-hand column of each of these matrices contains the (un-normalized) steady-state probabilities of each state of the partially asymmetric exclusion process (PASEP). More specifically, the steady-state probabilities of the states $\bullet \bullet$, $\bullet \circ$, $\circ \bullet$, and $\circ \circ$ (the states of the PASEP on $2$ sites) are $\frac{1}{q+5}$, $\frac{q+2}{q+5}$, $\frac{1}{q+5}$, and $\frac{1}{q+5}$, respectively; compare this with the right-hand column of $SL_3$. The steady-state probabilities of $\bullet \bullet \bullet$, $\bullet \bullet \circ$, $\bullet \circ \bullet$, $\bullet \circ \circ$, $\circ \bullet \bullet$, $\circ \bullet \circ$, $\circ \circ \bullet$, $\circ \circ \circ$ are given by the right-hand column of $SL_4$. This will be proved in Section \ref{PASEP}, building on work of \cite{CW}. Note that since all coefficients of the matrix $SP_n$ are explicit (and products of $q$-binomials) and since $SL_n$ comes from $SP_n$ by adding and substracting rows, we have a simple expression of $E_I^J(q)$ as an alternating sum of $C_I^J(q)$, hence of products of $q$-binomials. 
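{\footnotesize One checks that the entries $1$, $2+q$, $1$, $1$ of the right-hand column of $SL_3$ sum to $q+5$, which is exactly the normalizing denominator appearing in the steady-state probabilities quoted above; likewise, the entries of the right-hand column of $SL_4$ sum to $2q^2+8q+14$, the corresponding normalizing factor for the PASEP on $3$ sites. }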
Note that these matrices are invertible for generic values of $q$, in particular for $q=0$. Hence, we can define Hall-Littlewood type functions by \begin{equation} \tilde{H}_J(q) = \sum_I E_I^J(q) L_I \end{equation} which interpolate between $S^I$ (at $q=1$) and a new kind of noncommutative Schur functions $\Sigma_I$ at $q=0$. We will see in the next section that the last column gives the $q$-enumeration of permutation tableaux according to shape. Let us write down the precise statement in that case. \begin{proposition} \label{ei1} Let $c_I(q)$ be the coefficient of $\Psi_I$ in $S^{1^n}(q)$. Let $e_I(q)$ be the coefficient of $L_I(q)$ in $S^{1^n}(q)$. Then, \begin{equation} \label{eqei} e_I(q) = \sum_{J\raffi I} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} c_J(q), \text{ and } \end{equation} \begin{equation} \label{eqci} c_{j_1,\dots,j_r}(q) = [r]_q^{j_1} [r-1]_q^{j_2} \dots [2]_q^{j_{r-1}} [1]_q^{j_r}. \end{equation} \end{proposition} \noindent \it Proof -- \rm Straightforward from Theorem~\ref{recursive} and Equation~\ref{PsiL}. \qed We shall use the notation $\mathrm{QFact}_A(J):=c_J(q)$ in the sequel, regarding it as a generalized $q$-factorial defined for all compositions (the classical one coming from $J=(1^n)$.) \subsubsection{A combinatorial lemma} We need to describe the $q$-product in the $L(q)$ basis. Our first objective will be to understand how ${\rm st}(I,K)$ can be related to ${\rm st}(I,J)$ and ${\rm st}(J,K)$ for all $K\raff J\raff I$. \begin{lemma} \label{XYZ} Let $X=\{x_1<\dots<x_r\} \subseteq Z=\{z_1<\dots<z_m\}$ be two sets of positive integers. For an integer $y$, let \begin{equation} \nu(y) = \#\{z\in Z|z\geq y\} + \#\{x\in X| x\leq y\}, \end{equation} and for a set $Y$, \begin{equation} \nu(Y) = \sum_{y\in Y} \nu(y). \end{equation} For $r\leq s\leq m$, let \begin{equation} \Sigma_s(X,Z) = \sum_{\genfrac{}{}{0pt}{}{X\subseteq Y\subseteq Z}{|Y|=s}} q^{\nu(Y)}. \end{equation} Then, \begin{equation} \Sigma_s(X,Z) = q^{\nu(X) + (r+1)(s-r)+\binom{s-r}{2}} \qbin{m-r}{s-r}. \end{equation} \end{lemma} \noindent \it Proof -- \rm Let $Z/X = U = \{ u_1<\dots<u_{m-r}\}$ and $\nu_i=\nu(u_i)$. Then $\nu_{i}=m-i+1$, so that all $\nu_j$ are consecutive integers. By definition, \begin{equation} \begin{split} \Sigma_s(X,Z) &= \sum_{X\subseteq \{y_1<\dots<y_s\}\subseteq Z} q^{\nu(y_1)+\dots+\nu(y_s)}\\ &= q^{\nu(X)} \sum_{k_1<\dots<k_{s-r}} q^{\nu_{k_1}+\dots+\nu_{k_{s-r}}}\\ &= q^{\nu(X)} e_{s-r}(q^{\nu_1},\dots,q^{\nu_{m-r}}), \end{split} \end{equation} where $e_{n}(X)$ is the usual elementary symmetric function on the alphabet $X$. Thus, \begin{equation} \begin{split} \Sigma_s(X,Z) &= q^{\nu(X)} e_{s-r}(q^{r+1},\dots,q^m) \\ &= q^{\nu(X)} q^{(r+1)(s-r)} e_{s-r}(1,q,\dots,q^{m-r-1}) \\ &= q^{\nu(X) + (r+1)(s-r)+\binom{s-r}{2}} \qbin{m-r}{s-r}. \end{split} \end{equation} \qed {\footnotesize For example, with $X=\{3,7\}$ and $Z=\{1,\dots,10\}$, one has \begin{equation} \Sigma_3(X,Z) = q^{\nu(3)+\nu(7)} \sum_{y\in Z/X} q^{\nu(y)} = q^{15} (q^{10}+q^9+\dots+q^4+q^3) = q^{18} \qbin{8}{1}. \end{equation} \begin{equation} \Sigma_4(X,Z) = q^{15} e_2(q^3,\dots,q^{10}) = q^{21} e_2(1,\dots,q^7) = q^{22} \qbin{8}{2}. \end{equation} } \subsubsection{The $q$-product on the basis $L(q)$} Recall that the $q$-product on ${\bf Sym}$ is a non-associative product but that $S^I(q)$ is the $q$-product on the parts of $I$, multiplied from right to left (see (\ref{non-assoc})). 
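{\footnotesize For instance, for $J=(1,1,2)$ this convention reads $S^{J}(q)=S_1\star_q\bigl(S_1\star_q S_2\bigr)$, the innermost product being evaluated first. }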
\begin{lemma} \label{LLP} For an integer $p$ and a composition $I$, \begin{equation} L_p(q) \star_q L_I(q) = \sum_{K\raff p{\,\triangleright} I} q^{st(K,p{\,\triangleright} I)} \qbin{m_K+p}{p} \Psi_K, \end{equation} where $m_K = \#\{k\in\operatorname{Des}(K) | k\geq p\}$. \end{lemma} \noindent \it Proof -- \rm Note that $L_p(q) = S_p$. The $q$-products $S_p \star_q \Psi_L$ are easily computed by means of Lemma~\ref{qprod-init}. \qed We are now in a position to expand such $q$-products on the $L(q)$ basis: \begin{lemma} For an integer $p$ and a composition $I$, we have \begin{equation} \label{LLL} L_p(q) \star_q L_I(q) = \sum_{\genfrac{}{}{0pt}{}{J\raff p{\,\triangleright} I}{j_1\geq p}} q^{st(J,p{\,\triangleright} I) + \binom{l(J)-l(I)}{2} - \binom{l(I)}{2}} \qbin{l(I)+p-1}{l(J)-1} L_J(q). \end{equation} \end{lemma} \noindent \it Proof -- \rm From Lemma \ref{LLP}, with $r=l(I)-1$, we have \begin{equation} \begin{split} L_p(q) \star_q L_I(q) &= \sum_{K\raff p{\,\triangleright} I} q^{st(K,p{\,\triangleright} I)} \sum_{s\ge r}q^{sr}\qbin{p-r}{p-s}\qbin{p+r}{s}\Psi_K\\ &= \sum_{K\raff p{\,\triangleright} I}\sum_{s\ge r} \qbin{p+r}{s} \left( q^{st(K, p{\,\triangleright} I)+rs}\qbin{p-r}{p-s}\Psi_K \right) \end{split} \end{equation} Thanks to Lemma~\ref{XYZ}, noting that ${\rm st}(K,J)+{\rm st}(J,p{\,\triangleright} I)=\nu(\operatorname{Des}(J))$ with $X=\operatorname{Des}(I)$ and $Z=\operatorname{Des}(K)\cap[p,\infty]$, this is equal to \begin{equation} \begin{split} &= \sum_{s\ge r}\qbin{p+r}{s} \sum_{\genfrac{}{}{0pt}{}{J\raff p{\,\triangleright} I}{l(J)=s+1,\ j_1\ge p}} q^{{\rm st}(J,p{\,\triangleright} I)+\binom{s-r}{2}-\binom{r+1}{2}} \sum_{K\succeq J}q^{{\rm st}(K,J)}\Psi_K\\ &=\sum_{\genfrac{}{}{0pt}{}{J\raff p{\,\triangleright} I}{j_1\geq p}} q^{st(J,p{\,\triangleright} I) + \binom{l(J)-l(I)}{2} - \binom{l(I)}{2}} \qbin{l(I)+p-1}{l(J)-1} L_J(q). \end{split} \end{equation} \qed {\footnotesize For example, \begin{equation} L_{2}(q) \star_q L_{21}(q) = [3] L_{41}(q) + [3] L_{311}(q) + [3] L_{221}(q) + q L_{2111}(q). \end{equation} \begin{equation} L_{2}(q) \star_q L_{12}(q) = [3] L_{32}(q) + q[3] L_{311}(q) + [3] L_{212}(q) + q^2 L_{2111}(q). \end{equation} \begin{equation} \begin{split} L_{3}(q) \star_q L_{22}(q) =& [4] L_{52} + q\qbin{4}{2} L_{511} +\qbin{4}{2} L_{412} + q^2[4] L_{4111}\\ &+ \qbin{4}{2} L_{322}(q) + q^2[4] L_{3211} + q[4] L_{3112} +q^4 L_{31111}. \end{split} \end{equation} } Since $S^J(q) = L_{j_1}(q) \star_q (L_{j_2}(q) \star_q (\dots (L_{j_{r-1}}(q) \star_q L_{j_r}(q))\dots))$, Formula~\ref{LLL} implies the following. \begin{corollary} The coefficient $E_I^J(q)$ is in ${\mathbb N}[q]$. \end{corollary} \begin{corollary} \label{rec-ei1} Recall that $e_I(q)$ is the coefficient of $L_I(q)$ in $S^{1^n}(q)$. Then, for any composition $I=(i_1,\dots,i_r)$, \begin{equation} \begin{split} e_{1+i_1,i_2,\dots,i_r}(q) & = [r]_q e_I + \sum_{k=1}^n q^{k-1} e_{i_1,\dots,i_k+i_{k+1},\dots,i_r}(q),\\ e_{1,i_1,i_2,\dots,i_r}(q) & = e_{I}(q). \end{split} \end{equation} Conversely, this property and the trivial initial conditions determine completely the $e_{I}$. \end{corollary} \begin{proof} This follows from the fact that $S^{1^n}(q) = L_{1}(q) \star_q (\dots (L_{1}(q) \star_q L_{1}(q))\dots)$, by putting $p=1$ into ~(\ref{LLL}). \end{proof} \subsubsection{Towards a combinatorial interpretation of $E_I^J(q)$} In Theorem~\ref{thm-Lig}, we will give a combinatorial interpretation of the coefficients $E_I^J(q)$ expressing $S^J(q)$ in terms of the $L_I(q)$'s. 
But first we need a new combinatorial algorithm sending a permutation to a composition. Let $\sigma$ be a permutation in ${\mathfrak S}_n$. We compute a composition ${\rm LC}(\sigma)$ of $n$ as follows.
\begin{itemize}
\item Consider the Lehmer code of its inverse ${\rm Lh}(\sigma)$, that is, the word whose $i$th letter is the number of letters of $\sigma$ to the left of $i$ and greater than $i$.
\item Fix $S=\emptyset$ and read ${\rm Lh}(\sigma)$ from right to left. At each step, if the entry $k$ is strictly greater than the size of $S$, add to $S$ the $(k-\#(S))$-th element of the sequence $[1,n]$ with the elements of $S$ removed.
\item The set $S$ is the descent set of a composition $C$, and ${\rm LC}(\sigma)$ is the mirror image $\bar C$ of $C$.
\end{itemize}
{\footnotesize For example, with $\sigma=(637124985)$, the Lehmer code of its inverse is ${\rm Lh}(\sigma)=(331240010)$. Then $S$ is $\emptyset$ at first, then the set $\{1\}$ (second step), then the set $\{1,4\}$ (fifth step), then the set $\{1,4,2\}$ (eighth step). Hence $C$ is $(1,1,2,5)$, so that ${\rm LC}(\sigma)=(5,2,1,1)$. }
One can find in Section~\ref{sec-patt} the permutations of ${\mathfrak S}_3$ and ${\mathfrak S}_4$ arranged by rows according to their ${\rm LC}$ statistics and by columns according to their recoil compositions.
\subsubsection{A left ${\bf Sym}_q$-module}
Let $\sim$ be the equivalence relation on ${\mathfrak S}_n$ defined by $\sigma\sim\tau$ whenever ${\rm LC}(\sigma)={\rm LC}(\tau)$. Let ${\mathcal M}$ be the quotient of ${\bf FQSym}$~\cite{NCSF6} by the subspace
\begin{equation}
{\mathcal V}=\{{\bf F}_\sigma -{\bf F}_\tau|\sigma\sim\tau\}\,.
\end{equation}
For a composition $I$, set $\kappa(I)={\rm maj}(\bar I)$, and for a permutation, $\kappa(\sigma)=\kappa({\rm LC}(\sigma))$. Let ${\mathcal F}_I$ denote the equivalence class of $q^{\kappa(\sigma)}{\bf F}_\sigma$ for any $\sigma$ such that ${\rm LC}(\sigma)=I$. Denote by $\circ_q$ the $q$-product of ${\bf FQSym}$ inherited from ${\bf WQSym}$. More precisely, if ${\bf F}'_\sigma=q^{{\rm inv}(\sigma)}{\bf F}_\sigma$ and $\phi_q({\bf F}_\sigma)={\bf F}'_\sigma$, then
\begin{equation}
{\bf F}'_\sigma \, \circ_q {\bf F}'_\tau = \phi_q({\bf F}_\sigma {\bf F}_\tau).
\end{equation}
This is the same structure as the one considered in \cite{NCSF6}. In particular, in the basis ${\bf G}_\sigma={\bf F}_{\sigma^{-1}}$, the product is given by the $q$-convolution
\begin{equation}
{\bf G}_\alpha\circ_q{\bf G}_\beta= \sum_{\gf{\gamma=u\cdot v}{{\rm Std}(u)=\alpha,\, {\rm Std}(v)=\beta}} q^{{\rm inv}(\gamma)-{\rm inv}(\alpha)-{\rm inv}(\beta)}{\bf G}_\gamma\,.
\end{equation}
\begin{lemma}
The quotient vector space ${\mathcal M}$ is a left ${\bf Sym}$-module for the $q$-product of ${\bf FQSym}$, that is,
\begin{equation}
F\equiv G \mod {\mathcal V}\ \Longrightarrow S_p\circ_q F\equiv S_p\circ_q G \mod {\mathcal V}\,.
\end{equation}
\end{lemma}
\noindent \it Proof -- \rm Let $\sigma^{-1}\sim\tau^{-1}\in{\mathfrak S}_l$ and $n=p+l$. We need to compare the codes of the permutations appearing in the $q$-convolutions
\begin{equation}
U={\bf G}_{12\cdots p} \circ_q {\bf G}_\sigma\ \text{and}\ V={\bf G}_{12\cdots p}\circ_q {\bf G}_\tau\,.
\end{equation}
For a subset $S=\{s_1<s_2<\ldots <s_p\}$ of $[n]$, let $\sigma_S$ and $\tau_S$ be the elements of $U$ and $V$ whose prefix of length $p$ is $s_1s_2\cdots s_p$. Then, the codes of $\sigma_S$ and $\tau_S$ coincide on the first $p$ positions, and are equivalent on the last $l$ ones, so that $\sigma_S\sim \tau_S$.
Moreover, $\sigma_S$ and $\tau_S$ arise with the same power of $q$, so we have a module for the $q$-structure as well. \qed {\footnotesize For example, \begin{equation} \begin{split} {\bf F}_{12} \circ_q q{\bf F}_{132} &= q {\bf F}_{12354} + q^2 {\bf F}_{13254} + q^3 {\bf F}_{13524} + q^4 {\bf F}_{13542} + q^3 {\bf F}_{31254}\\ & + q^4 {\bf F}_{31524} + q^5 {\bf F}_{31542} + q^5 {\bf F}_{35124} + q^6 {\bf F}_{35142} + q^7 {\bf F}_{35412}\\ & = {\mathcal F}_{41} + q{\mathcal F}_{41} + {\mathcal F}_{311} + {\mathcal F}_{221} + q^2{\mathcal F}_{41}\\ & + q{\mathcal F}_{311} + q{\mathcal F}_{221} + q^2 {\mathcal F}_{311} + q^2{\mathcal F}_{221} + q {\mathcal F}_{2111}. \end{split} \end{equation} \begin{equation} \begin{split} {\bf F}_{12} \circ_q q {\bf F}_{312} &= q {\bf F}_{12534} + q^2 {\bf F}_{15234} + q^3 {\bf F}_{15324} + q^4 {\bf F}_{15342} + q^3 {\bf F}_{51234}\\ & + q^4 {\bf F}_{51324} + q^5 {\bf F}_{51342} + q^5 {\bf F}_{53124} + q^6 {\bf F}_{53142} + q^7 {\bf F}_{53412}\\ & = {\mathcal F}_{41} + q{\mathcal F}_{41} + {\mathcal F}_{311} + {\mathcal F}_{221} + q^2{\mathcal F}_{41}\\ & + q{\mathcal F}_{311} + q{\mathcal F}_{221} + q^2 {\mathcal F}_{311} + q^2{\mathcal F}_{221} + q {\mathcal F}_{2111}. \end{split} \end{equation} \begin{equation} \begin{split} {\bf F}_{12} \circ_q q {\bf F}_{213} &= q {\bf F}_{12435} + q^2 {\bf F}_{14235} + q^3 {\bf F}_{14325} + q^4 {\bf F}_{14352} + q^3 {\bf F}_{41235}\\ & + q^4 {\bf F}_{41325} + q^5 {\bf F}_{41352} + q^5 {\bf F}_{43125} + q^6 {\bf F}_{43152} + q^7 {\bf F}_{43512}\\ & = {\mathcal F}_{41} + q{\mathcal F}_{41} + {\mathcal F}_{311} + {\mathcal F}_{221} + q^2{\mathcal F}_{41}\\ & + q{\mathcal F}_{311} + q{\mathcal F}_{221} + q^2 {\mathcal F}_{311} + q^2{\mathcal F}_{221} + q {\mathcal F}_{2111}. \end{split} \end{equation} } We have now: \begin{lemma} \label{lem-LL} The left $q$-product of a ${\mathcal F}_I$ by a complete function is given by~(\ref{LLL}): \begin{equation} S_p\circ_q {\mathcal F}_I = \sum_{\genfrac{}{}{0pt}{}{J\raff p{\,\triangleright} I}{j_1\geq p}} q^{st(J,p{\,\triangleright} I) + \binom{l(J)-l(I)}{2} - \binom{l(I)}{2}} \qbin{l(I)+p-1}{l(J)-1} {\mathcal F}_J. \end{equation} \end{lemma} \noindent \it Proof -- \rm Let us first show that this is true at $q=1$. Let $\sigma$ be such that ${\rm LC}(\sigma^{-1})=I$. By definition of ${\rm LC}$, the permutations $\tau$ occuring in ${\bf G}_{12\cdots p}\circ_q{\bf G}_\sigma$ satisfy $\overline{{\rm LC}(\tau^{-1})} \succeq \overline{{\rm LC}(\sigma^{-1})}$, and the codes of those permutations have the form \begin{equation} s_1s_2\cdots s_p t_1t_2\cdots t_l\,, \end{equation} where $t=t_1t_2\cdots t_l$ is the code of $\sigma$ and $s_1\le s_2\le\ldots\le s_p$. The compositions $J$ such that $l(J)-l(I)$ has a fixed value $m$ will all be obtained by fixing the last $m$ values $s_p,\ldots,s_{p-m+1}$ in a way depending on the code $t$, the first $p-m$ being allowed to be any weakly increasing sequence \begin{equation} s_1\le s_2\le\ldots\le s_p\le l(J)-1\,,\ \text{which leaves } \binom{p+l(I)-1}{l(J)-1}\ \text{choices.} \end{equation} Now, in the $q$-convolution ${\bf G}_{12\ldots p}\circ_q{\bf G}_\sigma$, these permutations $\tau$ occur with a coefficient $q^{{\rm inv}(\tau)-{\rm inv}(\sigma)}$, so that the coefficient of ${\mathcal F}_J$ is, up to a power of $q$, the $q$-binomial coefficient $\qbin{p+l(I)-1}{l(J)-1}$. By our choice of the normalization ${\mathcal F}_I=q^{\kappa(\sigma)}{\bf F}_\sigma$, this power of $q$ is the same as in~(\ref{LLL}). 
\qed
{\footnotesize As one can check on the previous examples, we have indeed
\begin{equation}
{\mathcal F}_{2} \circ_q {\mathcal F}_{21} = (1+q+q^2) {\mathcal F}_{41} + (1+q+q^2) {\mathcal F}_{311} + (1+q+q^2) {\mathcal F}_{221} + q {\mathcal F}_{2111}.
\end{equation}
}
By Lemma~\ref{lem-LL} and formula~(\ref{LLL}), the two bases ${\mathcal F}$ and $L(q)$ have the same multiplication formula, so that $E_I^J(q)$ is also the coefficient of ${\mathcal F}_I$ in the expansion of $S^J(q)$. Hence
\begin{theorem} \label{thm-Lig}
Let $I$ and $J$ be two compositions of $n$. Let ${\rm PP}(I,J)$ be the set of permutations whose ${\rm LC}$ statistic is $I$ and whose recoil composition is finer than $J$. Then,
\begin{equation}
E_I^J(q) = q^{-{\rm maj}(\overline{I})} \sum_{\sigma\in {\rm PP}(I,J)} q^{{\rm inv}(\sigma)}.
\end{equation}
\end{theorem}
\subsection{The transition matrix $M(R(q),L(q))$}
The last transition matrix which remains to be computed is the one from $R(q)$ to $L(q)$.
\subsubsection{First examples}
We have the following matrices for $n=3,4$:
\begin{equation*}
RL_3 = M_3(R(q),L(q)) = \left( \begin{matrix} 1 & . & . & . \\ . & 1+q & 1 & . \\ . & . & 1 & . \\ . & . & . & 1 \end{matrix} \right)
\end{equation*}
\begin{equation*}
RL_4 = \left( \begin{matrix} 1 & . & . & . & . & . & . & . \\ . & 1+q+q^2 & 1+q & . & 1 & q & . & . \\ . & . & 1+q & . & 1 & . & . & . \\ . & . & q & 1+q+q^2 & . & 1+q & 1 & . \\ . & . & . & . & 1 & . & . & . \\ . & . & . & . & . & 1+q & 1 & . \\ . & . & . & . & . & . & 1 & . \\ . & . & . & . & . & . & . & 1 \end{matrix} \right)
\end{equation*}
\subsubsection{Combinatorial interpretation}
The coefficient of $L_I(q)$ in $R_J(q)$ will be denoted by $F_I^J(q)$. From the characterization in Theorem~\ref{thm-Lig} of $M(S(q),L(q))$ in terms of permutations we obtain:
\begin{theorem}
Let $I$ and $J$ be two compositions. Let ${\rm PP}'(I,J)$ be the set of permutations whose ${\rm LC}$ statistic is $I$ and whose recoil composition is $J$. The coefficient $F_I^J(q)$ of $L_I(q)$ in the expansion of $R_J(q)$ is given by
\begin{equation}
q^{-{\rm maj}(\overline{I})} \sum_{\sigma\in {\rm PP}'(I,J)} q^{{\rm inv}(\sigma)}.
\end{equation}
\end{theorem}
\section{The PASEP and type A permutation tableaux}\label{PASEP}
Permutation tableaux (of type A) are certain fillings of Young diagrams with $0$'s and $1$'s which are in bijection with permutations (see \cite{SW} for two bijections). They are a distinguished subset of Postnikov's (type A) $\hbox{\rotatedown{$\Gamma$}}$-diagrams~\cite{Postnikov}, which index cells of the totally non-negative part of the Grassmannian. Apart from this geometric connection, permutation tableaux are of interest as they are closely connected to a model from statistical physics called the partially asymmetric exclusion process (PASEP) \cite{CW}. More precisely, the PASEP with $n$ sites is a model in which particles hop back and forth (and in and out) of a one-dimensional lattice, such that at most one particle may occupy a given site (the probability of hopping left is $q$ times the probability of hopping right). See \cite{CW} for full details. Therefore there are $2^n$ possible states of the PASEP. There is a simple bijection from a state $\tau$ of the PASEP to a Young diagram $\lambda(\tau)$ whose semiperimeter is $n+1$.
The main result of \cite{CW} is that the steady state probability that the PASEP is in configuration $\tau$ is equal to the $q$-enumeration of permutation tableaux of shape $\lambda(\tau)$ divided by the $q$-enumeration of all permutation tableaux of semiperimeter $n+1$. In this section we will give an explicit formula for the $q$-enumeration of permutation tableaux of a given shape. So in particular this is an explicit formula for the steady state probability of each state of the PASEP. Additionally, by results of~\cite{SW}, this formula counts permutations with a given set of weak excedances according to {\it crossings}; it also counts permutations with a given set of {\it descent bottoms} according to occurrences of the pattern $2-31$. \subsection{Permutation tableaux} Regard the following $(k, n-k)$ rectangle (here $k=3$ and $n=8$) \begin{equation} \tableaux{\\{}&{}&{}&{}&{} \\ {}&{}&{}&{}&{} \\ {}&{}&{}&{}&{}\\&} \end{equation} as a poset $Q^A_{k,n}$: the elements of the poset are the boxes, and box $b$ is less than $b'$ if $b$ is southwest of $b'$. We then define a {\it type $A$ Young diagram} contained in a $(k,n-k)$ rectangle to be an order ideal in the poset $Q^A_{k,n}$. This corresponds to the French notation for representing Young diagrams. We will sometimes refer to such a Young diagram by the partition $\lambda$ given by the lengths of the rows of the order ideal. Note that we allow partitions to have parts of size $0$. As in \cite{SW}, we define a type A {\em permutation tableau} $\mathcal{T}$ to be a type A Young diagram $Y_\lambda$ together with a filling of the boxes with $0$'s and $1$'s such that the following properties hold: \begin{enumerate} \item Each column of the diagram contains at least one $1$. \item There is no $0$ which has a $1$ below it in the same column {\em and} a $1$ to its left in the same row. \end{enumerate} \noindent We call such a filling a \emph{valid} filling of $Y_\lambda$. Here is an example of a type A permutation tableau. \begin{equation} \tableaux{\\{0}&{1}&{1}&& \\ {1}&{0}&{1}&{1}& \\ {1}&{0}&{1}&{0}&{1}\\&} \end{equation} \noindent Note that if we forget the requirement (1) in the definition of type A permutation tableaux then we recover the description of a (type A) $\hbox{\rotatedown{$\Gamma$}}$-diagram~\cite{Postnikov}, an object which represents a cell in the totally nonnegative part of a Grassmannian. In that case, the total number of $1$'s corresponds to the dimension of the cell. We define the {\it rank} $\mathrm{rank}(\mathcal{T})$ of a permutation tableau (of type A) $\mathcal{T}$ with $k$ columns to be the total number of $1$'s in the filling minus $k$. (We subtract $k$ since there must be at least $k$ $1$'s in a valid filling of a tableau with $k$ columns.) \subsection{Enumeration of permutation tableaux by shape} Starting from a partition with $k$ rows and $n-k$ columns, one encodes it as a composition $I=(i_1,\dots,i_k)$ of $n$ as follows: $i_1-1$ is the number of columns of length $k$, $i_2-1$ is the number of columns of length $k-1$, and \dots, $i_k-1$ is the number of columns of length $1$. Let $\ell(I)$ denote the number of parts of $I$. Then the number ${\rm PT}^A_I$ of permutation tableaux of shape corresponding to $I$ is given by a simple formula coming from combinatorics of noncommutative symmetric functions. Indeed, according to~\cite[Proposition 9.2]{Tev}, \begin{equation} L_1^n = \sum_{I\vDash n} g_I \Psi_I, \end{equation} where \begin{equation} g_I = \prod_{k=1}^{l(I)} (l(I)-k+1)^{i_k}. 
\end{equation}
Hence, the coefficient $e_J$ of
\begin{equation}
L_1^n = \sum_{J\vDash n} e_J L_J
\end{equation}
is given by
\begin{equation}
e_J = \sum_{I\raff J} (-1)^{l(I)-l(J)} \prod_{k=1}^{l(I)} (l(I)-k+1)^{i_k}.
\end{equation}
Moreover, from~\cite{HNTT}, Theorem 5.1, we know that $e_I$ is the number of permutations such that $\operatorname{GC}(\sigma)=I$. Finally, since permutation tableaux of a given shape are in bijection with permutations with given descent bottoms~\cite{SW}, and since $\operatorname{GC}$ does the same up to reverse complement of the permutations, this number is also the number of permutation tableaux of shape $I$.
\begin{theorem} \label{Theorem1}
\begin{equation}
{\rm PT}^A_I = \sum_{J\raffi I} (-1)^{\ell(I)-\ell(J)} \mathrm{Fact}(J),
\end{equation}
where the sum is over the compositions $J$ coarser than $I$ and where $\mathrm{Fact}$ is defined by
\begin{equation}
\mathrm{Fact}(j_1,\dots,j_p) := p^{j_1} (p-1)^{j_2} \dots 2^{j_{p-1}} 1^{j_p}.
\end{equation}
\end{theorem}
\qed
For example, with $I=(3,4,1)$, we get
\begin{equation}
{\rm PT}^A_{341} = 3^3 2^4 1^1 - 2^7 1^1 - 2^3 1^5 + 1^8 = 297.
\end{equation}
\subsubsection{$q$-enumeration of permutation tableaux according to their shape}
In this section, we make the connection between the coefficients $e_I(q)$ previously seen, and the $q$-enumeration of permutation tableaux. Recall that $e_I(q)$ is the coefficient of $L_I(q)$ in $S^{1^n}(q)$. We saw in Corollary \ref{rec-ei1} that for all compositions $I=(i_1,\dots,i_r)$, the following hold:
\begin{itemize}
\item $e_{(1,i_1,i_2,\dots,i_r)}(q) = e_{I}(q).$
\item $e_{(1+i_1,i_2,\dots,i_r)}(q) = [r]_q e_I(q) + \sum_{k=1}^{r-1} q^{k-1} e_{(i_1,\dots,i_k+i_{k+1},\dots,i_r)}(q)$
\end{itemize}
\noindent It is possible to transform this result into a $q$-enumeration of permutation tableaux by their rank. Let
\begin{equation}
{\rm PT}^A_I(q) := \sum_T q^{\mathrm{rank}(T)},
\end{equation}
where the sum is over all permutation tableaux whose shape corresponds to $I$. The following result generalizes Theorem \ref{Theorem1}. Its proof follows directly from Proposition~\ref{ei1}, Corollary~\ref{rec-ei1}, and Lemma~\ref{lem-ptA} below.
\begin{theorem} \label{Theorem2}
Let $I$ be a composition. Then,
\begin{equation} \label{eqptA}
{\rm PT}^A_I(q) = e_I(q) = \sum_{J\raffi I} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_A(J),
\end{equation}
where $\mathrm{QFact}_A$ is recalled to be
\begin{equation}
\mathrm{QFact}_A(j_1,\dots,j_p) := [p]_q^{j_1} [p-1]_q^{j_2} \dots [2]_q^{j_{p-1}} [1]_q^{j_p}.
\end{equation}
\end{theorem}
By the results of~\cite{CW}, Theorem \ref{Theorem2} gives an explicit formula for the steady state probabilities in the partially asymmetric exclusion process (PASEP). More specifically, consider the PASEP on a one-dimensional lattice of $n$ sites where particles hop right with probability $dt$, hop left with probability $q dt$, enter from the left at a rate $dt$, and exit to the right at a rate $dt$. Let us number the $n$ sites from {\it right to left} with the numbers $1$ through $n$. Then we have the following result.
\begin{corollary}
Recall the notation of Theorem~\ref{Theorem2}. Let $I$ be a composition of $n+1$, and let $Z_n$ denote the partition function for the PASEP. Let $\tau$ denote the state of the PASEP in which all sites of $\operatorname{Des}(I)$ are occupied by a particle and all sites of $[n]\setminus \operatorname{Des}(I)$ are empty.
Then the probability that in the steady state, the PASEP is in state $\tau$, is \begin{equation*}\frac{\sum_{J\raffi I} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_A(J)}{Z_n}. \end{equation*} \end{corollary} By the results of \cite{SW}, this is also an explicit formula enumerating permutations with a fixed set of {\it weak excedances} according to the number of {\it crossings}; equivalently, an explicit formula enumerating permutations with a fixed set of {\it descent bottoms} according to the number of occurences of the {\it generalized pattern} $2-31$. See \cite{SW} for definitions. More specifically, let $I$ be a composition of $n+1$, let $DB(I)$ be the descent set of the reverse composition of $I$, and let $W(I) = \{1\} \cup \{1 + DB(I) \}$. Here $1+DB(I)$ denotes the set obtained by adding $1$ to each element of $DB(I)$. If $\sigma$ is a permutation, let $(2-31)\sigma$ denote the number of occurrences of the pattern $2-31$ in $\sigma$, and let ${cr}(\sigma)$ denote the number of {\it crossings} of $\sigma$. Let $T_I(q) = \sum_{\sigma} q^{(2-31)\sigma}$ be the sum over all permutations in $S_{n+1}$ whose set of descent bottoms is $DB(I)$. And let $T'_I(q) = \sum_{\sigma} q^{{cr}(\sigma)}$ be the sum over all permutations in $S_{n+1}$ whose set of weak excedances is $W(I)$. \begin{corollary} \begin{equation*}T_I(q) = T'_I(q) = {\sum_{J\raffi I} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_A(J)}. \end{equation*} \end{corollary} For example with $I=(3,4,1)$, the compositions coarser than $I$ are $(3,4,1)$, $(7,1)$, $(3,5)$, and $(8)$, so we get \begin{equation} \begin{split} {\rm PT}^A_{341}(q) &= \frac{1}{q^2}\left( \frac{[3]_q^3 [2]_q^4}{q} - \frac{[2]_q^7}{q} - \frac{[2]_q^3}{1} + \frac{1}{1} \right)\\ &= q^7 + 7 q^6 + 24 q^5 + 52 q^4 + 76 q^3 + 75 q^2 + 47 q + 15. \end{split} \end{equation} The descent set $D(I)$ of $I$ is $\{3,7\}$, which corresponds to the following state of the PASEP: $\tau=\bullet \circ \circ \circ \bullet \circ \circ$. Therefore the probability that in the steady state, the PASEP is in state $\tau$, is $\frac{q^7 + 7 q^6 + 24 q^5 + 52 q^4 + 76 q^3 +75q^2 + 47 q + 15}{Z_7}$. The polynomial $q^7 + 7 q^6 + 24 q^5 + 52 q^4 + 76 q^3 +75q^2 + 47 q + 15$ also enumerates the permutations in $S_8$ with set of descent bottoms $\{1,5\}$ according to occurrences of the pattern $2-31$. And it enumerates permutations in $S_8$ with weak excedances in positions $\{1,2,6\}$ according to crossings. The reader might want to compare~(\ref{eqptA}) with~(\ref{eqei}). \begin{lemma} \label{lem-ptA} Let $I=(i_1,\dots,i_r)$ be a composition. Then \begin{equation} {\rm PT}^A_{(1,i_1,i_2,\dots,i_r)}(q) = {\rm PT}^A_{I}(q), \end{equation} \begin{equation} {\rm PT}^A_{(1+i_1,i_2,\dots,i_r)}(q) = [r]_q {\rm PT}^A_I + \sum_{k=1}^n q^{k-1} {\rm PT}^A_{(i_1,\dots,i_k+i_{k+1},\dots,i_r)}(q). \end{equation} \end{lemma} \noindent \it Proof -- \rm First note that $PT^A_{(1,i_1,i_2,\dots,i_r)}(q)= PT^A_{I}(q)$: this just says that the $q$-enum\-eration of permutation tableaux of shape $\lambda$ is the same as the $q$-enumeration of permutation tableaux of shape $\lambda'$, where $\lambda'$ is obtained from $\lambda$ by adding a row of length $0$. Therefore we just need to prove the second equality. Let $\lambda=(\lambda_1,\dots,\lambda_r)$ be a partition. 
Then, in terms of partitions, the statement translates as: \begin{equation} {\rm PT}^A_{\lambda}(q) = [r] PT^A_{(\lambda_1-1, \lambda_2-1, \dots, \lambda_r-1)}(q) + \sum_{k=1}^r q^{k-1} PT^A_{(\lambda_1,\dots,\lambda_{r-k}, \widehat{\lambda_{r-k+1}},\lambda_{r-k+2}-1,\dots,\lambda_{r}-1)}(q), \end{equation} where $\widehat{\lambda_{r-k+1}}$ means that this part has been removed. To this aim, we need to introduce the notion of a {\it restricted} zero. We say that a zero in a tableau is {\it restricted} if there is a $1$ below it in the same column. Note that every entry to the left of and in the same row as the restricted zero must also be zero. We will prove the recurrence by examining the various possibilities for the set $S$ of $r$ boxes of the Young diagram $\lambda$ which are rightmost in their row. We will partition (most of) the permutation tableaux with shape $\lambda$ based on the position of the highest restricted zero among $S$. We will label rows of the Young diagram from top to bottom, from $1$ to $r$. Consider the set of tableaux obtained via the following procedure: choose a row $k$ for $1 \leq k \leq r-1$, and fill it entirely with $0$'s. Also fill each box of $S$ in row $\ell$ for any $\ell>k$ with a $1$. Now ignore row $k$ and the filled boxes of $S$, and fill the remaining boxes (which can be thought of as boxes of a partition of shape $\lambda':=(\lambda_1,\dots,\lambda_{k-1},\widehat{\lambda_k}, \lambda_{k+1}-1,\dots,\lambda_r-1)$) in any way which gives a legitimate permutation tableau of shape $\lambda'$ (see Figure \ref{Step1}.) Note that if we add back the ignored boxes, we will increase the rank of the first tableau by $k-1$. So the $q$-enumeration of the tableaux under consideration is exactly $\sum_{k=1}^r q^{k-1} PT_{(\lambda_1,\dots,\lambda_{r-k}, \widehat{\lambda_{r-k+1}},\lambda_{r-k+2}-1,\dots,\lambda_{r}-1)}(q).$ \begin{figure}\label{Step1} \end{figure} Let us denote the columns of $\lambda$ which contain a north-east corner of the Young diagram $\lambda$ as $c_1,\dots,c_h$; we will call them \emph{corner columns}. Denote the lengths of those columns by $C_1,\dots,C_h$, so $C_1>\dots >C_h$. And denote the differences of their lengths by $d_1:=C_1-C_2,\dots,d_{h-1}:=C_{h-1}-C_h,d_h:=C_h$. Clearly, our procedure constructs all permutation tableaux of shape $\lambda$ with the following description: at least one box of $S$ is a restricted zero. Furthermore, if we choose the restricted zero of $S$ (say in box $b$) which is in the lowest row (say row $k$), then every box of $S$ in a row above $k$ is filled with a $1$. Equivalently, each corner column $c_j$ left of $b$ has its top $d_j$ boxes filled with $1$'s, and contains at least $d_j+1$ ones total; and the corner column containing $b$ contains at least $d+1$ ones total, where $d$ is the number of boxes above $b$ in the same column. The permutation tableaux of shape $\lambda$ which this procedure has {\it not constructed} are those tableaux such that either no box of $S$ is a restricted zero, or else there {\it is} a box of $S$ which is a restricted zero. Let $b$ denote the lowest such box. The condition that all boxes of $S$ above $b$ must be $1$'s is violated. Let $W$ denote this set of tableaux. The following construction gives rise to all permutation tableaux in $W$ (See Figure \ref{Step2}.) \begin{figure}\label{Step2} \end{figure} Choose a corner column $c_j$ and a number $m$ such that $1 \leq m \leq d_j$. Fill the top $m$ boxes of $c_j$ with $1$'s and the remaining boxes with $0$'s. 
For each $i<j$, fill the top $d_i$ boxes of column $c_i$ with $1$'s. Now ignore the boxes that have been filled, and choose any filling of the remaining boxes -- which form a partition of shape $\lambda'':=(\lambda_1-1,\lambda_2-1,\dots,\lambda_r-1)$ -- which gives a legitimate permutation tableau of shape $\lambda''$. Note that adding back the boxes we had ignored will add $d_1+\dots+d_{h-1}+m$ to the rank of the tableau of shape $\lambda'$. Since the quantity $d_1+\dots+d_{h-1}+m$ can range between $0$ and $r-1$, the rank of the tableaux in $W$ is $ [r] PT_{(\lambda_1-1, \lambda_2-1, \dots, \lambda_r-1)}(q)$. \qed Note that we could give an alternative (direct) proof of Theorem~\ref{Theorem2} by using the following recurrences for permutation tableaux (which had been observed in~\cite{SW}). See Figure \ref{ARecur} for an illustration of the second recurrence. \begin{lemma}\label{useful} The following recurrences for type A permutation tableaux hold.\\ \begin{itemize} \item $PT^A_{(i_2,i_3,\dots,i_n)}(q) = PT^A_{(1,i_2,i_3,\dots,i_n)}(q)$\\ \item $PT^A_{(i_1,i_2,\dots,i_n)}(q) = q PT^A_{(i_1 - 1, i_2+1,i_3,\dots,i_n)}(q) + PT^A_{(i_1 -1,i_2,\dots,i_n)}(q)\\ + PT^A_{(1,i_1+i_2-1,i_3,\dots,i_n)}(q)$. \end{itemize} \end{lemma} \begin{figure}\label{ARecur} \end{figure} \section{Permutation tableaux and enumeration formulas in type B} One can also define~\cite{LW} type B $\hbox{\rotatedown{$\Gamma$}}$-diagrams and permutation tableaux, where the Type B $\hbox{\rotatedown{$\Gamma$}}$-diagrams index cells in the odd orthogonal Grassmannian, and type B permutation tableaux are in bijection with signed permutations. In this section, we will enumerate permutation tableaux of type B of a fixed shape, according to rank. This formula can be given an interpretation in terms of signed permutations. To define type $B_n$ Young diagrams, regard the following shape \begin{equation} \tableaux{\\{}&{}&{}&{}\\{}&{}&{}\\{}&{}\\{}\\&} \end{equation} as representing a poset $Q^B_n$ (here $n=4$): the elements of the poset are the boxes, and box $b$ is less than $b'$ if $b$ is southwest of $b'$. We then define a {\it type $B_n$ Young diagram} to be an order ideal in the poset $Q^B_n$. As in \cite{LW}, we define a type B {\em permutation tableau} $\mathcal{T}$ to be a type B Young diagram $Y_\lambda$ together with a filling of the boxes with $0$'s and $1$'s such that the following properties hold: \begin{enumerate} \item Each column of the diagram contains at least one $1$. \item There is no $0$ which has a $1$ below it in the same column {\em and} a $1$ to its left in the same row. \item If a diagonal box contains a $0$, every box in that row must contain a $0$. \end{enumerate} \noindent Here is an example of a type B permutation tableau. \begin{equation} \tableaux{\\{1}\\{0}&{0}&{1}\\ {0}&{0}&{0}\\ {1}&{1}\\ {1}\\&} \end{equation} Note that if we forget requirement (1) in the definition of a type B permutation tableaux then we recover the description of a type B $\hbox{\rotatedown{$\Gamma$}}$-diagram~\cite{LW}, an object which represents a cell in the totally nonnegative part of an odd orthogonal Grassmannian. As before, we define the \emph{rank} $\mathrm{rank}(\mathcal{T})$ of a permutation tableau $\mathcal{T}$ (of type B) with $k$ columns to be the total number of $1$'s in the filling minus $k$. Starting from a type B Young diagram $Y_{\lambda}$ inside a staircase of height $n+1$, we encode it as a composition of $n$ as follows. 
If $k$ is the width of the widest row of $Y_{\lambda}$, then $I=(i_1,\dots,i_{k+1})$ is defined by: $i_1+1$ is the number of rows of length $k$, $i_2$ is the number of rows of length $k-1$, \dots, $i_{k+1}$ is the number of rows of length $0$. We now explain how to enumerate type B permutation tableaux of a fixed shape according to their rank. Define $\mathrm{QFact}_B$ by \begin{equation} \begin{split} \mathrm{QFact}_B(j_1,\dots,j_p) :=& \mathrm{QFact}_A(j_1,\dots,j_p) \prod_{t=1}^{p-1} \left(1+q^{t}\right) \\ =& [p]_q^{j_1} [p-1]_q^{j_2} \dots [2]_q^{j_{p-1}} [1]_q^{j_p} \prod_{t=1}^{p-1} \left(1+q^{t}\right). \end{split} \end{equation} \begin{theorem} \label{TheoremB} Let $I$ be a composition. \begin{equation} \label{eqptB} {\rm PT}^B_I(q) = \sum_{J\raffi I} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_B(J). \end{equation} where $p$ is the length of $J$. \end{theorem} Note that the formula enumerating type B permutation tableaux is very similar to the formula enumerating type A permutation tableaux. As an example, suppose we want to enumerate according to rank the type $B$ permutation tableaux that have the following shape: \begin{equation} \tableaux{\\{}\\{}&{}\\ {}&{}\\ {}\\ &} \end{equation} We take $n=4$ (we would get the same answer for any $n>4$), and $k=2$ since the widest row has width $2$. Then the corresponding composition is $I=(1,2,0)$. We then get \begin{align*} {\rm PT}^B_{(1,2,0)}(q)&=q^{-2}(q^{-1}[3]_q [2]_q^2 (1+q) (1+q^2) -q^{-1}[2]_q^{3}(1+q) - [2]_q (1+q) +1)\\ &= q^4+4q^3+8q^2+10q+6. \end{align*} We will prove Theorem \ref{TheoremB} directly: we first prove some recurrences for type B permutation tableaux, and then prove that the formula in Theorem \ref{TheoremB} satisfies the same recurrences. \begin{lemma}\label{Brecur} The following recurrences for type B permutation tableaux hold. \begin{equation} {\rm PT}^B_{(0,i_2,i_3,\dots,i_k)}(q) = {\rm PT}^B_{(i_2,i_3,\dots,i_k)}(q),\\ \end{equation} \begin{equation} \begin{split} {\rm PT}^B_{(i_1,i_2,i_3,\dots,i_k)}(q) =& {\rm PT}^B_{(i_1-1,i_2,i_3,\dots,i_k)}(q)+ qPT^B_{(i_1-1,i_2+1,i_3,\dots,i_k)}(q)\\ &+ PT^B_{(0,i_1+i_2-1,i_3,\dots,i_k)}(q). \end{split} \end{equation} \end{lemma} \noindent \it Proof -- \rm The first recurrence says that enumerating permutation tableaux of a shape which has a unique row of maximal width is the same as enumerating permutation tableaux of the shape obtained from the first shape by deleting the rightmost column. This is clear, since the rightmost column will have only one box which must be filled with a $1$. \begin{figure}\label{BRecur} \end{figure} To see that the second recurrence holds, see Figure \ref{BRecur}. Consider the topmost box $b$ of the rightmost column of an arbitrary type B permutation tableau of shape corresponding to $(i_1,\dots,i_k)$. Since $i_1>0$, the rightmost column has at least two boxes. If $b$ contains a $0$, then by definition of type B permutation tableaux, there is a $1$ below it in the same column -- which implies that the entire row containing $b$ must be filled with $0$'s. We can delete that entire row and what remains will be a type B permutation tableau (of smaller shape). If $b$ contains a $1$ and there is another $1$ in the same column, then we can delete the box $b$ and what remains will be a type B permutation tableau. If $b$ contains a $1$ and there is no other $1$ in the same column, then let $b'$ denote the bottom box of that column. By the definition of type B permutation tableau, the entire row of $b'$ is filled with $0$'s. 
If we delete the entire row of $b'$ and every box below and in the same column as $b$, then what remains will be a type B permutation tableau. \qed Now we prove Theorem \ref{TheoremB}. \noindent \it Proof -- \rm[of the theorem] Let \begin{equation} g_I(q) := \sum_{J\raffi I} (-1/q)^{\ell(I)-\ell(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_B(J). \end{equation} We want to prove that ${\rm PT}^B_I(q) = g_I(q)$. We claim that it is enough to prove the following two facts: \begin{enumerate} \item $g_{(0,i_2,i_3,\dots,i_n)}(q) = g_{(i_2,i_3,\dots,i_n)}(q)$ \item $g_{(i_1,i_2,\dots,i_n)}(q) = q g_{(i_1-1,i_2+1,i_3,\dots,i_n}(q) + g_{(i_1 -1,i_2,i_3,\dots,i_n)}(q) + g_{(0,i_1+i_2-1,i_3,\dots,i_n)}(q)$ when $i_1>0$. \end{enumerate} By Lemma \ref{Brecur}, both of these recurrences are true for $PT^B_I(q)$. And the two recurrences together clearly determine $g_{I}(q)$ for any composition $I$, which is why it suffices to prove these recurrences. Consider the first recurrence. To prove it, we will pair up the terms that occur in \begin{equation} g_{0,i_2,\dots,i_n}(q) := \sum_{J\raffi I} (-1/q)^{\ell(I)-\ell(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_B(J), \end{equation} pairing each composition of the form $J:=(0,j_1,j_2,\dots,j_r)$ with the composition $J':=(j_1,j_2,\dots,j_r)$. Note that $\ell(J)=\ell(J')+1$ and ${\rm st'}(I,J) = {\rm st'}(I,J')+1$. Also \begin{equation} \mathrm{QFact}_B(0,j_1,j_2,\dots,j_r) = \mathrm{QFact}_B(j_1,j_2,\dots,j_r) (1+q^r) \end{equation} so that \begin{equation} \mathrm{QFact}_B(0,j_1,j_2,\dots,j_r) - \mathrm{QFact}_B(j_1,j_2,\dots,j_r) = q^{r} \mathrm{QFact}_B(j_1,j_2,\dots,j_r). \end{equation} And now it follows from the fact that \begin{equation} {\rm st'}((0,I),(0,J)) = q^{r-1} {\rm st'}(I,J), \end{equation} that the contribution to $g_{0,i_2,\dots,i_n}(q)$ by the pair of compositions $J$ and $J'$ is exactly the contribution to $g_{i_2,\dots,i_n}(q)$ by the composition $(j_1,\dots,j_r)$. So $g_{(0,i_2,i_3,\dots,i_n)}(q) = g_{(i_2,i_3,\dots,i_n)}(q)$. Now let us turn our attention to the second recurrence. We prove the second recurrence by showing that each term of $g_{(i_1,\dots,i_n)}(q)$ comes from either one term each from $qg_{(i_1-1,i_2+1,i_3,\dots,i_n)}(q)$ and $g_{(i_1 -1,i_2,\dots,i_n)}(q)$, or one term each from $qg_{(i_1-1,i_2+1,i_3,\dots,i_n)}(q)$ and $g_{(i_1 -1,i_2,\dots,i_n)}(q)$ and two terms from $g_{(0,i_1+i_2-1,i_3,\dots,i_n)}(q)$. Let us denote the relevant compositions by $I:=(i_1,\dots,i_n)$, $I':=(i_1-1,i_2+1,i_3,\dots,i_n)$, $I'':=(i_1-1,i_2,\dots,i_n)$ and $I''':=(0,i_1+i_2-2,i_3,\dots,i_n)$. First, consider the terms of $g_{(i_1,\dots,i_n)}(q)$ corresponding to compositions $J$ such that the first part of $J$ is $i_1$, \emph{i.e.}, $J$ has the form $(i_1,j_2,j_3,\dots,j_r)$. Let us compare this term to the terms of $q g_{(i_1-1,i_2+1,i_3,\dots,i_n)}(q)$ and $g_{(i_1-1,i_2,\dots,i_n)}(q)$ corresponding to the partitions $J':=(i_1-1,j_2+1,j_3,\dots,j_r)$ and $J'':=(i_1-1,j_2,j_3,\dots,j_r)$, respectively. All three terms have the same sign and the same ${\rm st'}$: ${\rm st'}(I,J)={\rm st'}(I',J')={\rm st'}(I'',J'')$. And now it is easy to see that $q \mathrm{QFact}_B(J')+\mathrm{QFact}_B(J'') = \mathrm{QFact}_B(J)$: \begin{equation} \begin{split} & q [r]^{i_1-1} [r-1]^{j_2+1} [r-2]^{j_3} \dots + [r]^{i_1-1} [r-1]^{j_2} [r-2]^{j_3} \dots\\ &= (q [r-1] +1)([r]^{i_1-1} [r-1]^{j_2} [r-2]^{j_3} \dots)\\ & = [r]^{i_1} [r-1]^{j_2} [r-2]^{j_3} \dots. \end{split} \end{equation} Note that all terms contain the extra factor $\prod_{t=1}^{r-1} (1+q^t)$. 
Therefore the term corresponding to $J$ is equal to the sum of the terms corresponding to $J'$ and $J''$. Now consider each term of $g_{(i_1,\dots,i_n)}(q)$ which corresponds to a composition $J$ such that the first part of $J$ is {\it not} $i_1$, \emph{i.e.}, $J$ has the form $(j_1,j_2,j_3,\dots,j_r)$ where $j_1 = i_1+i_2+\dots +i_k$ where $k \geq 2$. Let us compare this to the following four terms: the term of $q g_{(i_1-1,i_2+1,i_3,\dots,i_n)}(q)$ corresponding to the composition $J':=J$; the term of $g_{(i_1 -1,i_2,\dots,i_n)}(q)$ corresponding to the composition $J'':=(j_1-1,j_2,\dots,j_r)$; and the two terms of $g_{(0,i_1+i_2-1,i_3,\dots,i_n)}(q)$ corresponding to the compositions $J''':=(0,j_1-1,j_2,\dots,j_r)$ and $J^{(4)}:=J''$. Note that the terms corresponding to $J, J', J''$, and $J^{(4)}$ have the same sign, while the term corresponding to $J'''$ has the opposite sign. And all five terms have the same ${\rm st'}$ statistic. The quantity $\mathrm{QFact}_B$ is nearly the same for every term, and if we divide each term by $\mathrm{QFact}_B(J'')$, it remains to verify the equation: $[r+1]_q = q[r+1]_q+1-(1+q^{r+1}) +1$. This is clearly true. We have now accounted for all terms involved in the recurrence. This completes the proof of the theorem. \qed It is very likely that colored Hopf algebra analogues of ${\bf WQSym}$, ${\bf FQSym}$, ${\bf Sym}$ already defined in~\cite{MR,Poi,NT,BH,NTqthooks,NTcolored} could be used to justify the $q$-enumeration of permutation tableaux of type $B$. Based on preliminary calculations, we believe that the type B analogue of the matrices from Section \ref{FirstExamples} is given by computing the transition matrix between the $S^I$ and two new bases $\Psi^B_I$ and $L^B(q)$. Here \begin{equation} \Psi^B_I := \frac{\Psi_I}{\prod_{i=2}^{r}(1+q^{i-1})} \end{equation} and $L^B(q)$ is defined by having $M_{L(q),\Psi}$ as transition matrix from the $\Psi^B$. Note also that this interpretation would immediately generalize to colored algebras with any number of colors and not only to two colors. \section{Appendix -- Conjectures} We define the {\it descent tops} (also called the {\it Genocchi descent set}) of a permutation $\sigma\in S_n$ as $\operatorname{GDes}(\sigma):=\{i\in [2,n] \ \vert \ \sigma(j)=i \Rightarrow \sigma(j+1) < \sigma(j) \}$. In other words, $\operatorname{GDes}(\sigma)$ is the set of values of the descents of $\sigma$. We also define the {\it Genocchi composition of descents} $\operatorname{GC}(\sigma)$ as the integer composition $I$ of $n$ whose descent set is $\{d-1 \ \vert \ d\in \operatorname{GDes}(\sigma) \}$. From Theorem 5.1 of~\cite{HNTT}, it is easy to see that $E_I^J(1)$ is equal to the number of packed words $w$ such that \begin{equation} \operatorname{GC}({\rm Std}(w))=I \qquad\text{and}\qquad {\rm ev}(w)=J. \end{equation} So $E_I^J(q)$ is the generating function of a statistic in $q$ over this set of words. We propose the following conjecture: \begin{conjecture} \label{conj-initx} Let $I$ and $J$ be compositions of $n$ and let $W''(I,J)$ be the set of packed words $w$ such that \begin{equation} \operatorname{GC}({\rm Std}(w))=I \qquad\text{and}\qquad {\rm ev}(w)=J. \end{equation} Then \begin{equation} E_I^J(q) = \sum_{w\in W''(I,J)} q^{{\rm totg}(w)}, \end{equation} where ${\rm totg}$ is the number of occurrences of the patterns $21-1$ and $31-2$ in $w$. 
\end{conjecture} {\footnotesize For example, the coefficient $2+2q+q^2$ in row $(3,1)$ and column $(2,1,1)$ comes from the fact that the five words $1132$, $1231$, $1312$, $2311$, and $3112$ respectively have $0$, $0$, $1$, $1$, and $2$ occurrences of the previous patterns. } There should exist a connection between the ${\rm sinv}$ statistic and the pattern counting on special packed words but we have not been able to find it. Note that packed words $w$ are in bijection with pairs $(\sigma,J)$ where $\sigma$ is a permutation and $J$ a composition finer than the recoil composition of $\sigma$. Since the patterns $31-2$ in ${\rm Std}(w)$ come from patterns $21-1$ or $31-2$ in $w$, Conjecture~\ref{conj-initx} is equivalent to \begin{conjecture} \label{cor-E} Let $I$ and $J$ be compositions of $n$ and let $P''(I,J)$ be the set of permutations $\sigma$ such that \begin{equation} \operatorname{GC}(\sigma)=I \qquad\text{and}\qquad \operatorname{DC}(\sigma^{-1})\raffi J. \end{equation} Then \begin{equation}\label{pattern-equation} E_I^J(q) = \sum_{\sigma\in P''(I,J)} q^{{\rm tot}(\sigma)}, \end{equation} where ${\rm tot}(\sigma)$ is the number of occurrences of the pattern $31-2$ in $\sigma$. \end{conjecture} If we apply Sch\"utzenberger's involution to permutations, that is, $\sigma\mapsto \omega\sigma\omega$, where $\omega=n\cdots 21$ (also known as taking the {\it reverse complement}), the statistic descent tops is transformed into descent bottoms, and patterns $31-2$ are transformed into patterns $2-31$. In that case it follows from results of \cite{SW} that for $J={1^n}$, the sum in equation (\ref{pattern-equation}) gives the $q$-enumeration of permutation tableaux of a given shape. Therefore if we assume Conjecture~\ref{cor-E}, Theorem~\ref{Theorem2} implies the following. \begin{conjecture} \label{ptai} When $K=1^n$, \begin{equation} E_I^K(q) = {\rm PT}^A_I(q) = \sum_{J\raffi I} (-1/q)^{l(I)-l(J)} q^{-{\rm st'}(I,J)} \mathrm{QFact}_A(J). \end{equation} \end{conjecture} Going from $S(q)$ to $R(q)$ is simple, and allows us to reformulate Conjecture~\ref{ptai} as follows: \begin{conjecture} Let $I$ and $J$ be two compositions of $n$. Let ${\rm PP}'(I,J)$ be the set of permutations $\sigma$ such that $\operatorname{GC}(\sigma)=I$ and $\operatorname{DC}(\sigma^{-1})=J$. Then \begin{equation} F_I^J(q) = \sum_{\sigma\in {\rm PP}'(I,J)} q^{{\rm tot}(\sigma)}. \end{equation} \end{conjecture} {\footnotesize For example, the coefficient $1+q+q^2$ in row $(3,1)$ and column $(3,1)$ comes from the fact that the words $1243$, $1423$, $4123$ respectively have $0$, $1$ and $2$ occurrences of the pattern $31-2$. } \section{Tables} \label{sec-patt} Here are the transition matrices from $R(q)$ to $L(q)$ (the matrices of the coefficients $F_I^J(q)$) for $n=3$ and $n=4$, where the numbers have been replaced by the corresponding list of permutations having given recoil composition and ${\rm LC}$-composition. To save space and for better readability, $0$ has been omitted. 
\begin{equation} \begin{array}{|c||c|c|c|c|} \hline \text{\rm ${\rm LC}\backslash Rec$}& 3 & 21 & 12 & 111 \\[.1cm] \hline \hline 3 & 123 & & & \\[.1cm] \hline 21 & & \empd{132}{312} & 213 & \\[.1cm] \hline 12 & & & 231 & \\[.1cm] \hline 111 & & & & 321 \\[.1cm] \hline \end{array} \end{equation} \begin{equation} \begin{array}{|c||c|c|c|c|c|c|c|c|} \hline \text{\rm ${\rm LC}\backslash Rec$}& 4 & 31 & 22 & 211 & 13 & 121 & 112 & 1111 \\[.1cm] \hline \hline 4& 1234 & & & & & & & \\[.1cm] \hline 31& & \empd{1243,\ 1423}{4123} & \empd{1324}{3124} & & 2134 & 2143 & & \\[.1cm] \hline 22& & & \empd{1342}{3142} & & 2314 & & & \\[.1cm] \hline 211& & & 3412 & \empd{1432,\ 4132}{4312} & & \empd{2413}{4213} & 3214 & \\[.1cm] \hline 13& & & & & 2341 & & & \\[.1cm] \hline 121& & & & & & \empd{2431}{4231} & 3241 & \\[.1cm] \hline 112& & & & & & & 3421 & \\ \hline 1111& & & & & & & & 4321 \\ \hline \end{array} \end{equation} \footnotesize \end{document}
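As a quick machine check of the worked example for ${\rm PT}^B_{(1,2,0)}(q)$ given above (this snippet is our addition, not part of the paper; the helper name qint is an assumption), the displayed expression can be expanded with sympy and agrees with the stated answer:

    import sympy as sp

    q = sp.symbols('q')

    def qint(n):
        # q-integer [n]_q = 1 + q + ... + q^(n-1)
        return sum(q**i for i in range(n))

    # the explicit expression displayed for PT^B_{(1,2,0)}(q)
    PT_B_120 = q**-2 * (q**-1 * qint(3) * qint(2)**2 * (1 + q) * (1 + q**2)
                        - q**-1 * qint(2)**3 * (1 + q)
                        - qint(2) * (1 + q)
                        + 1)
    print(sp.expand(PT_B_120))   # -> q**4 + 4*q**3 + 8*q**2 + 10*q + 6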
arXiv
Sub-seasonal Levee Deformation Observed Using Satellite Radar Interferometry to Enhance Flood Protection
Işıl E. Özer ORCID: orcid.org/0000-0002-2022-11481, Stephan J. H. Rikkert ORCID: orcid.org/0000-0002-5073-51231, Freek J. van Leijen ORCID: orcid.org/0000-0002-2582-92671, Sebastiaan N. Jonkman ORCID: orcid.org/0000-0003-0162-82811 & Ramon F. Hanssen ORCID: orcid.org/0000-0002-6067-75611
Scientific Reports volume 9, Article number: 2646 (2019)
Levees are critical in providing protection against catastrophic flood events, and thus require continuous monitoring. Current levee inspection methods rely on limited information obtained by visual inspection, resulting in infrequent, localized, mostly qualitative and subjective assessments. This hampers the timely detection of problematic locations and the assessment of levee safety in general. Satellite radar interferometry yields weekly observations of levee conditions with high precision which complement current inspection methods. Here we show that levees are susceptible to short-term swelling and shrinkage associated with meteorological conditions, and assess how deformations can be related to the geohydrological properties and the safety of the levee. Our findings allow to understand the sub-seasonal behaviour of the levee in greater detail and to predict swelling and shrinkage due to variation of the loading conditions. This will improve the detection of anomalous levee responses which contributes to the development of reliable early warning methods using continuous deformation monitoring.
Failures of flood defence systems often lead to significant human, economic, social, and environmental losses, with the risk being highest in densely populated areas1,2. Earthen levees form a large part of the existing flood defence systems. Depending on the natural or man-induced driving mechanisms, levees can fail due to hydraulic failures (e.g., overtopping) and/or geotechnical failures (e.g., instability, erosion)3,4. Consequently, the monitoring of these levees is critical in achieving safety standards and avoiding catastrophic flooding events. 
Being able to identify if, where, and when a failure would suddenly occur is an important aspect to consider with respect to safety. However, current conventional levee inspection methods mostly rely on expert observers3,5, which result in infrequent, qualitative and labour intensive assessments6,7,8. During these inspections, the integrity of the structures is assessed using visual inspection parameters, in order to check the presence of any damage, cracks, seepage, animal burrows, or irregular vegetation on the levees9. Thus, visual inspection by expert's judgement is not effective in detecting small (mm- to cm-level) and gradual changes in the structural behaviour of the levee10, which may be indicative of an imminent failure. Common remote sensing (e.g. LiDAR, thermal infrared) and in-situ monitoring methods (e.g. levelling, GNSS, creep/strain meters) are also costly and time-consuming, and are therefore usually only applied to locations considered to be at high risk by a visual inspection. Hence, there is a need for innovative and cost-effective complementary techniques, especially in countries with an extensive amount of flood defence infrastructure, such as the Netherlands, China, UK, and US among many others. These techniques could be most beneficial if they contribute to detecting which locations are most prone to sudden failures and estimating far in advance whether a levee would fail. This can be done by analysing the response of a levee under normal loading conditions in order to identify locations at risk of failing during extreme loading conditions, such as storm, high river discharge or drought. The deformation behaviour of a levee can be divided into two main categories. Long-term, interannual deformation, such as the subsidence of the levee occurring over a period of years, mainly depends on the type and composition of the soil and can be considered irreversible. Apart from this, levees show short-term, sub-seasonal deformation, e.g. due to changing water levels, precipitation, and temperature, occurring over periods of days to weeks depending on the soil and loading conditions. The change in soil volume due to variations in soil moisture content is denoted as the swelling and shrinking behaviour of the soil11, which has been studied for different soil types, such as clay12,13, peat14,15,16 and others17,18,19. Understanding and monitoring levee behaviour over an extended area, however, requires the availability of frequent deformation data with high resolution4, which are not provided by current inspection methods. In recent years, satellite radar interferometry, also known as Interferometric Synthetic Aperture Radar (InSAR)20,21, has become an efficient tool to monitor the surface displacements, referred to as deformations in the rest of the paper. The technique provides millions of observations with meter-level spatial resolution and millimetre-level measurement precision at reasonably low costs2,4. SAR satellites allow day and night monitoring of the Earth's surface in all weather conditions, with up to a sub-weekly repeat interval. Persistent Scatterer InSAR (PS-InSAR) is a processing technique to estimate deformation time-series of points in the interferometric data stack with a coherent reflection over time20. It has been successfully applied on urban areas22, railways23, dams24, highways25, landslides26, tectonic movements27, and land subsidence28. Applications on levees have been limited to interannual deformations to monitor subsidence of the levees4,7,29,30,31. 
However, as most geotechnical failure mechanisms are related to dynamic levee responses to changes in loading conditions happening on a time scale of days to weeks4, a better understanding of the short-term behaviour of levees is required. Understanding the levee responses in normal conditions can then help to detect or predict the levee response to more extreme conditions, which would increase our capability of detecting anomalies that could identify unsafe situations. In this study, we assess how sub-seasonal patterns due to swelling and shrinkage can be identified from continuous levee deformation observations obtained with PS-InSAR, and we analyse how these patterns are related to meteorological variations and levee safety. By determining whether the observed deformation is in line with the response predicted from loading conditions experienced by levees and relating it to geohydrological properties of levees, it would become possible to identify problematic locations and apply the appropriate countermeasures. Here we focus on earthen canal levees in Delft, located in the Netherlands, where almost 12 million people live in flood prone areas, and reliable flood defences are essential to prevent catastrophic flood events32. Modelling the Swelling and Shrinkage Behaviour of Levees Swelling and shrinkage result from changes in the pore water pressures inside the levee, which are due to variations in hydrological loading conditions. When the soil saturates, the pore water pressure in the soil increases, reducing effective stresses in the soil matrix and results in swelling. In turn, a reduction in pore pressures due to drying leads to shrinkage of the soil33. The swelling and shrinkage behaviour of the soil is especially relevant for the safety of the canal levees. Water levels in these canals are fairly constant and typically exceed surface levels of adjacent polders, posing a continuous flooding threat to the hinterland, even under normal conditions. These canal levees were often built centuries ago and strengthened several times using local peat and clay, amply available materials in the Netherlands34. Changes in precipitation and temperature can lead to significant swelling and shrinkage behaviour of these types of soil. In addition, cracks can form when the levee dries and the soil shrinks. Through these cracks water can enter the levee, reducing the soil strength. Another concern is that other materials or debris may enter the crack, preventing it from closing properly when the soil returns to a wet condition again3. Hence, the resulting changes in the geohydrology of the levee, which is loaded by a fairly constant water level, can directly lead to instability and failure. Many failures of the canal levees in the Netherlands have been recorded due to the heavy rainfall, e.g. a failure close to Wilnis in 187435 or extreme warm and dry weather, such as failures in Zoetermeer in 1947, Oostzaan near Amsterdam in 1990, Bleiswijk in 199036 and near Wilnis in 2003. Hence, extreme conditions, i.e. too dry (high temperature and low precipitation) or highly saturated soil (mainly heavy precipitation with low temperatures) cause a reduction in the soil strength of the levee, which can lead to a failure. In order to analyse the deformation behaviour of the canal levees, we used 168 images recorded from the TerraSAR-X satellite, covering the case study area (see Methods for the details). 
We first estimate the deformation time-series of each measurement point (hereafter called PS point) on the canal levees of Delft using the PS-InSAR technique21,37 with an approach based on geodetic estimation theory38. Deformation time-series are created spanning a period of 6 years (2009–2015) (Fig. 1(a)). Although the results are visualized by linear deformation velocity [mm/year], every PS point has a complete time-series of deformation estimates in the Line-of-Sight (LOS) direction. Analysis of deformation behaviour of the levees in Delft, the Netherlands. (a) Deformation behaviour of the canal levees has been analysed based on data acquired by TerraSAR-X, descending orbit (2009–2015) and visualized in the deformation velocity [mm/year] map. (b) A part of the levee segment in the monitored area. (c) A comparison is given between time-series of observed deformation d(t) and estimated deformation, \(\hat{d}(t)\) using the steady-state model with an MSE of 4 mm2 and the vPT-model with an MSE of 2.1 mm2 for this specific PS point. The period of summer 2011 is shown in a dashed rectangle. (d) Cumulative precipitation [mm] and average temperature [°C] data used in the vPT-model. The figure was generated using the QGIS software, (version 2.18.27, https://qgis.org). The background image from Map data ©2015 Google is added using the QGIS QuickMapServices (version 0.19.10.1, https://plugins.qgis.org/plugins/quick_map_services/). To examine the swelling and shrinkage behaviour of the levees, we developed a predictive deformation model, called vPT-model (see Methods for the details), considering the meteorological loading conditions (i.e., precipitation and temperature) as indirect indicators of the water content inside the levee. The model is used to analyse the swelling and shrinkage behaviour, and to assess whether these sub-seasonal patterns can be identified in the deformation time-series. Figure 1(b) shows a part of the levee segment in the monitored area, and an associated time-series. A comparison between the time-series of observed deformation from satellite, d(t) and the deformation \(\hat{d}(t)\) estimated using the vPT-model, is given in Fig. 1(c) for a random PS point and compared with the steady-state model. It can be seen how the steady-state model only describes the interannual subsidence trend, whereas the vPT-model, which uses the meteorological data shown in Fig. 1(d), also follows the sub-seasonal swelling-shrinkage variations of the deformation time-series. Nevertheless, deviations from the vPT-model occur, e.g., in December 2010 and in summer 2011, see Fig. 1(c). For example, the summer period of 2011, indicated by a dashed rectangle in Fig. 1(c), was very dry. During this period, surface of the levees was sprayed with water in order to avoid excessive drying of the soil due to the extreme drought conditions. This situation may explain the unexpected deformation of the levee which showed a swelling behaviour not predicted by the model. Application of the vPT deformation model The developed vPT-model has been applied on each PS point along the canal levees in Delft, comprising of 1184 PS points. To assess how well the vPT-model describes the deformation compared to the steady-state model, we apply hypothesis testing to both models independently. 
Firstly, the significance of the vPT-model is tested using an Overall Model Test (OMT) (see Methods, for the details) and compared with the steady-state model for different values of the assumed variance of the observations, \({\sigma }_{d}^{2}\). A lower value for \({\sigma }_{d}^{2}\) increases the value of the test statistic and thus the probability that the null hypothesis, H0, is rejected. The results are given in Fig. 2, showing the percentage of PS points for which H0 is sustained. It can be seen that, already for \({\sigma }_{d}^{2}=9\,{{\rm{mm}}}^{2}\), the vPT-model is providing a higher number of PS time-series that can be well modelled compared to the steady-state model. By decreasing the value of \({\sigma }_{d}^{2}\), the difference in significance between the two models gets even larger. This higher significance is the result of the improved modelling capability of the vPT-model, which results in better estimates of the observed deformation time-series. Overall Model Test (OMT) results. Percentage of PS points for which vPT-model and steady-state model are sustained for different values of variance of unit weight, σ2. In order to quantify this improved modelling capability, we then evaluate the quality of the estimations by calculating the mean square error (MSE) for each PS time-series. In Fig. 3, the MSE value per PS point for the entire area is given for both steady-state and vPT-model. The reduction in the MSE for the vPT-model compared to the steady-state model can be clearly observed on the two maps, with green points representing the PS time-series showing a low MSE, and red points representing PS time-series giving high MSE. The MSE values are also given in the histograms, which illustrates the error distribution for the PS points considered. The comparison between the two distributions highlights a clear shift of the MSE towards lower values. Modelling results for the deformation behavior of the canal levees in Delft, the Netherlands, cf. Fig. 1. MSE values per PS point and distribution of MSE are shown for (a) steady-state model, (b) vPT-model. The comparison between the two maps shows how the vPT-model generally provides a lower MSE over the entire levee structure. The reduction in the MSE also allows to assume a lower variance \({\sigma }_{d}^{2}\) for the observations. For instance, assuming an a-priori variance of 7 mm2, the vPT-model gives 79% of the PS points with \({\rm{MSE}} < {\sigma }_{d}^{2}\), while the steady-state model gives only 39%. Hence, for a large number of points, the deformation data can be modelled using the vPT-model with higher quality of the estimations. Thus, for those points that have low MSE, the vPT-model can also provide better indications of the swelling and shrinkage patterns. However, deformation estimations with low precision (e.g. MSE above 7 mm2) are still included in the results. In general, a high MSE could be related to several factors, such as radar signal decorrelation20,38 (e.g., vegetation, maintenance), or unmodelled deformation behaviour due for instance to problematic locations or soil compositions responding differently to meteorological changes4. Relation with soil types of the levee The deformation behaviour of a levee depends on its soil characteristics. In order to assess whether any deformation pattern related to the type of soil can be observed from the model parameters, we focus on a levee segment (Fig. 4(a)) for which the soil profiles are provided by the local water authority, Water Board Delfland. 
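To make the model comparison above concrete, the following Python sketch (our illustration, not the processing code used in the study; array and function names are assumptions) fits the steady-state and vPT models to a single PS time-series by least squares and evaluates each fit with the per-point MSE and the Overall Model Test described in the Methods. Here d, t, P and T are equal-length arrays holding the LOS deformation [mm], acquisition times [days], 30-day cumulative precipitation [mm] and 10-day mean temperature [°C], with P and T assumed to be already shifted by the cross-correlation time delays.

    import numpy as np
    from scipy.stats import chi2

    def ols_fit(A, d):
        # ordinary least squares with uncorrelated, equal-variance observations (Q_d = sigma^2 * I)
        x_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
        e_hat = d - A @ x_hat                    # residual time-series
        return x_hat, e_hat

    def overall_model_test(e_hat, n_par, sigma2=3.0**2, alpha=0.01):
        # OMT: sustain H0 if the weighted sum of squared residuals stays below the
        # chi-square critical value with q = m - n degrees of freedom
        q_dof = e_hat.size - n_par
        T_q = (e_hat @ e_hat) / sigma2
        return T_q <= chi2.ppf(1 - alpha, q_dof)

    def compare_models(t, d, P, T):
        ones = np.ones_like(t, dtype=float)
        A_steady = np.column_stack([t, ones])      # d(t) = v*t + b
        A_vpt = np.column_stack([t, P, T, ones])   # d(t) = v*t + cP*P + cT*T + delta
        results = {}
        for name, A in (("steady-state", A_steady), ("vPT", A_vpt)):
            x_hat, e_hat = ols_fit(A, d)
            results[name] = {
                "parameters": x_hat,
                "MSE": float(np.mean(e_hat ** 2)),
                "H0_sustained": bool(overall_model_test(e_hat, A.shape[1])),
            }
        return results

Applied to every PS point in turn, the MSE values and OMT outcomes collected from compare_models would reproduce the kind of comparison summarized in Figs. 2 and 3.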
Given these soil profiles, we defined three specific levee locations given in Fig. 4, whose main soil type for the first 2 meters of depth are considered. Location A is predominantly made of clay, location B shows different mixtures of clay and sand, and location C has sand as its primary soil type. These three locations have been investigated further based on the vPT-model results for those PS points giving an MSE lower than 7 mm2. First, we compared the reaction time of the soil, i.e. time delay parameters τP and τT. Then, we analysed the scaling coefficients cP and cT, which are expected to provide an indication about the reaction magnitude of the different soil types. The relation between the soil types and the vPT-model parameters. The analysis of the estimated vPT-model parameters for those PS point with MSE lower than 7 mm2 within (a) the selected locations A, B and C (indicated by the rectangles) on the east levee in the study area with their predominant soil types. (b) time delay for P(t), τP [day], (c) time delay for T(t), τT [day], (d) scaling coefficient for P(t), cP [mm/mm], (e) scaling coefficient for T(t), cT [mm/°C]. The figure in (a) was generated using the QGIS software, (version 2.18.27, https://qgis.org). The background image from Map data ©2015 Google is added using the QGIS QuickMapServices (version 0.19.10.1, https://plugins.qgis.org/plugins/quick_map_services/). Considering the swelling-shrinkage behaviour of the types of soil, sandy soil is expected to react faster than clayey soil for the same amount of precipitation received due to larger porosity and higher hydraulic conductivity. On the other hand, clayey soil would react with bigger magnitude compared to the sandy soil due to the organic components in its composition. Large pore volumes in between sand particles would allow the water to drain quicker, which would result in smaller volume changes compared to clayey soil. In order to verify if these expected behaviours are observed in the parameters of the estimated vPT-model, Fig. 4(a) shows the different PS points within the selected locations (indicated by the rectangles), where the estimated values of the vPT-model parameters are given on a colour scale in Fig. 4(b–e). Figure 4(b) shows the values for the time delay τP related to the precipitation data. Location A shows longer τP compared to locations B and C. This is in accordance with the expected faster reaction of sandy soil compared to clayey soil. On the other hand, the values of the τT parameter (Fig. 4(c)) are almost always zero. This is an indication that the levee reacts almost instantaneously with respect to temperature (averaged over 10 days) regardless of the soil type. The few PS points showing higher values for τT are considered to be outliers, for which the parameters might have been poorly estimated due to some of the issues discussed before. The values of the scaling coefficient related to the precipitation data cP provide no clear indication about the reaction magnitude of the different soil types. A better idea is however provided by the scaling coefficient for the temperature cT in Fig. 4(e). In this case, it is more evident that the influence of the temperature on the clayey soil (location A) is stronger than on sandy soil (location C). In general, these results are in accordance with the expected behaviour of different soil types to precipitation and temperature changes, regardless of the fact that only a small number of PS points with low MSE is available in each location considered. 
The variability of the parameter values within the same locations is most likely due to differences in levee characteristics (e.g. different slopes), and heterogeneity of the soil profiles in between two boring measurement locations and other factors. More quantitative results and a better validation of the vPT-model for different soil types would thus require a larger amount of data available and of better quality, and more detailed information about the soil type at specific locations. Enhancing Flood Protection The strength of the proposed vPT-model lies in the ability of describing not only the interannual subsidence phenomenon, but also the sub-seasonal deformation behaviour using meteorological data, i.e., precipitation and temperature, as indirect indicators of the water content inside the levee. This also allows analysing the influence of the meteorological conditions on the swelling and shrinkage behaviour on a weekly basis, thus enabling predictions of the expected behaviour of the levee due to variations in the loading conditions. Being able to model and predict the sub-seasonal behaviour of a levee based on recorded extreme meteorological events would thus increase our ability of identifying critical situations. In addition, the observed deformation can also be related to changes in geohydrological properties of a levee, such as the moisture content, weight and phreatic line (i.e. groundwater table in an unconfined aquifer). An instability or sliding of the landside slope can in fact occur when the levee loses weight due to drying and/or due to changes in effective stress in the soil. In order to illustrate the potential of satellite monitoring to detect stability problems, we consider the canal levee failure that occurred near Wilnis, the Netherlands, in 2003, as an example. Extreme drying out of this peat levee during the hot summer led to its loss of weight, triggering levee instability which consequently lead to flooding of the neighbourhood behind it39. Using the documented information by Van Baars39 on the weight changes from saturated to unsaturated conditions of the levee and assuming isotropic deformation, we roughly estimate 2 to 10% of shrinkage on the crest before the actual failure. In case of a deformation mostly on the vertical direction, this range can increase up to 20% during a dry period of 100 days (see Methods for the details). Expected deformations in the first phases of the drying process are estimated to be already in the range of few centimetres. While no SAR satellite data were available in the period of failure, this range of deformation is well within the observability capabilities of current SAR sensors and PS-InSAR algorithms. The monitoring and modelling of this kind of deformation behaviour, which normally takes place over a period of several weeks to months and eventually leads to a failure, could help to flag the extreme shrinkage of the levee. This would allow levee managers to apply timely countermeasures, such as watering or installing stability berms, to prevent instability. A related possible future application of the proposed vPT-model is the detection of deformation anomalies in the framework of an early warning system. For example, when the soil has dried, the shrinkage and the volume change may result in cracks in the soil. In this case, the deformation behaviour of the levee will deviate from the expected deformation. 
This particular event would then be regarded as an anomaly with respect to the normal behaviour of the levee, which should be an indication for a potential weakness and a call for more in-depth analysis of the situation. It is noted that levee deformation and its effect on safety will be highly dependent on local soil and geohydrological conditions of the levee. Further research on these aspects would need to address the relationship between deformation and geohydrological levee properties and the effects on levee stability for a number of representative situations. In conclusion, this study shows that (a) sub-seasonal deformations obtained from monitoring a levee with the PS-InSAR technique can be observed on the time scale of weeks, that (b) these deformations are strongly correlated with the changes in meteorological conditions, that (c) deformation changes in time can be estimated using a relatively simple regression model, and that (d) deformations can be directly related to geohydrological properties and the safety of the levee. Even though the examples are given for the Netherlands, this technology is applicable to other parts of the world, thus supporting levee management especially in countries with extensive flood defence systems. Findings of this study will assist the future development of reliable early warning methods using continuous deformation monitoring, thus enhancing flood protection. This paper focuses on a 10 × 10 km area south of Delft, the Netherlands. The levees used in this study are regional flood defences, as they are situated along regional rivers and canals. The canals are used to drain excess water from the lower-lying polders to the main rivers and the sea. Precipitation [mm/day] and temperature [°C] data are obtained from meteorological station 344 from the Royal Dutch Meteorological Institute near Rotterdam. Both precipitation and temperature are measured hourly with electronic sensors with a precision of 0.1 mm and 0.1 °C, respectively. The distance from the meteorological station to the study area is approximately 4 km. Hence, the measured data are expected to differ slightly from the meteorological conditions at the study area. However, since cumulative and average meteorological values are used in our model, the effect is assumed to be negligible. Soil data Soil profiles were obtained from borings performed in 2011 by Water Board Delfland at 17 different locations along the levee. Taking into account the non-uniformity of the soil compositions and the changes in soil moisture content of unsaturated zone, the dominant soil type from 0 to 2 meters below the surface level is being considered. Deformation data from PS-InSAR processing The case study area has been monitored using data from TerraSAR-X to estimate the deformation time-series of each PS point between 8 April 2009 and 8 January 2015. This satellite provides X-band high resolution data with a wavelength of 31 mm, 3 × 3 m pixel size and a repeat cycle of 11 days. The main principle of satellite radar imaging can be described as follows. Radar sensors transmit pulses of high-frequency electromagnetic waves from space to Earth and record the strength and the fractional phase of the back-scattered signals that are reflected from the surface to construct SAR images. By interfering at least two radar images acquired at different times over the same location, the combined effect of surface deformation, topography and atmospheric signal delay is obtained. 
In order to estimate and isolate the surface deformation from the other phase contributions, a large stack of SAR images acquired by the same satellite is analysed by interferometric time-series methods. All deformations are projections of the real deformation [mm] onto the Line-of-Sight (LOS) direction. This direction from satellite to object is determined by the heading angle of the satellite, αh, and the incidence angle of the radar, θinc. Various time-series processing techniques can be applied to estimate the deformations from the satellite data. The most suitable approach depends on a number of factors, such as the number of available radar images, satellite characteristics, the area of interest (e.g., surface cover), and the expected deformation signal. Regardless of the specific approach used, PS-InSAR analysis typically includes three main steps; 1) stack processing: creating the multiple interferograms from complex data, 2) PSI analysis: detecting Persistent Scatterers (PS) and separating deformation phase from other contributions (such as topography, atmospheric delay20) and 3) quality assessment: evaluating the quality of the results38. In this study, the interferometric stack processing of the radar data has been performed using the Delft Object-oriented Radar Interferometric Software (DORIS)40. The Delft implementation of Persistent Scatterer InSAR (DePSI)38 has been applied on 168 TerraSAR-X strip-map images in order to estimate the Line-of-Sight (LOS) deformation time-series. The main principles of PS-InSAR and a general overview of the past studies can be found in a review41, whereas another study4 discusses the applicability of the technique to continuous levee monitoring. Deformation Modelling In order to describe this deformation behaviour of earth-filled levees, we consider its relation with respect to those meteorological data, i.e. precipitation and temperature, which are expected to give an indication of soil moisture changes. For this reason, the steady-state model, which considers the interannual trend, due to the long-term irreversible behaviour of the levee (e.g. subsidence), is extended with the introduction of precipitation, P, and temperature, T, time-series. In this way, it is also possible to evaluate the sub-seasonal and reversible behaviour of the levee, i.e. its swelling and shrinkage. Hence, the proposed model, hereafter called vPT-model, is defined as $$d(t)={d}_{{\rm{V}}}(t)+{d}_{{\rm{PT}}}(t),$$ where the first term corresponds to the steady-state model, $${d}_{{\rm{V}}}(t)=v\cdot t+b,$$ with v the slope in [mm/day] and b the intercept in [mm] of the long-term linear trend. This intercept accounts for the atmospheric signal delay and scattering noise in the master acquisition, which is common in all single-master interferograms38. The second term describes the swelling-shrinkage behaviour of the levee as a linear combination of precipitation and temperature time-series. We expect the soil to react to variations in precipitation and temperature only after a certain period of time. This requires a regression model which includes a time delay τ between the meteorological data and the observed levee deformation. 
The second term of equation (1) is then defined as $${d}_{{\rm{PT}}}(t)={c}_{P}(P(t-{\tau }_{P})-{\delta }_{P})+{c}_{T}(T(t-{\tau }_{T})-{\delta }_{T}),$$ where the time-series at time t are indicated as d(t) in [mm] for deformation, P(t) for the cumulative precipitation, in [mm], over a time interval ΔtP, starting at t − ΔtP and ending at t, and T(t) for the average temperature, in [°C], over a time interval ΔtT, starting at t − ΔtT and ending at t. The offsets for precipitation and temperature time-series are represented by δP in [mm] and δT in [°C], respectively, while the time delay parameters for P(t) and T(t) with respect to d(t) are denoted as τP and τT, with their units in [day]. Lastly, cP in [mm/mm] and cT in [mm/°C] are the scaling coefficients of the linear combination, between d(t) and P(t) and between d(t) and T(t), respectively. Model parameter estimation Soil deformation is expected to result from cumulative and smooth variations in precipitation and temperature. For this reason, the mean temperature and the cumulative precipitation data over time periods ΔtT = 10 days and ΔtP = 30 days are considered, respectively. The time period for the cumulative precipitation was chosen to be longer than the time resolution of the deformation data to take into account for the long-term effect of precipitation. The vPT-model in equation (1) can be simplified as $$d(t)=v\cdot t+{c}_{P}(P(t-{\tau }_{P}))+{c}_{T}(T(t-{\tau }_{T}))-\delta $$ where d(t), P(t) and T(t) are the deformation and meteorological data preprocessed as described above, and the global offset coefficient is defined as δ = (cPδP + cTδT − b). This model is non-linear due to the products cPτP and cTτT. For this reason, we use the cross-correlation method42 to estimate the τP and τT parameters. This approach is used to shift P(t) and T(t) with respect to d(t) (after removing the steady-state trend v⋅t) and to compare the two records at each possible time delay, where \({\hat{\tau }}_{P}\) and \({\hat{\tau }}_{T}\) are selected as the values providing the maximum absolute value in the cross-correlation function. Hence, the precipitation and temperature time-series are aligned to the deformation data (i.e. shifted by \({\hat{\tau }}_{T}\) and \({\hat{\tau }}_{P}\), respectively). After the time alignment, the vPT-model is simplified as $$d(t)=v\cdot t+{c}_{P}\tilde{P}(t)+{c}_{T}\tilde{T}(t)+\delta ,$$ where the aligned time-series are defined by $$\tilde{P}(t)=P(t-{\hat{\tau }}_{P}),\,\,\tilde{T}(t)=T(t-{\hat{\tau }}_{T}).$$ The optimal values for the linear parameters of the vPT-model are estimated by minimizing the mean square error between the deformation data and the model estimate $$\mathop{min}\limits_{x}||{\boldsymbol{d}}-{\boldsymbol{Ax}}{||}_{2}^{2},$$ $${\boldsymbol{x}}=[v,{c}_{P},{c}_{T},\delta ]^{\prime} ,\,{\boldsymbol{A}}=[{\boldsymbol{t}},\tilde{{\boldsymbol{P}}},\tilde{{\boldsymbol{T}}},{\bf{1}}]^{\prime} $$ where d is the m × 1 vector containing the LOS deformation observations, A is the m × n design matrix whose columns are the time vector t, the vectors \(\tilde{{\boldsymbol{P}}}\) and \(\tilde{{\boldsymbol{T}}}\) containing respectively the aligned precipitation \(\tilde{P}\) and temperature data \(\tilde{T}\), 1 is a vector containing only ones, and x is the n × 1 vector of the model parameters (the symbol ′ indicates the transpose). 
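Before the least-squares solution given next, the delay-alignment step described above can be sketched as follows (our illustration, not the authors' code; names are assumptions and the handling of the series edges is simplified): the steady-state trend is removed from the deformation series, each candidate lag is applied to the meteorological series, and the lag that maximizes the absolute cross-correlation is retained.

    import numpy as np

    def estimate_delay(d_detrended, series, t, candidate_lags_days):
        # pick the delay maximizing |correlation| between the detrended deformation
        # and the lagged meteorological series (the cross-correlation step of the vPT-model)
        best_lag, best_score = 0, -np.inf
        for lag in candidate_lags_days:
            shifted = np.interp(t - lag, t, series)   # series evaluated lag days earlier
            score = abs(np.corrcoef(d_detrended, shifted)[0, 1])
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag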
The estimated optimal values of x are then given by the least squares solution43 as $$\hat{{\boldsymbol{x}}}={({\boldsymbol{A}}{\boldsymbol{^{\prime} }}{{\boldsymbol{Q}}}_{d}^{-1}{\boldsymbol{A}})}^{-1}{\boldsymbol{A}}{\boldsymbol{^{\prime} }}{{\boldsymbol{Q}}}_{d}^{-1}{\boldsymbol{d}},$$ where the covariance matrix Qd specifies the dispersion of the measured deformation data. The observations in the time-series are assumed to be uncorrelated, each having a fixed variance of unit weight \({\sigma }_{d}^{2}\). Thus, the variance matrix can be factorized as \({{\boldsymbol{Q}}}_{d}={\sigma }_{d}^{2}{{\boldsymbol{I}}}_{m}\), with Im an m × m identity matrix. Once the model parameters are obtained as explained above, they are used in the vPT-model of equation (4) to obtain the estimated (adjusted) deformation time-series \(\hat{{\boldsymbol{d}}}={\boldsymbol{A}}\hat{{\boldsymbol{x}}}\). The error between the measured deformation time-series and the estimated one is given by the residual \(\hat{{\boldsymbol{e}}}={\boldsymbol{d}}-\hat{{\boldsymbol{d}}}\) and the quality of the estimation is then evaluated by the mean square error (MSE). Hypothesis testing: Overall Model Test (OMT) The Overall Model Test (OMT)44 is used to check the validity of the models. Testing is usually performed by comparing a null hypothesis, H0, versus an alternative hypothesis, Ha, where H0 represents the model under investigation and Ha corresponds to the case where no restrictions are imposed on the observations, as in $$\begin{array}{cc}{H}_{0}:E\{{\boldsymbol{d}}\}={\boldsymbol{Ax}}, & D\{{\boldsymbol{d}}\}={{\boldsymbol{Q}}}_{d}\\ {H}_{a}:E\{{\boldsymbol{d}}\}\in {{\mathbb{R}}}^{m}, & D\{{\boldsymbol{d}}\}={{\boldsymbol{Q}}}_{d},\end{array}$$ where E{⋅} and D{⋅} denote expectation and dispersion, respectively. To reject or sustain H0 depends on the test statistic, $${T}_{q}={{\hat{{\boldsymbol{e}}}}^{{\rm{^{\prime} }}}}_{0}{{\boldsymbol{Q}}}_{d}^{-1}{\hat{{\boldsymbol{e}}}}_{0},$$ which follows a central χ2-distribution with q = m − n degrees of freedom, and corresponds to a weighted sum-of-squares of the least squares residual vector \({\hat{{\boldsymbol{e}}}}_{0}\) under H0. Given a chosen level of significance α, the critical value kα follows from the χ2-distribution to test the null hypothesis, $${\rm{reject}}\,{H}_{0}\,{\rm{if}}\,{T}_{q} > {k}_{\alpha }\mathrm{.}$$ Rejection of H0 indicates that the deformation behaviour is not significantly well described by the chosen model, given the assumed level of significance. For InSAR time-series, α is typically defined in the range of 0.2% < α < 2%, as we prefer to stick to a relatively simple null hypothesis if possible45. In this study, we assumed an α value of 1%, but in practice this is a decision to be taken by the local authorities. The variance of unit weight for TerraSAR-X with a 31 mm wavelength is conservatively assumed to be \({\sigma }_{d}^{2}={3}^{2}\,{{\rm{mm}}}^{2}\) 38,45. Deformation estimation for the levee failure at Wilnis, the Netherlands In the assessment of the failed levee at Wilnis39, two scenarios for changes of the phreatic line in the peat levee are considered: (a) fully saturated (phreatic line is at the crest), and (b) unsaturated (phreatic line drops 1 m below the crest level). Relevant information and the cross section of the levee can be found in the original study39. Here we estimate the expected deformation of the crest in case the phreatic line drops from scenario (a) to scenario (b), using the data documented by Van Baars39. 
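The weight-volume bookkeeping behind this estimate is worked out in the next paragraph; as a compact companion (our sketch, with variable names chosen for illustration), the same numbers can be reproduced as follows:

    # rough shrinkage estimate for the Wilnis peat levee, using the reported values
    gamma_w = 9.81        # unit weight of water [kN/m3]
    gamma_sat = 11.0      # unit weight of saturated peat [kN/m3]
    gamma_unsat = 5.0     # unit weight of unsaturated peat [kN/m3]
    V_sat = 1.0           # reference volume of saturated soil [m3]
    theta_sat = 7.0       # gravimetric water content when saturated (700%)
    theta_unsat = 2.0     # gravimetric water content after ~100 dry days (200%)

    W_sat = gamma_sat * V_sat                 # total weight of the saturated soil [kN]
    W_solids = W_sat / (1 + theta_sat)        # ~1.38 kN
    W_water_sat = W_sat - W_solids            # ~9.63 kN
    V_water_sat = W_water_sat / gamma_w       # ~0.98 m3
    V_solids = V_sat - V_water_sat            # ~0.02 m3

    W_water_unsat = theta_unsat * W_solids    # ~2.75 kN
    V_water_unsat = W_water_unsat / gamma_w   # ~0.28 m3
    W_unsat = W_solids + W_water_unsat        # ~4.13 kN
    V_unsat = W_unsat / gamma_unsat           # ~0.825 m3
    dV = V_sat - V_unsat                      # ~0.175 m3 volume change

    isotropic_strain = 1 - (1 - dV / V_sat) ** (1 / 3)   # ~6% of the height, i.e. ~6 cm
    vertical_strain = dV / V_sat                         # ~17.5% if shrinkage is purely vertical
    print(round(dV, 3), round(isotropic_strain, 3), round(vertical_strain, 3))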
In scenario (a), the volume of saturated soil, Vsoil-sat = 1 m3. The gravimetric water content, Θ, is described as the ratio between the mass of the water, Mw, and the mass of solids, Msolids. For the saturated peat soil at the Wilnis levee, Θ was varying between 600% to 800%, and the unit weight of the saturated peat soil, γsat = 11 kN/m3, and the unit weight of unsaturated peat soil, γunsat = 5 kN/m3 39,46,47. The ratio between mass and volume of the solids is assumed to be constant. In the case of fully saturated soil (scenario (a)), the mass of soil, Msoil-sat, is equal to sum of mass of water, Mw-sat and mass of solids, Msolid-sat. Thus, for the given Θ of the saturated soil, Msoil-sat = γsat.Vsoil-sat = 11 kN. Using an average Θ of 700%, Mw-sat = 9.63 kN and Msolid-sat = 1.38 kN. Considering the unit weight of water, γw = 9.81 kN/m3, the volume of water in the saturated soil is calculated as Vw-sat = Mw-sat/γw = 0.98 m3 and the volume of solids in the saturated soil is then Vsolid-sat = Vsoil-sat − Vw-sat = 0.02 m3. After a dry period of approximately 100 days, soil samples taken from the crest of the levee show that Θ of the unsaturated soil (scenario (b)) was around 200%39,46. In this case, the mass of water, Mw-unsat reduces to 2.75 kN and the corresponding volume of water in the unsaturated soil is Vw-unsat = 0.28 m3. The mass of solids in the unsaturated soil remains unchanged, Msolid-unsat = 1.38 kN. Hence, the volume change of the soil, ΔV, between the two scenarios is estimated as 0.175 m3 in the case of a 1 m drop in the phreatic line with the reduction of Θ from 700% to 200%. This correspond to a deformation of 6% of the total height (i.e., 6 cm) assuming an isotropic shrinkage of the soil. For an initial Θ of 800% and 600%, about 2% to 10% of shrinkage can be observed, respectively. In case of an anisotropic deformation, mostly on the vertical direction, this range can increase up to approximately 20% in a drought period of 100 days. Jonkman, S. N. Global perspectives on loss of human life caused by floods. Nat. Hazards 34, 151–175 (2005). Özer, I. E., van Damme, M., Schweckendiek, T. & Jonkman, S. N. On the importance of analyzing flood defense failures. In Proc. 3rd Eur. Conf. Flood Risk Manag., vol. 7, 03013, https://doi.org/10.1051/e3sconf/20160703013 (Lyon, France, 2016). Sharp, M. et al. The International Levee Handbook (CIRIA, London, England, 2013). Özer, I. E., van Leijen, F., Jonkman, S. N. & Hanssen, R. F. Applicability of satellite radar imaging to monitor the conditions of levees. J. Flood Risk Manag. e12509, https://doi.org/10.1111/jfr3.12509 (2018). Cundill, S. L., van der Meijde, M. & Hack, H. R. G. Investigation of remote sensing for potential use in dike inspection. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 7, 733–746 (2014). Rijkswaterstraat. Hydraulische randvoorwaarden 2001 voor het toetsen van primaire waterkeringen. Tech. Report (in Dutch), Rijkswaterstraat, Delft, The Netherlands (2001). Hanssen, R. F. & van Leijen, F. J. Monitoring water defense structures using radar interferometry. In Proc. 2008 IEEE Radar Conf., 1–4 (Rome, Italy, 2008). Bakkenist, S. W. & Zomer, W. S. Inspectie van waterkeringen: een overzicht van meettechnieken. Tech. Report (in Dutch) 2010–31, STOWA, Amersfoort, The Netherlands (2010). Bakkenist, S. [dl.2]: Inspectiewijzers waterkeringen: technische informatie uitvoering inspecties. Tech. Report (in Dutch), STOWA, Amersfoort, The Netherlands (2012). Tarrant, O., Hambidge, C., Hollingsworth, C., Normandale, D. 
& Burdett, S. Identifying the signs of weakness, deterioration, and damage to flood defence infrastructure from remotely sensed data and mapped information. J. Flood Risk Manag. 10, 1–14 (2017). Peng, X. & Horn, R. Identifying six types of soil shrinkage curves from a large set of experimental data. Soil Sci. Soc. Am. J. 77, 372–381 (2013). Al-Homoud, A., Basma, A., Husein Malkawi, A. & Al Bashabsheh, M. Cyclic swelling behavior of clays. J. Geotech. Eng. 121, 562–565 (1995). Tripathy, S., Rao, K. S. & Fredlund, D. Water content-void ratio swell-shrink paths of compacted expansive soils. Can. Geotech. J. 39, 938–959 (2002). Price, J. S. & Schlotzhauer, S. M. Importance of shrinkage and compression in determining water storage changes in peat: the case of a mined peatland. Hydrol. Process. 13, 2591–2601 (1999). Camporese, M., Ferraris, S., Putti, M., Salandin, P. & Teatini, P. Hydrological modeling in swelling/shrinking peat soils. Water Resour. Res. 42 (2006). Gebhardt, S., Fleige, H. & Horn, R. Shrinkage processes of a drained riparian peatland with subsidence morphology. J. Soils Sediments 10, 484–493 (2010). Peng, X. & Horn, R. Modeling soil shrinkage curve across a wide range of soil types. Soil Sci. Soc. Am. J. 69, 584–592 (2005). Peng, X. & Horn, R. Anisotropic shrinkage and swelling of some organic and inorganic soils. Eur. J. Soil Sci. 58, 98–107 (2007). Leong, E. & Wijaya, M. Universal soil shrinkage curve equation. Geoderma 237, 78–87 (2015). Hanssen, R. F. Radar Interferometry: Data Interpretation and Error Analysis, vol. 2 (Springer Science + Business Media, Dordrecht, the Netherlands, 2001). Ferretti, A., Prati, C. & Rocca, F. Permanent scatterers in SAR interferometry. IEEE Trans. Geosci. Remote. Sens. 39, 8–20 (2001). Gernhardt, S., Adam, N., Eineder, M. & Bamler, R. Potential of very high resolution SAR for persistent scatterer interferometry in urban areas. Ann. GIS 16, 103–111 (2010). Chang, L., Dollevoet, R. P. & Hanssen, R. F. Nationwide railway monitoring using satellite SAR interferometry. IEEE. J. Sel. Top. Appl. Earth Obs. Remote. Sens. 10, 596–604 (2017). Perissin, D. & Wang, T. Time-series InSAR applications over urban areas in China. IEEE. J. Sel. Top. Appl. Earth Obs. Remote. Sens. 4, 92–100 (2011). Perissin, D., Wang, Z. & Lin, H. Shanghai subway tunnels and highways monitoring through Cosmo-SkyMed persistent scatterers. ISPRS J. Photogramm. Remote. Sens. 73, 58–67 (2012). Cigna, F., Bianchini, S. & Casagli, N. How to assess landslide activity and intensity with persistent scatterer interferometry (PSI): the PSI-based matrix approach. Landslides 10, 267–283 (2013). Hooper, A., Segall, P. & Zebker, H. Persistent scatterer interferometric synthetic aperture radar for crustal deformation analysis, with application to Volcán Alcedo, Galápagos. J. Geophys. Res. Solid Earth 112 (2007). Cigna, F. et al. Monitoring land subsidence and its induced geological hazard with synthetic aperture radar interferometry: A case study in Morelia, Mexico. Remote. Sens. Environ. 117, 146–161 (2012). Dixon, T. H. et al. Space geodesy: subsidence and flooding in New Orleans. Nature 441, 587–588 (2006). Barends, F., Dillingh, D., Hanssen, R. & van Onselen, K. Bodemdaling langs de Nederlandse kust: case Hondsbossche en Pettermer zeewering, (in Dutch) 4–5 (IOS Press, Amsterdam, the Netherlands, 2008). Brooks, B. A. et al. Contemporaneous subsidence and levee overtopping potential, Sacramento-San Joaquin Delta, California. San Francisco Estuary Watershed Sci. 10 (2012). 
Jorissen, R., Kraaij, E. & Tromp, E. Dutch flood protection policy and measures based on risk assessment. In Proc. 3rd Eur. Conf. Flood Risk Manag., vol. 7, 20016 (Lyon, France, 2016). Mitchell, J. K. et al. Fundamentals of Soil Behavior, vol. 3 (John Wiley & Sons, New York, US, 2005). TeBrake, W. H. Taming the waterwolf: hydraulic engineering and water management in the Netherlands during the Middle Ages. Technol. Cult. 43, 475–499 (2002). Grundmann, P. Een incident tijdens de droog making in 1874. (in Dutch). De Proostkoerier 3, 4–9 (1996). Vonk, B. Some aspects of the engineering practice regarding peat in small polders. In den Haan, E., Termaat, R. & Edil, T. (eds) Advances in understanding and modelling mechanical behaviour of peat, 389–402 (A.A. Balkema, Rotterdam, the Netherlands, 1994). Kampes, B. M. Radar interferometry, vol. 12 (Springer, Dordrecht, the Netherlands, 2006). van Leijen, F. J. Persistent scatterer interferometry based on geodetic estimation theory. Ph.D. thesis, Delft University of Technology, Delft, the Netherlands (2014). Van Baars, S. The horizontal failure mechanism of the Wilnis peat dyke. Géotechnique 55, 319–323 (2005). Kampes, B. M., Hanssen, R. F. & Perski, Z. Radar interferometry with public domain tools. In Proc. FRINGE 2003, vol. 3 (Frascati, Italy, 2003). Crosetto, M., Monserrat, O., Cuevas-González, M., Devanthéry, N. & Crippa, B. Persistent scatterer interferometry: A review. ISPRS J. Photogramm. Remote. Sens. 115, 78–89 (2016). Zabihi Naeini, E., Hoeber, H., Poole, G. & Siahkoohi, H. R. Simultaneous multivintage time-shift estimation. Geophysics 74, V109–V121 (2009). Teunissen, P. J. G. Adjustment Theory; An Introduction, first edn (VSSD, Delft, the Netherlands, 2000). Teunissen, P. J. G. Testing Theory; An Introduction, second edn (VSSD, Delft, the Netherlands, 2006). Chang, L. & Hanssen, R. F. A probabilistic approach for InSAR time-series postprocessing. IEEE Trans. Geosci. Remote. Sens. 54, 421–430 (2016). Dekker, J. et al. Achtergrondrapport sterkteonderzoek oorzaak kadeverschuiving Wilnis. Tech. Report (in Dutch) CO-411242-0025, GeoDelft (2004). Bezuijen, A., Kruse, G. & Van, M. Failure of peat dikes in the Netherlands. In Proc. Int. Conf. Soil Mech. Geotech. Eng., vol. 16, 1857 (Osaka, Japan, 2005). This research was performed as part of the NWO TTW project SAFElevee (project:13861). The authors would like to thank the German Aerospace Centre (DLR) for providing the TerraSAR-X data and Water Board Delfland for making the soil data available. Dr. Phil Vardon, Dr. Anne-Catherine Dieudonné and Dr. Giacomo Vairetti are acknowledged for their feedbacks. Delft University of Technology, Faculty of Civil Engineering and Geosciences, Stevinweg 1, 2628 CN, Delft, The Netherlands Işıl E. Özer, Stephan J. H. Rikkert, Freek J. van Leijen, Sebastiaan N. Jonkman & Ramon F. Hanssen Işıl E. Özer Stephan J. H. Rikkert Freek J. van Leijen Sebastiaan N. Jonkman Ramon F. Hanssen S.N.J. and R.F.H. conceived the project. I.E.Ö. conducted the data analysis. I.E.Ö., S.J.H.R., F.J.L., S.N.J. and R.F.H. interpreted the results. I.E.Ö. wrote the original paper while all the authors contributed significantly to the final version. Correspondence to Işıl E. Özer. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Özer, I.E., Rikkert, S.J.H., van Leijen, F.J. et al. Sub-seasonal Levee Deformation Observed Using Satellite Radar Interferometry to Enhance Flood Protection. 
Sci Rep 9, 2646 (2019). https://doi.org/10.1038/s41598-019-39474-x
CommonCrawl
Automated Hypersafety Verification. Azadeh Farzan and Anthony Vandikas. In: Computer Aided Verification (CAV 2019), pp 200–218. We propose an automated verification technique for hypersafety properties, which express sets of valid interrelations between multiple finite runs of a program. The key observation is that constructing a proof for a small representative set of the runs of the product program (i.e. the product of several copies of the program with itself), called a reduction, is sufficient to formally prove the hypersafety property about the program. We propose an algorithm based on a counterexample-guided refinement loop that simultaneously searches for a reduction and a proof of correctness for the reduction. We demonstrate that our tool Weaver is very effective in verifying a diverse array of hypersafety properties for a broad class of input programs. 1 Introduction A hypersafety property describes the set of valid interrelations between multiple finite runs of a program. A k-safety property [7] is a program safety property whose violation is witnessed by at least k finite runs of a program. Determinism is an example of such a property: non-determinism can only be witnessed by two runs of the program on the same input which produce two different outputs. This makes determinism an instance of a 2-safety property. The vast majority of existing program verification methodologies are geared towards verifying standard (1-)safety properties. This paper proposes an approach to automatically reduce verification of k-safety to verification of 1-safety, and hence a way to leverage existing safety verification techniques for hypersafety verification. The most straightforward way to do this is via self-composition [5], where verification is performed on k memory-disjoint copies of the program, sequentially composed one after another. Unfortunately, the proofs in these cases are often very verbose, since the full functionality of each copy has to be captured by the proof. Moreover, when it comes to automated verification, the invariants required to verify such programs are often well beyond the capabilities of modern solvers [26] even for very simple programs and properties. The more practical approach, which is typically used in manual or automated proofs of such properties, is to compose k memory-disjoint copies of the program in parallel (instead of in sequence), and then verify some reduced program obtained by removing redundant traces from the program formed in the previous step. This parallel product program can have many such reductions. For example, the program formed from sequential self-composition is one such reduction of the parallel product program. Therefore, care must be taken to choose a "good" reduction that admits a simple proof. Many existing approaches limit themselves to a narrow class of reductions, such as the one where each copy of the program executes in lockstep [3, 10, 24], or define a general class of reductions, but do not provide algorithms with guarantees of covering the entire class [4, 24]. We propose a solution that combines the search for a safety proof with the search for an appropriate reduction, in a counterexample-based refinement loop. Instead of settling on a single reduction in advance, we try to verify the entire (possibly infinite) set of reductions simultaneously and terminate as soon as some reduction is successfully verified.
If the proof is not currently strong enough to cover at least one of the represented program reductions, then an appropriate set of counterexamples are generated that guarantee progress towards a proof. Our solution is language-theoretic. We propose a way to represent sets of reductions using infinite tree automata. The standard safety proofs are also represented using the same automata, which have the desired closure properties. This allows us to check if a candidate proof is in fact a proof for one of the represented program reductions, with reasonable efficiency. Our approach is not uniquely applicable to hypersafety properties of sequential programs. Our proposed set of reductions naturally work well for concurrent programs, and can be viewed in the spirit of reduction-based methods such as those proposed in [11, 21]. This makes our approach particularly appealing when it comes to verification of hypersafety properties of concurrent programs, for example, proving that a concurrent program is deterministic. The parallel composition for hypersafety verification mentioned above and the parallel composition of threads inside the multi-threaded program are treated in a uniform way by our proof construction and checking algorithms. In summary: We present a counterexample-guided refinement loop that simultaneously searches for a proof and a program reduction in Sect. 7. This refinement loop relies on an efficient algorithm for proof checking based on the antichain method of [8], and strong theoretical progress guarantees. We propose an automata-based approach to representing a class of program reductions for k-safety verification. In Sect. 5 we describe the precise class of automata we use and show how their use leads to an effective proof checking algorithm incorporated in our refinement loop. We demonstrate the efficacy of our approach in proving hypersafety properties of sequential and concurrent benchmarks in Sect. 8. 2 Illustrative Example We use a simple program Mult, that computes the product of two non-negative integers, to illustrate the challenges of verifying hypersafety properties and the type of proof that our approach targets. Consider the multiplication program in Fig. 1(i), and assume we want to prove that it is distributive over addition. Program Mult (i) and the parallel composition of three copies of it (ii). In Fig. 1(ii), the parallel composition of Mult with two copies of itself is illustrated. The product program is formed for the purpose of proving distributivity, which can be encoded through the postcondition \(x_1 = x_2 + x_3\). Since a, b, and c are not modified in the program, the same variables are used across all copies. One way to prove Mult is distributive is to come up with an inductive invariant \(\phi _{ijk}\) for each location in the product program, represented by a triple of program locations \((\ell _i, \ell _j, \ell _k)\), such that \( true \implies \phi _{111}\) and \(\phi _{666} \implies x_1 = x_2 + x_3\). The main difficulty lies in finding assignments for locations such as \(\phi _{611}\) that are points in the execution of the program where one thread has finished executing and the next one is starting. For example, at \((\ell _6, \ell _1, \ell _1)\) we need the assignment \(\phi _{611} \leftarrow x_1 = (a + b) * c\) which is non-linear. However, the program given in Fig. 1(ii) can be verified with simpler (linear) reasoning. The program on the right is a semantically equivalent reduction of the full composition of Fig. 1(ii). 
Consider the program P = (Copy 1 || (Copy 2; Copy 3)). The program on the right is equivalent to a lockstep execution of the two parallel components of P. The validity of this reduction is derived from the fact that the statements in each thread are independent of the statements in the other. That is, reordering the statements of different threads in an execution leads to an equivalent execution. It is easy to see that \(x_1 = x_2 + x_3\) is an invariant of both while loops in the reduced program, and therefore, linear reasoning is sufficient to prove the postcondition for this program. Conceptually, this reduction (and its soundness proof) together with the proof of correctness for the reduced program constitute a proof that the original program Mult is distributive. Our proposed approach can come up with reductions like this and their corresponding proofs fully automatically. Note that a lockstep reduction of the program in Fig. 1(ii) would not yield a solution for this problem and therefore the discovery of the right reduction is an integral part of the solution. 3 Programs and Proofs A non-deterministic finite automaton (NFA) is a tuple \(A = (Q, \varSigma , \delta , q_0, F)\) where Q is a finite set of states, \(\varSigma \) is a finite alphabet, \(\delta \subseteq Q \times \varSigma \times Q\) is the transition relation, \(q_0 \in Q\) is the initial state, and \(F \subseteq Q\) is the set of final states. A deterministic finite automaton (DFA) is an NFA whose transition relation is a function \(\delta : Q \times \varSigma \rightarrow Q\). The language of an NFA or DFA A is denoted \(\mathcal {L}_{}(A)\), which is defined in the standard way [18]. 3.1 Program Traces \(\mathcal {S}t\) denotes the (possibly infinite) set of program states. For example, a program with two integer variables has \(\mathcal {S}t= \mathbb {Z} \times \mathbb {Z}\). \(\mathcal {A}\subseteq \mathcal {S}t\) is a (possibly infinite) set of assertions on program states. \(\varSigma \) denotes a finite alphabet of program statements. We refer to a finite string of statements as a (program) trace. For each statement \(a \in \varSigma \) we associate a semantics \(\llbracket a\rrbracket \subseteq \mathcal {S}t\times \mathcal {S}t\) and extend \(\llbracket -\rrbracket \) to traces via (relation) composition. A trace \(x \in \varSigma ^*\) is said to be infeasible if \(\llbracket x\rrbracket (\mathcal {S}t) = \emptyset \), where \(\llbracket x\rrbracket (\mathcal {S}t)\) denotes the image of \(\llbracket x\rrbracket \) under \(\mathcal {S}t\). To abstract away from a particular program syntax, we define a program as a regular language of traces. The semantics of a program P is simply the union of the semantics of its traces \(\llbracket P\rrbracket = \bigcup _{x \in P} \llbracket x\rrbracket \). Concretely, one may obtain programs as languages by interpreting their edge-labelled control-flow graphs as DFAs: each vertex in the control flow graph is a state, and each edge in the control flow graph is a transition. The control flow graph entry location is the initial state of the DFA and all its exit locations are final states. 3.2 Safety There are many equivalent notions of program safety; we use non-reachability. A program P is safe if all traces of P are infeasible, i.e. \(\llbracket P\rrbracket (\mathcal {S}t) = \emptyset \). Standard partial correctness specifications are then represented via a simple encoding. 
Given a precondition \(\phi \) and a postcondition \(\psi \), the validity of the Hoare-triple \(\{\phi \}P\{\psi \}\) is equivalent to the safety of \([\phi ] \cdot P \cdot [\lnot \psi ]\), where \([\cdot ]\) is a standard assume statement (or the singleton set containing it), and \(\cdot \) is language concatenation. We use determinism as an example of how k-safety can be encoded in the framework defined thus far. If P is a program then determinism of P is equivalent to safety of \([\phi ] \cdot (P_1 \mathbin {\shuffle } P_2) \cdot [\lnot \phi ]\), where \(P_1\) and \(P_2\) are copies of P operating on disjoint variables, \(\shuffle \) is a shuffle product of two languages, and \([\phi ]\) is an assume statement asserting that the variables in each copy of P are equal. A proof is a finite set of assertions \(\varPi \subseteq \mathcal {A}\) that includes \( true \) and \( false \). Each \(\varPi \) gives rise to an NFA \(\varPi _{NFA} = (\varPi , \mathcal {S}t, \delta _\varPi , true , \{ false \})\) where \(\delta _\varPi (\phi _{pre}, a) = \{ \phi _{post} \mid \llbracket a\rrbracket (\phi _{pre}) \subseteq \phi _{post} \}\). We abbreviate \(\mathcal {L}_{}(\varPi _{NFA})\) as \(\mathcal {L}_{}(\varPi )\). Intuitively, \(\mathcal {L}_{}(\varPi )\) consists of all traces that can be proven infeasible using only assertions in \(\varPi \). Thus the following proof rule is sound [12, 13, 17]: $$ \frac{P \subseteq \mathcal {L}_{}(\varPi )}{P \text { is safe}} \;(\textsc {Safe}) $$ When \(P \subseteq \mathcal {L}_{}(\varPi )\), we say that \(\varPi \) is a proof for P. A proof does not uniquely belong to any particular program; a single \(\varPi \) may prove many programs correct. 4 Reductions The set of assertions used for a proof is usually determined by a particular language of assertions, and a safe program may not have a (safety) proof in that particular language. Yet, a subset of the program traces may have a proof in that assertion language. If it can be proven that the subset of program runs that have a safety proof is a faithful representation of all program behaviours (with respect to a given property), then the program is correct. This motivates the notion of program reductions. Definition 4.1 (semantic reduction). If for programs P and \(P'\), \(P'\) is safe implies that P is safe, then \(P'\) is a semantic reduction of P (written \(P' \preceq P\)). The definition immediately gives rise to the following proof rule for proving program safety: $$ \frac{P' \preceq P \quad P' \subseteq \mathcal {L}_{}(\varPi )}{P \text { is safe}} \;(\textsc {SafeRed1}) $$ This generic proof rule is not automatable since, given a proof \(\varPi \), verifying the existence of the appropriate reduction is undecidable. Observe that a program is safe if and only if \(\emptyset \) is a valid reduction of the program. This means that discovering a semantic reduction and proving safety are mutually reducible to each other. To have decidable premises for the proof rule, we need to formulate an easier (than proving safety) problem in discovering a reduction. One way to achieve this is by restricting the set of reductions under consideration from all reductions (given in Definition 4.1) to a proper subset which is more amenable to algorithmic checking. Fixing a set \(\mathcal {R}\) of (semantic) reductions, we will have the rule: $$ \frac{\exists P' \in \mathcal {R}.\; P' \subseteq \mathcal {L}_{}(\varPi )}{P \text { is safe}} \;(\textsc {SafeRed2}) $$ Proposition 4.2 The proof rule SafeRed2 is sound. The core contribution of this paper is that it provides an algorithmic solution inspired by the above proof rule. To achieve this, two subproblems are solved: (1) Given a set \(\mathcal {R}\) of reductions of a program P and a candidate proof \(\varPi \), can we check if there exists a reduction \(P' \in \mathcal {R}\) which is covered by the proof \(\varPi \)? In Sect.
5, we propose a new semantic interpretation of an existing notion of infinite tree automata that gives rise to an algorithmic check for this step. (2) Given a program P, is there a general sound set of reductions \(\mathcal {R}\) that can be effectively represented to accommodate step (1)? In Sect. 6, we propose a construction of an effective set of reductions, representable by our infinite tree automata, using inspirations from existing partial order reduction techniques [15]. 5 Proof Checking Given a set of reductions \(\mathcal {R}\) of a program P, and a candidate proof \(\varPi \), we want to check if there exists a reduction \(P' \in \mathcal {R}\) which is covered by \(\varPi \). We call this proof checking. We use tree automata to represent certain classes of languages (i.e. sets of sets of strings), and then use operations on these automata for the purpose of proof checking. Language \(\{a\}\) as an infinite tree. The set \(\varSigma ^*\) can be represented as an infinite tree. Each \(x \in \varSigma ^*\) defines a path to a unique node in the tree: the root node is located at the empty string \(\epsilon \), and for all \(a \in \varSigma \), the node located at xa is a child of the node located at x. Each node is then identified by the string labeling the path leading to it. A language \(L \subseteq \varSigma ^*\) (equivalently, \(L : \varSigma ^* \rightarrow \mathbb {B}\)) can consequently be represented as an infinite tree where the node at each x is labelled with a boolean value \(B \equiv (x \in L)\). An example is given in Fig. 2. It follows that a set of languages is a set of infinite trees, which can be represented using automata over infinite trees. Looping Tree Automata (LTAs) are a subclass of Büchi Tree Automata where all states are accept states [2]. The class of Looping Tree Automata is closed under intersection and union, and checking emptiness of LTAs is decidable. Unlike Büchi Tree Automata, emptiness can be decided in linear time [2]. A Looping Tree Automaton (LTA) over \(|\varSigma |\)-ary, \(\mathbb {B}\)-labelled trees is a tuple \(M = (Q, \varDelta , q_0)\) where Q is a finite set of states, \(\varDelta \subseteq Q \times \mathbb {B}\times (\varSigma \rightarrow Q)\) is the transition relation, and \(q_0\) is the initial state. Intuitively, an LTA \(M = (Q, \varDelta , q_0)\) performs a parallel and depth-first traversal of an infinite tree L while maintaining some local state. Execution begins at the root \(\epsilon \) from state \(q_0\) and non-deterministically picks a transition \((q_0, B, \sigma ) \in \varDelta \) such that B matches the label at the root of the tree (i.e. \(B = (\epsilon \in L)\)). If no such transition exists, the tree is rejected. Otherwise, M recursively works on each child a from state \(q' = \sigma (a)\) in parallel. This process continues infinitely, and L is accepted if and only if L is never rejected. Formally, M's execution over a tree L is characterized by a run \(\delta ^* : \varSigma ^* \rightarrow Q\) where \(\delta ^*(\epsilon ) = q_0\) and \((\delta ^*(x),\, x \in L,\, a \mapsto \delta ^*(xa)) \in \varDelta \) for all \(x \in \varSigma ^*\). The set of languages accepted by M is then defined as \(\mathcal {L}_{}(M) = \{ L \subseteq \varSigma ^* \mid \text {there exists a run of } M \text { over } L \}\). Theorem 5.2 Given an LTA M and a regular language L, it is decidable whether there exists \(P \in \mathcal {L}_{}(M)\) such that \(P \subseteq L\). The proof, which appears in [14], reduces the problem to deciding whether \(\mathcal {L}_{}(M) \cap \mathcal {P}(L) \ne \emptyset \).
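To make these tree-automaton constructions concrete, the sketch below gives one standard way to build an LTA recognizing \({\mathcal {P}(L)}\) from a DFA for L (anticipating the lemma below), together with a bounded-depth search for a run. This is our own Python illustration under hypothetical names, not code from the paper or from Weaver; if no partial run exists down to some depth, the tree is certainly rejected, while finding one is inconclusive because acceptance is a condition on the whole infinite tree.

```python
def powerset_lta(dfa_states, alphabet, dfa_delta, q0, accepting):
    """Build an LTA whose accepted trees are exactly the subsets of L(A) for a DFA A.
    LTA states are the DFA states; label True is allowed at a node only if the DFA
    state reached along the node's path is accepting; child a continues in dfa_delta[(q, a)]."""
    delta = []
    for q in dfa_states:
        for label in (False, True):
            if label and q not in accepting:
                continue  # a node claimed to be in the language must sit on an accepting DFA state
            delta.append((q, label, {a: dfa_delta[(q, a)] for a in alphabet}))
    return dfa_states, delta, q0

def has_run_up_to_depth(alphabet, delta, q0, in_language, depth):
    """Search for a partial run of an LTA over the tree of `in_language`
    (a predicate on tuples of letters) down to `depth` levels."""
    def search(node, q, d):
        if d == 0:
            return True
        label = in_language(node)
        # non-deterministically pick a transition whose boolean label matches the node
        for (q_src, b, succ) in delta:
            if q_src == q and b == label:
                if all(search(node + (a,), succ[a], d - 1) for a in alphabet):
                    return True
        return False
    return search((), q0, depth)
```

For instance, with a DFA recognizing \(a^*\) over \(\{a, b\}\), the tree of \(\{\epsilon , a\}\) admits a run of the constructed LTA, whereas the tree of \(\{b\}\) is already rejected after exploring two levels.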
LTAs are closed under intersection and have decidable emptiness checks, and the lemma below is the last piece of the puzzle. Lemma 5.3 If L is a regular language, then \({\mathcal {P}(L)}\) is recognized by an LTA. Counterexamples. Theorem 5.2 effectively states that proof checking is decidable. For automated verification, beyond checking the validity of a proof, we require counterexamples to fuel the development of the proof when the proof does not check. Note that in the simple case of the proof rule safe, when \(P \not \subseteq \mathcal {L}_{}(\varPi )\) there exists a counterexample trace \(x \in P\) such that \(x \notin \mathcal {L}_{}(\varPi )\). With our proof rule SafeRed2, things get a bit more complicated. First, note that unlike the classic case (safe), where a failed proof check coincides with the non-emptiness of an intersection check (i.e. \(P \cap \overline{\mathcal {L}_{}(\varPi )} \not = \emptyset \)), in our case, a failed proof check coincides with the emptiness of an intersection check (i.e. \(\mathcal {R} \cap {\mathcal {P}(\mathcal {L}_{}(\varPi ))} = \emptyset \)). The sets \(\mathcal {R}\) and \({\mathcal {P}(\mathcal {L}_{}(\varPi ))}\) are both sets of languages. What does the witness to the emptiness of the intersection look like? Each language member of \(\mathcal {R}\) contains at least one string that does not belong to the proof language \(\mathcal {L}_{}(\varPi )\). One can collect all such witness strings to guarantee progress across the board in the next round. However, since LTAs can represent an infinite set of languages, one must take care not to end up with an infinite set of counterexamples following this strategy. Fortunately, this will not be the case. Theorem 5.4 Let M be an LTA and let L be a regular language such that \(P \not \subseteq L\) for all \(P \in \mathcal {L}_{}(M)\). There exists a finite set of counterexamples C such that, for all \(P \in \mathcal {L}_{}(M)\), there exists some \(x \in C\) such that \(x \in P\) and \(x \notin L\). The proof appears in [14]. This theorem justifies our choice of using LTAs instead of more expressive formalisms such as Büchi Tree Automata. For example, the Büchi Tree Automaton that accepts the language \(\{ \{x\} \mid x \in \varSigma ^* \}\) would give rise to an infinite number of counterexamples with respect to the empty proof (i.e. \(\varPi = \emptyset \)). The finiteness of the counterexample set presents an alternate proof that LTAs are strictly less expressive than Büchi Tree Automata [27]. 6 Sleep Set Reductions We have established so far that (1) a set of assertions gives rise to a regular language proof, and (2) given a regular language proof and a set of program reductions recognizable by an LTA, we can check the program (reductions) against the proof. The last piece of the puzzle is to show that a useful class of program reductions can be expressed using LTAs. Recall our example from Sect. 2. The reduction we obtain is sound because, for every trace in the full parallel-composition program, an equivalent trace exists in the reduced program. By equivalent, we mean that one trace can be obtained from the other by swapping independent statements. Such an equivalence is the essence of the theory of Mazurkiewicz traces [9]. We fix a reflexive symmetric dependence relation \(D \subseteq \varSigma \times \varSigma \). For all \(a, b \in \varSigma \), we say that a and b are dependent if \((a, b) \in D\), and say they are independent otherwise.
We define \(\sim _D\) as the smallest congruence satisfying \(xaby \sim _D xbay\) for all \(x, y \in \varSigma ^*\) and independent \(a, b \in \varSigma \). The closure of a language \(L \subseteq \varSigma ^*\) with respect to \(\sim _D\) is denoted \([L]_D\). A language L is \(\sim _D\)-closed if \(L = [L]_D\). It is worthwhile to note that all input programs considered in this paper correspond to regular languages that are \(\sim _D\)-closed. An equivalence class of \(\sim _D\) is typically called a (Mazurkiewicz) trace. We avoid using this terminology as it conflicts with our definition of traces as strings of statements in Sect. 3.1. We assume D is sound, i.e. \(\llbracket ab\rrbracket = \llbracket ba\rrbracket \) for all independent \(a, b \in \varSigma \). Definition (D-reduction). A program \(P'\) is a D-reduction of a program P, that is \(P' \preceq _D P\), if \([P']_D = P\). Note that the equivalence relation on programs induced by \(\sim _D\) is a refinement of the semantic equivalence relation used in Definition 4.1. If \(P' \preceq _D P\) then \(P' \preceq P\). Ideally, we would like to define an LTA that accepts all D-reductions of a program P, but unfortunately this is not possible in general. Theorem (corollary of Theorem 67 of [9]). For arbitrary regular languages \(L_1, L_2 \subseteq \varSigma ^*\) and relation D, the proposition \(\exists L \subseteq L_1.\ [L]_D = L_2\) is undecidable. The proposition is decidable only when \(\overline{D}\) is transitive, which does not hold for a semantically correct notion of independence for a parallel program encoding a k-safety property, since statements from the same thread are dependent and statements from different program copies are independent. Therefore, we have: Corollary. Assume P is a \(\sim _D\)-closed program and \(\varPi \) is a proof. The proposition \(\exists P' \preceq _D P.\ P' \subseteq \mathcal {L}_{}(\varPi )\) is undecidable. In order to have a decidable premise for proof rule SafeRed2 then, we present an approximation of the set of D-reductions, inspired by sleep sets [15]. The idea is to construct an LTA that recognizes a class of D-reductions of an input program P, whose language is assumed to be \(\sim _D\)-closed. This automaton intuitively makes non-deterministic choices about what program traces to prune in favour of other \(\sim _D\)-equivalent program traces for a given reduction. Different non-deterministic choices lead to different D-reductions. Exploring from x with sleep sets. Consider two statements \(a,b \in \varSigma \) where \((a,b) \not \in D\). Let \(x,y \in \varSigma ^*\) and consider two program runs xaby and xbay. We know \(\llbracket xbay\rrbracket = \llbracket xaby\rrbracket \). If the automaton makes a non-deterministic choice that the successors of xa have been explored, then the successors of xba need not be explored (can be pruned away) as illustrated in Fig. 3. Now assume \((a,c) \in D\), for some \(c \in \varSigma \). When the node xbc is being explored, we can no longer safely ignore a-transitions, since the equality \(\llbracket xbcay\rrbracket = \llbracket xabcy\rrbracket \) is not guaranteed. Therefore, the a-successor of xbc has to be explored. The nondeterministic choice of what child node to explore is modelled by a choice of order in which we explore each node's children. Different orders yield different reductions. Reductions are therefore characterized as an assignment \(R : \varSigma ^* \rightarrow \mathcal {L}in(\varSigma )\) from nodes to linear orderings on \(\varSigma \), where \((a, b) \in R(x)\) means we explore child xa after child xb.
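Before the recurrences below make this exploration scheme precise, it can be illustrated with a small sketch. The Python fragment that follows is our own illustration and not code from the paper: the program is given as a DFA transition map, a single fixed ordering plays the role of R at every node, and a length bound is added so that the enumeration terminates on looping programs.

```python
def sleep_set_reduction(delta, q0, accepting, order, dependent, max_len):
    """Enumerate traces of the reduction induced by one exploration order.
    `delta` maps (state, statement) to a successor state, `order` is a fixed linear
    order on statements, and `dependent(a, b)` encodes the symmetric relation D."""
    traces = []

    def explore(q, trace, sleep):
        if q in accepting:
            traces.append(tuple(trace))      # every accepted prefix is a trace of the program
        if len(trace) == max_len:
            return
        explored_before = set()              # siblings already considered at this node
        for a in order:
            if (q, a) in delta and a not in sleep:   # transitions in the sleep set are pruned
                # child's sleep set: inherit the current one, add earlier siblings,
                # then drop every statement that depends on a
                child_sleep = {b for b in (sleep | explored_before) if not dependent(a, b)}
                explore(delta[(q, a)], trace + [a], child_sleep)
            explored_before.add(a)           # a has now been considered, explored or pruned

    explore(q0, [], set())                   # the sleep set is empty at the root
    return traces
```

Different orderings enumerate different D-reductions of the same program, which is exactly the non-determinism the LTA construction below captures.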
Given \(R : \varSigma ^* \rightarrow \mathcal {L}in(\varSigma )\), the sleep set \(\mathrm {sleep}_R(x) \subseteq \varSigma \) at node \(x \in \varSigma ^*\) defines the set of transitions that can be ignored at x: $$\begin{aligned} \mathrm {sleep}_R(\epsilon )&= \emptyset \end{aligned}$$ $$\begin{aligned} \mathrm {sleep}_R(xa)&= (\mathrm {sleep}_R(x) \cup R(x)(a)) \setminus D(a) \end{aligned}$$ Intuitively, (1) no transition can be ignored at the root node, since nothing has been explored yet, and (2) at node x, the sleep set of xa is obtained by adding the transitions we explored before a (R(x)(a)) and then removing the ones that conflict with a (i.e. are related to a by D). Next, we define the nodes that are ignored. The set of ignored nodes is the smallest set \(\mathrm {ignore}_R : \varSigma ^* \rightarrow \mathbb {B}\) such that $$\begin{aligned} x \in \mathrm {ignore}_R&\implies xa \in \mathrm {ignore}_R \end{aligned}$$ $$\begin{aligned} a \in \mathrm {sleep}_R(x)&\implies xa \in \mathrm {ignore}_R \end{aligned}$$ Intuitively, a node xa is ignored if (1) any of its ancestors is ignored (\(\mathrm {ignore}_R(x)\)), or (2) a is one of the ignored transitions at node x (\(a \in \mathrm {sleep}_R(x)\)). Finally, we obtain an actual reduction of a program P from a characterization of a reduction R by removing the ignored nodes from P, i.e. \(P \setminus \mathrm {ignore}_R\). For all \(R : \varSigma ^* \rightarrow \mathcal {L}in(\varSigma )\), if P is a \(\sim _D\)-closed program then \(P \setminus \mathrm {ignore}_R\) is a D-reduction of P. The set of all such reductions is \(\mathrm {reduce}_D(P) = \{ P \setminus \mathrm {ignore}_R \mid R : \varSigma ^* \rightarrow \mathcal {L}in(\varSigma ) \}\). For any regular language P, \(\mathrm {reduce}_D(P)\) is accepted by an LTA. Interestingly, every reduction in \(\mathrm {reduce}_D(P)\) is optimal in the sense that each reduction contains at most one representative of each equivalence class of \(\sim _D\). Theorem 6.7 Fix some \(P \subseteq \varSigma ^*\) and \(R : \varSigma ^* \rightarrow \mathcal {L}in(\varSigma )\). For all \(x, y \in P \setminus \mathrm {ignore}_R\), if \(x \sim _D y\) then \(x = y\). Counterexample-guided refinement loop. 7 Algorithms Figure 4 illustrates the outline of our verification algorithm. It is a counterexample-guided abstraction refinement loop in the style of [12, 13, 17]. The key difference is that instead of checking whether some proof \(\varPi \) is a proof for the program P, it checks if there exists a reduction of the program P that \(\varPi \) proves correct. The algorithm relies on an oracle Interpolate that, given a finite set of program traces C, returns a proof \(\varPi '\), if one exists, such that \(C \subseteq \mathcal {L}_{}(\varPi ')\). In our tool, we use Craig interpolation to implement the oracle Interpolate. In general, since program traces are the simplest form of sequential programs (loop and branch free), any automated program prover that can handle proving them may be used. The results presented in Sects. 5 and 6 give rise to the proof checking subroutine of the algorithm in Fig. 4 (i.e. the light grey test).
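The overall loop of Fig. 4 can be summarized by the following sketch. It is plain Python with placeholder function names such as proof_check, find_counterexamples and interpolate; the convention that the interpolation oracle returns None on a feasible trace is our own simplification for the example, not Weaver's actual interface.

```python
def verify(program, dependence, proof_check, find_counterexamples, interpolate):
    """Counterexample-guided search for a reduction and a proof simultaneously."""
    proof = {"true", "false"}                  # the candidate proof Pi starts with the trivial assertions
    while True:
        if proof_check(program, proof, dependence):
            return "SAFE", proof               # some reduction P' satisfies P' contained in L(Pi)
        counterexamples = find_counterexamples(program, proof, dependence)
        for trace in counterexamples:
            new_assertions = interpolate([trace])
            if new_assertions is None:         # the trace is feasible: the property is violated
                return "UNSAFE", trace
            proof |= new_assertions            # grow Pi so that this trace is provably infeasible
```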
Given a program DFA \(A_P = (Q_P, \varSigma , \delta _P, q_{P0}, F_P)\) and a proof DFA \(A_\varPi = (Q_\varPi , \varSigma , \delta _\varPi , q_{\varPi 0}, F_\varPi )\) (obtained by determinizing \(\varPi _{NFA}\)), we can decide whether some \(P' \in \mathrm {reduce}_D(\mathcal {L}_{}(A_P))\) satisfies \(P' \subseteq \mathcal {L}_{}(A_\varPi )\) by constructing an LTA \(M_{P\varPi }\) for \(\mathrm {reduce}_D(\mathcal {L}_{}(A_P)) \cap {\mathcal {P}(\mathcal {L}_{}(A_\varPi ))}\) and checking emptiness (Theorem 5.2). 7.1 Progress The algorithm corresponding to Fig. 4 satisfies a weak progress theorem: none of the counterexamples from a round of the algorithm will ever appear in a future counterexample set. This, however, is not strong enough to guarantee termination. Alternatively, one can think of the algorithm's progress as follows. In each round new assertions are discovered through the oracle Interpolate, and one can optimistically hope that one can finally converge on an existing target proof \(\varPi ^*\). The success of this algorithm depends on two factors: (1) the counterexamples used by the algorithm belong to \(\mathcal {L}_{}(\varPi ^*)\) and (2) the proof that Interpolate discovers for these counterexamples coincides with \(\varPi ^*\). The latter is a typical known wild card in software model checking, which cannot be guaranteed; there is plenty of empirical evidence, however, that procedures based on Craig Interpolation do well in approximating it. The former is a new problem for our refinement loop. In a standard algorithm in the style of [12, 13, 17], the verification proof rule dictates that every program trace must be in \(\mathcal {L}_{}(\varPi ^*)\). In our setting, we only require a subset (corresponding to some reduction) to be in \(\mathcal {L}_{}(\varPi ^*)\). This means one cannot simply rely on program traces as appropriate counterexamples. Theorem 5.4 presents a solution to this problem. It ensures that we always feed Interpolate some counterexample from \(\mathcal {L}_{}(\varPi ^*)\) and therefore guarantee progress. Theorem 7.1 (Strong Progress). Assume a proof \(\varPi ^*\) exists for some reduction \(P^* \in \mathcal {R}\) and Interpolate always returns some subset of \(\varPi ^*\) for traces in \(\mathcal {L}_{}(\varPi ^*)\). Then the algorithm will terminate in at most \(|\varPi ^*|\) iterations. Theorem 7.1 ensures that the algorithm will never get into an infinite loop due to a bad choice of counterexamples. The condition on Interpolate ensures that divergence does not occur due to the wrong choice of assertions by Interpolate, and without it any standard interpolation-based software model checking algorithm may diverge. The assumption that there exists a proof for a reduction of the program in the fixed set \(\mathcal {R}\) ensures that the proof checking procedure can verify the target proof \(\varPi ^*\) once it is reached. Note that, in general, a proof may exist for a reduction of the program which is not in \(\mathcal {R}\). Therefore, the algorithm is not complete with respect to all reductions, since checking the premises of SafeRed1 is undecidable as discussed in Sect. 4. 7.2 Faster Proof Checking Through Antichains The state set of \(M_{P\varPi }\), the intersection of program and proof LTAs, has size \(|Q_P \times \mathbb {B}\times \mathcal {P}(\varSigma ) \times Q_\varPi |\), which is exponential in \(|\varSigma |\). Therefore, even a linear emptiness test for this LTA can be computationally expensive.
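To get a rough sense of this blowup, the count \(|Q_P| \cdot 2 \cdot 2^{|\varSigma |} \cdot |Q_\varPi |\) can be evaluated for some illustrative sizes; the numbers below are our own and are not taken from the paper's benchmarks.

```python
def lta_state_count(n_program_states, n_proof_states, alphabet_size):
    """Size of Q_P x B x P(Sigma) x Q_Pi, exponential in the alphabet size."""
    return n_program_states * 2 * 2 ** alphabet_size * n_proof_states

# With 100 program locations, 50 proof states and 20 statements, the product
# automaton already has 100 * 2 * 2**20 * 50 = 10,485,760,000 states.
print(lta_state_count(100, 50, 20))
```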
Antichains have been previously used [8] to optimize certain operations over NFAs that also suffer from exponential blowups, such as deciding universality and inclusion tests. The main idea is that these operations involve computing downwards-closed and upwards-closed sets according to an appropriate subsumption relation, which can be represented compactly as antichains. We employ similar techniques to propose a new emptiness check algorithm. Antichains. The set of maximal elements of a set X with respect to some ordering relation \(\sqsubseteq \) is denoted \({\text {max}}(X)\). The downwards-closure of a set X with respect to \(\sqsubseteq \) is denoted \({\lfloor X\rfloor }\). An antichain is a set X where no element of X is related (by \(\sqsubseteq \)) to another. The set \({\text {max}}(X)\) of maximal elements of a finite set X is an antichain. If X is downwards-closed then \({\lfloor {\text {max}}(X)\rfloor } = X\). The emptiness check algorithm for LTAs from [2] computes the set of inactive states (i.e. states which generate an empty language) and checks if the initial state is inactive. The set of inactive states of an LTA \(M = (Q, \varDelta , q_0)\) is defined as the smallest set \(\mathrm {inactive}(M)\) satisfying the rule (Inactive): \(q \in \mathrm {inactive}(M)\) whenever, for every transition \((q, B, \sigma ) \in \varDelta \), there exists some \(a \in \varSigma \) such that \(\sigma (a) \in \mathrm {inactive}(M)\). Alternatively, one can view \(\mathrm {inactive}(M)\) as the least fixed-point of a monotone (with respect to \(\subseteq \)) function \(F_M : {\mathcal {P}(Q)} \rightarrow {\mathcal {P}(Q)}\) where $$ F_M(X) = \{ q \in Q \mid \forall (q, B, \sigma ) \in \varDelta .\ \exists a \in \varSigma .\ \sigma (a) \in X \} $$ Therefore, \(\mathrm {inactive}(M)\) can be computed using a standard fixpoint algorithm. If \(\mathrm {inactive}(M)\) is downwards-closed with respect to some subsumption relation \((\sqsubseteq ) \subseteq Q \times Q\), then we need not represent all of \(\mathrm {inactive}(M)\). The antichain \(\max (\mathrm {inactive}(M))\) of maximal elements of \(\mathrm {inactive}(M)\) (with respect to \(\sqsubseteq \)) would be sufficient to represent the entirety of \(\mathrm {inactive}(M)\), and can be exponentially smaller than \(\mathrm {inactive}(M)\), depending on the choice of relation \(\sqsubseteq \). A trivial way to compute \(\max (\mathrm {inactive}(M))\) is to first compute \(\mathrm {inactive}(M)\) and then find the maximal elements of the result, but this involves doing strictly more work than the baseline algorithm. However, observe that if \(F_M\) also preserves downwards-closedness with respect to \(\sqsubseteq \), then $$\begin{aligned} \max (\mathrm {inactive}(M)) =&\max ({\text {lfp}}(F_M)) \\ =&\max ({\text {lfp}}(F_M \circ {\lfloor -\rfloor } \circ \max )) = {\text {lfp}}(\max \circ F_M \circ {\lfloor -\rfloor }) \end{aligned}$$ That is, \(\max (\mathrm {inactive}(M))\) is the least fixed-point of a function \(F^{\max }_M : {\mathcal {P}(Q)} \rightarrow {\mathcal {P}(Q)}\) defined as \(F^{\max }_M(X) = \max (F_M({\lfloor X\rfloor }))\). We can calculate \(\max (\mathrm {inactive}(M))\) efficiently if we can calculate \(F^{\max }_M(X)\) efficiently, which is true in the special case of the intersection automaton for the languages of our proof \({\mathcal {P}(\mathcal {L}_{}(\varPi ))}\) and our program \(\mathrm {reduce}_D(P)\), which we refer to as \(M_{P\varPi }\). We are most interested in the state space of \(M_{P\varPi }\), which is \(Q_{P\varPi } = (Q_P \times \mathbb {B}\times {\mathcal {P}(\varSigma )}) \times Q_\varPi \). Observe that states whose \(\mathbb {B}\) part is \(\top \) are always active: \(((q_P, \top , S), q_\varPi ) \notin \mathrm {inactive}(M_{P\varPi })\) for all \(q_P \in Q_P\), \(q_\varPi \in Q_\varPi \), and \(S \subseteq \varSigma \).
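For reference, the baseline fixpoint computation of \(\mathrm {inactive}(M)\), which the antichain representation improves upon, can be sketched as follows. This is our own Python illustration of the operator \(F_M\) as stated above, with states and transitions given as explicit finite collections.

```python
def inactive_states(states, alphabet, delta):
    """Baseline LTA emptiness check: iterate F_M to its least fixpoint.
    A state becomes inactive when every transition from it has at least one
    inactive successor; M accepts no tree iff the initial state is inactive."""
    inactive = set()
    changed = True
    while changed:
        changed = False
        for q in states:
            if q in inactive:
                continue
            outgoing = [succ for (q_src, b, succ) in delta if q_src == q]
            if all(any(succ[a] in inactive for a in alphabet) for succ in outgoing):
                inactive.add(q)     # also covers states with no outgoing transitions
                changed = True
    return inactive
```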
The state space can then be assumed to be \(Q_{P\varPi } = (Q_P \times \{\bot \} \times {\mathcal {P}(\varSigma )}) \times Q_\varPi \) for the purposes of checking inactivity. The subsumption relation \(\sqsubseteq _{P\varPi }\), defined as the smallest relation satisfying $$ S \subseteq S' \implies ((q_P, \bot , S), q_\varPi ) \sqsubseteq _{P\varPi } ((q_P, \bot , S'), q_\varPi ) $$ for all \(q_P \in Q_P\), \(q_\varPi \in Q_\varPi \), and \(S, S' \subseteq \varSigma \), is a suitable one, since \(F_{M_{P\varPi }}\) preserves downwards-closedness with respect to \(\sqsubseteq _{P\varPi }\). The function \(F^{\max }_{M_{P\varPi }}\) is a function over relations $$ F^{\max }_{M_{P\varPi }} : {\mathcal {P}((Q_P \times \{\bot \} \times {\mathcal {P}(\varSigma )}) \times Q_\varPi )} \rightarrow {\mathcal {P}((Q_P \times \{\bot \} \times {\mathcal {P}(\varSigma )}) \times Q_\varPi )} $$ but in our case it is more convenient to view it as a function over functions $$ F^{\max }_{M_{P\varPi }} : (Q_P \times \{\bot \} \times Q_\varPi \rightarrow {\mathcal {P}({\mathcal {P}(\varSigma )})}) \rightarrow (Q_P \times \{\bot \} \times Q_\varPi \rightarrow {\mathcal {P}({\mathcal {P}(\varSigma )})}) $$ Through some algebraic manipulation and some simple observations, we can define \(F^{\max }_{M_{P\varPi }}\) functionally as follows. For all \(q_P \in Q_P\), \(q_\varPi \in Q_\varPi \), and \(X : Q_P \times \{\bot \} \times Q_\varPi \rightarrow {\mathcal {P}({\mathcal {P}(\varSigma )})}\), $$\begin{aligned} q'_P&= \delta _P(q_P, a)&X \sqcap Y&= {\text {max}} \{ x \cap y \mid x \in X \wedge y \in Y \} \\ q'_\varPi&= \delta _\varPi (q_\varPi , a)&X \sqcup Y&= {\text {max}} (X \cup Y) \end{aligned}$$ $$ S' = {\left\{ \begin{array}{ll} \{ (S \cup D(a)) \setminus \{a \} \} &{} \text {if }R(a) \setminus D(a) \subseteq S \\ \emptyset &{} \text {otherwise} \end{array}\right. } $$ A full justification appears in [14]. Formulating \(F^{\max }_{M_{P\varPi }}\) as a higher-order function allows us to calculate \(\max (\mathrm {inactive}(M_{P\varPi }))\) using efficient fixpoint algorithms like the one in [22]. Algorithm 1 outlines our proof checking routine. \(\textsc {Fix} : ((A \rightarrow B) \rightarrow (A \rightarrow B)) \rightarrow (A \rightarrow B)\) is a procedure that computes the least fixpoint of its input. The algorithm simply computes the fixpoint of the function \(F^{\max }_{M_{P\varPi }}\) as defined in Lemma 7.4, which is a compact representation of \(\mathrm {inactive}(M_{P\varPi })\), and checks if the start state of \(M_{P\varPi }\) is in it. Counterexamples. Theorem 5.4 states that a finite set of counterexamples exists whenever the proof check fails, that is, whenever no \(P' \in \mathrm {reduce}_D(P)\) satisfies \(P' \subseteq \mathcal {L}_{}(\varPi )\). The proof of emptiness for an LTA, formed using rule Inactive above, is a finite tree. Each edge in the tree is labelled by an element of \(\varSigma \) (obtained from the existential in the rule) and the paths through this tree form the counterexample set. To compute this set, then, it suffices to remember enough information during the computation of \(\mathrm {inactive}(M)\) to reconstruct the proof tree. Every time a state q is determined to be inactive, we must also record the witness \(a \in \varSigma \) for each transition \((q, B, \sigma ) \in \varDelta \) such that \(\sigma (a) \in \mathrm {inactive}(M)\). In an antichain-based algorithm, once we determine a state q to be inactive, we simultaneously determine everything it subsumes (i.e. \(\sqsubseteq q\)) to be inactive as well.
If we record unique witnesses for each and every state that q subsumes, then the space complexity of our antichain algorithm will be the same as the unoptimized version. The following lemma states that it is sufficient to record witnesses only for q and discard witnesses for states that q subsumes. Fix some states \(q, q'\) such that \(q' \sqsubseteq _{P\varPi } q\). A witness used to prove q is inactive can also be used to prove \(q'\) is inactive. Note that this means that the antichain algorithm soundly returns potentially fewer counterexamples than the original one. 7.3 Partition Optimization The LTA construction for \(\mathrm {reduce}_D(P)\) involves a nondeterministic choice of linear order at each state. Since \(|\mathcal {L}in(\varSigma )|\) has size \(|\varSigma |!\), each state in the automaton would have a large number of transitions. As an optimization, our algorithm selects ordering relations out of \(\mathcal {P}art(\varSigma )\) (instead of \(\mathcal {L}in(\varSigma )\)), defined as \(\mathcal {P}art(\varSigma ) = \{ \varSigma _1 \times \varSigma _2 \mid \varSigma _1 \uplus \varSigma _2 = \varSigma \}\) where \(\uplus \) is disjoint union. This leads to a sound algorithm which is not complete with respect to sleep set reductions and trades the factorial complexity of computing \(\mathcal {L}in(\varSigma )\) for an exponential one. 8 Experimental Results To evaluate our approach, we have implemented our algorithm in a tool called Weaver written in Haskell. Weaver accepts a program written in a simple imperative language as input, where the property is already encoded in the program in the form of assume statements, and attempts to prove the program correct. The dependence relation for each input program is computed using a heuristic that ensures \(\sim _D\)-closedness. It is based on the fact that the shuffle product (i.e. parallel composition) of two \(\sim _D\)-closed languages is \(\sim _D\)-closed. Weaver employs two verification algorithms: (1) The total order algorithm presented in Algorithm 1, and (2) the variation with the partition optimization discussed in Sect. 7.3. It also implements multiple counterexample generation algorithms: (1) Naive: selects the first counterexample in the difference of the program and proof language. (2) Progress-Ensuring: selects a set of counterexamples satisfying Theorem 5.4. (3) Bounded Progress-Ensuring: selects a few counterexamples (in most cases just one) from the set computed by the progress-ensuring algorithm. Our experimentation demonstrated that in the vast majority of the cases, the bounded progress ensuring algorithm (an instance of the partition algorithm) is the fastest of all options. Therefore, all our reports in this section are using this instance of the algorithm. For the larger benchmarks, we use a simple sound optimization to reduce the proof size. We declare the basic blocks of code as atomic, so that internal assertions need not be generated for them as part of the proof. This optimization is incomplete with respect to sleep set reductions. Benchmarks. We use a set of sequential benchmarks from [24] and include additional sequential benchmarks that involve more interesting reductions in their proofs. We have a set of parallel benchmarks, which are beyond the scope of previous hypersafety verification techniques. We use these benchmarks to demonstrate that our technique/tool can seamlessly handle concurrency. 
These involve proving concurrency-specific hypersafety properties such as determinism and equivalence of parallel and sequential implementations of algorithms. Finally, since the proof checking algorithm is the core contribution of this paper, we have a contrived set of instances to stress test our algorithm. These involve proving determinism of simple parallel-disjoint programs with various numbers of threads and statements per thread. These benchmarks have been designed to cause a combinatorial explosion for the proof checker and counterexample generation routines. More information on the benchmarks can be found in [14]. Due to space restrictions, it is not feasible to include a detailed account of all our experiments on the more than 50 benchmarks here. A detailed table can be found in [14]. Table 1 includes a summary in the form of averages, and here, we discuss our top findings. [Table 1. Experimental result averages for benchmark groups. Columns: group, count, proof size, number of refinement rounds, proof construction time, proof checking time. Groups: looping programs of [24] (2-safety properties), loop-free programs of [24], our sequential benchmarks, our parallel benchmarks.] Proof construction time refers to the time spent to construct \(\mathcal {L}_{}(\varPi )\) from a given set of assertions \(\varPi \) and excludes the time to produce proofs for the counterexamples in a given round. Proof checking time is the time spent to check if the current proof candidate is strong enough for a reduction of the program. In the fastest instances (total time around 0.01 s), roughly equal time is spent in proof checking and proof construction. In the slowest instances, the total time is almost entirely spent in proof construction. In contrast, in our stress tests (designed to stress the proof checking algorithm) the majority of the time is spent in proof checking. The time spent in proving counterexamples correct is negligible in all instances. Proof sizes vary from 4 assertions to 298 for the most complicated instance. Verification times are correlated with the final proof size; larger proofs tend to cause longer verification times. Numbers of refinement rounds vary from 2 for the simplest to 33 for the most complicated instance. A small number of refinement rounds (e.g. 2) implies a fast verification time. But for higher numbers of rounds, a strong positive correlation between the number of rounds and verification time does not exist. For our parallel program benchmarks (other than our stress tests), the tool spends the majority of its time in proof construction. Therefore, we designed specific (unusual) parallel programs to stress test the proof checker. Stress test benchmarks are trivial tests of determinism of disjoint parallel programs, which can be proven correct easily by using the atomic block optimization. However, we force the tool to do the unnecessary hard work. These instances simulate the worst case theoretical complexity where the proof checking time and number of counterexamples grow exponentially with the number of threads and the sizes of the threads. In the largest instance, more than 99% of the total verification time is spent in proof checking. Averages are not very informative for these instances, and therefore are not included in Table 1. Finally, Weaver is only slow for verifying 3-safety properties of large looping benchmarks from [24].
Note that unlike the approach in [24], which starts from a default lockstep reduction (that is incidentally sufficient to prove these instances), we do not assume any reduction and consider them all. The extra time is therefore expected when the product programs become quite large. 9 Related Work The notion of a k-safety hyperproperty was introduced in [7] without consideration for automatic program verification. The approach of reducing k-safety to 1-safety by self-composition is introduced in [5]. While theoretically complete, self-composition is not practical as discussed in Sect. 1. Product programs generalize the self-composition approach and have been used for translation validation [20], non-interference [16, 23], and program optimization [25]. A product of two programs \(P_1\) and \(P_2\) is semantically equivalent to \(P_1 \cdot P_2\) (sequential composition), but is made easier to verify by allowing parts of each program to be interleaved. The product programs proposed in [3] allow lockstep interleaving exclusively, but only when the control structures of \(P_1\) and \(P_2\) match. This restriction is lifted in [4] to allow some non-lockstep interleavings. However, the given construction rules are non-deterministic, and the choice of product program is left to the user or a heuristic. Relational program logics [6, 28] extend traditional program logics to allow reasoning about relational program properties; however, automation is usually not addressed. Automatic construction of product programs is discussed in [10] with the goal of supporting procedure specifications and modular reasoning, but is also restricted to lockstep interleavings. Our approach does not support procedure calls but is fully automated and permits non-lockstep interleavings. The key feature of our approach is the automation of the discovery of an appropriate program reduction and a proof combined. In this respect, the only other comparable method is the one based on Cartesian Hoare Logic (CHL) proposed in [24], along with an algorithm for automatic verification based on CHL. Their proposed algorithm implicitly constructs a product program, using a heuristic that favours lockstep executions as much as possible, and then prioritizes certain rules of the logic over the rest. The heuristic nature of the search for the proof means that no characterization of the search space can be given, and no guarantees can be made about whether an appropriate product program will be found. In contrast, we have a formal characterization of the set of explored product programs in this paper. Moreover, CHL was not designed to deal with concurrency. Lipton [19] first proposed reduction as a way to simplify reasoning about concurrent programs. His ideas have been employed in a semi-automatic setting in [11]. Partial-order reduction (POR) is a class of techniques that reduces the state space of search by removing redundant paths. POR techniques are concerned with finding a single (preferably minimal) reduction of the input program. In contrast, we use the same underlying ideas to explore many program reductions simultaneously. The class of reductions described in Sect. 6 is based on the sleep set technique of Godefroid [15]. Other techniques exist [1, 15] that are used in conjunction with sleep sets to achieve minimality in a normal POR setting. In our setting, reductions generated by sleep sets are already optimal (Theorem 6.7).
However, employing these additional POR techniques may suggest ways of optimizing our proof checking algorithm by producing a smaller reduction LTA.
References
Abdulla, P.A., Aronis, S., Jonsson, B., Sagonas, K.: Source sets: a foundation for optimal dynamic partial order reduction. J. ACM (JACM) 64(4), 25 (2017)
Baader, F., Tobies, S.: The inverse method implements the automata approach for modal satisfiability. In: Goré, R., Leitsch, A., Nipkow, T. (eds.) IJCAR 2001. LNCS, vol. 2083, pp. 92–106. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45744-5_8
Barthe, G., Crespo, J.M., Kunz, C.: Relational verification using product programs. In: Butler, M., Schulte, W. (eds.) FM 2011. LNCS, vol. 6664, pp. 200–214. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21437-0_17
Barthe, G., Crespo, J.M., Kunz, C.: Beyond 2-safety: asymmetric product programs for relational program verification. In: Artemov, S., Nerode, A. (eds.) LFCS 2013. LNCS, vol. 7734, pp. 29–43. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-35722-0_3
Barthe, G., D'argenio, P.R., Rezk, T.: Secure information flow by self-composition. Math. Struct. Comput. Sci. 21(6), 1207–1252 (2011)
Benton, N.: Simple relational correctness proofs for static analyses and program transformations. In: ACM SIGPLAN Notices, vol. 39, pp. 14–25. ACM (2004)
Clarkson, M.R., Schneider, F.B.: Hyperproperties. In: 21st IEEE Computer Security Foundations Symposium, pp. 51–65. IEEE (2008)
De Wulf, M., Doyen, L., Henzinger, T.A., Raskin, J.-F.: Antichains: a new algorithm for checking universality of finite automata. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 17–30. Springer, Heidelberg (2006). https://doi.org/10.1007/11817963_5
Diekert, V., Métivier, Y.: Partial commutation and traces. In: Rozenberg, G., Salomaa, A. (eds.) Handbook of Formal Languages, pp. 457–533. Springer, Heidelberg (1997). https://doi.org/10.1007/978-3-642-59126-6_8
Eilers, M., Müller, P., Hitz, S.: Modular product programs. In: Ahmed, A. (ed.) ESOP 2018. LNCS, vol. 10801, pp. 502–529. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89884-1_18
Elmas, T., Qadeer, S., Tasiran, S.: A calculus of atomic actions. In: ACM SIGPLAN Notices, vol. 44, pp. 2–15. ACM (2009)
Farzan, A., Kincaid, Z., Podelski, A.: Inductive data flow graphs. In: ACM SIGPLAN Notices, vol. 48, pp. 129–142. ACM (2013)
Farzan, A., Kincaid, Z., Podelski, A.: Proof spaces for unbounded parallelism. In: ACM SIGPLAN Notices, vol. 50, pp. 407–420. ACM (2015)
Farzan, A., Vandikas, A.: Reductions for automated hypersafety verification (2019)
Godefroid, P. (ed.): Partial-Order Methods for the Verification of Concurrent Systems: An Approach to the State-Explosion Problem, vol. 1032. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-60761-7
Goguen, J.A., Meseguer, J.: Security policies and security models. In: 1982 IEEE Symposium on Security and Privacy, p. 11. IEEE (1982)
Heizmann, M., Hoenicke, J., Podelski, A.: Refinement of trace abstraction. In: Palsberg, J., Su, Z. (eds.) SAS 2009. LNCS, vol. 5673, pp. 69–85. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03237-0_7
Hopcroft, J.E., Motwani, R., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation, 3rd edn. Addison-Wesley Longman Publishing Co. Inc., Boston (2006)
Lipton, R.J.: Reduction: a method of proving properties of parallel programs. Commun. ACM 18(12), 717–721 (1975)
Pnueli, A., Siegel, M., Singerman, E.: Translation validation. In: Steffen, B. (ed.) TACAS 1998. LNCS, vol. 1384, pp. 151–166. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0054170
Popeea, C., Rybalchenko, A., Wilhelm, A.: Reduction for compositional verification of multi-threaded programs. In: Formal Methods in Computer-Aided Design (FMCAD), 2014, pp. 187–194. IEEE (2014)
Pottier, F.: Lazy least fixed points in ML (2009)
Sabelfeld, A., Myers, A.C.: Language-based information-flow security. IEEE J. Sel. Areas Commun. 21(1), 5–19 (2003)
Sousa, M., Dillig, I.: Cartesian Hoare logic for verifying k-safety properties. In: ACM SIGPLAN Notices, vol. 51, pp. 57–69. ACM (2016)
Sousa, M., Dillig, I., Vytiniotis, D., Dillig, T., Gkantsidis, C.: Consolidation of queries with user-defined functions. In: ACM SIGPLAN Notices, vol. 49, pp. 554–564. ACM (2014)
Terauchi, T., Aiken, A.: Secure information flow as a safety problem. In: Hankin, C., Siveroni, I. (eds.) SAS 2005. LNCS, vol. 3672, pp. 352–367. Springer, Heidelberg (2005). https://doi.org/10.1007/11547662_24
Vardi, M.Y., Wolper, P.: Reasoning about infinite computations. Inf. Comput. 115(1), 1–37 (1994)
Yang, H.: Relational separation logic. Theor. Comput. Sci. 375(1–3), 308–334 (2007)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
1. University of Toronto, Toronto, Canada
Farzan A., Vandikas A. (2019) Automated Hypersafety Verification. In: Dillig I., Tasiran S. (eds) Computer Aided Verification. CAV 2019. Lecture Notes in Computer Science, vol 11561. Springer, Cham
CommonCrawl
Development of a prototype tool for ballast water risk management using a combination of hydrodynamic models and agent-based modeling Flemming T. Hansen1, Michael Potthoff1, Thomas Uhrenholdt1, Hong D. Vo3, Olof Linden2 & Jesper H. Andersen4 WMU Journal of Maritime Affairs volume 14, pages 219–245 (2015). We report the development of a prototype tool for modeling the risks of spreading of non-indigenous invasive species via ballast water. The tool consists of two types of models: a 3D hydrodynamical model calculates the currents in the North Sea and Danish Straits, and an agent-based model estimates the dispersal of selected model organisms with the prevailing currents calculated by the 3D hydrodynamical model. The analysis is concluded by a postprocessing activity, where scenarios of dispersal are combined into an interim estimate of connectivity within the study area. The latter can be used for assessment of potential risk associated with intentional or unintentional discharges of ballast water. We discuss how this prototype tool can be used for ballast water risk management and outline other functions and uses, e.g., in regard to ecosystem-based management and the implementation of the EU Marine Strategy Framework Directive. Transfers of non-indigenous species may potentially pose a threat to the receiving ecosystem and to society. Organisms that mass-reproduce when they are transported and released in new environments by humans are "non-indigenous invasive species", sometimes referred to as "alien species" (definitions and discussions of these concepts can be found in Ruiz and Carlton 2003). Ultimately, they may have serious ecological and economic impacts on ecosystems. Ships' ballast water is a main source of non-indigenous marine organisms and, when released, some of these species have caused dramatic and permanent damage to coastal ecosystems around the world (e.g., Bax et al. 2003; Leppäkoski et al. 2003). Since it has been impossible in practice to control and mitigate the spreading of established invasive species in "new" marine environments, efforts should focus on the prevention of introductions. The trend in biological invasion shows an exponential increase during the last 200 years (Leppäkoski et al. 2003). Nevertheless, the problem of marine bio-invasion and the resulting environmental consequences has taken time to be recognized. One of many reasons is that, unlike terrestrial ecosystems, bioinvasions in the marine environment are poorly understood since their consequences may be difficult to observe. Many bioinvasions are considered a threat to ecosystems, to human health, and to the economy. The invaded ecosystems are threatened as exotic species may alter ecosystem functions and community structure. In some cases, non-indigenous species cause a dramatic decline in important marine resources by competing with the native species for food and habitat, by predation or parasitism, or through other indirect interactions. Biological invasions may also interact in combination with other natural and anthropogenic environmental factors such as climate change, habitat destruction and pollution to jeopardize the integrity of the ecosystem. This means that the impacts of future introductions are uncertain as well as unpredictable and current invasions may evolve into unforeseen intricate patterns.
More challenges are forecast to come in handling introductions, and efforts should be focused on preventing new establishments in order to protect the native biota and its diversity by means of managing human activity. The scope of this work is to demonstrate the usefulness of the combination of hydrodynamical (HD) and agent-based models (ABM) as a tool for understanding the ecological connectivity of marine areas of the North Sea, Kattegat, and the inner Danish straits and their application for risk assessment of invasive species brought to the region (or transported within the region) by ballast water from sea vessels. Ecological connectivity mapping is proposed to provide a theoretical basis for identifying areas where ballast water could be released with the least potential risk for species to spread extensively within the region. In addition, connectivity mapping can provide knowledge on how well each specific part of the region is connected with other parts of the region.

The study area is the North Sea, located on the continental shelf of north-western Europe and bordered by England, Scotland, Sweden, Norway, Denmark, Germany, the Netherlands, Belgium, and France. It opens out to the Atlantic Ocean, the English Channel, and towards the less saline Baltic Sea. The Greater North Sea (as defined by OSPAR) has a surface area of 750,000 km² and a volume of 94,000 km³ and is separated into various areas: the relatively shallow southern North Sea (the Southern Bight and the German Bight), the central North Sea, the northern North Sea, the Norwegian Trench, and the Skagerrak, which is a shallow transition zone between the Baltic and the North Sea (OSPAR 2010). In the eastern part of the North Sea, along the coasts of Denmark and Germany, mean winter sea surface temperature (SST) is usually less than 3 °C. The summer mean SST is 18 °C and declines to 13 °C northwards along eastern Britain (Hayward and Ryland 1995). The annual freshwater river input is ca. 300 km³; about one third of that comes from the snow-melt waters of Norway and Sweden and the rest from major rivers, while the main source of fresh water supply is through the Baltic Sea (Hayward and Ryland 1995). The most prominent current circulation of the North Sea is a roughly anticlockwise flow where residual currents move southwards along the east coast of the UK and northwards along the Western European coast. Saline water enters the Baltic Sea through Kattegat at depth while surface flow of brackish water from the Baltic Sea enters the Kattegat and North Sea (OSPAR 2010). The North Sea, which is an economically and ecologically important marine region, is sensitive to a range of human activities. Key issues are nutrient enrichment and eutrophication, contamination with hazardous substances, overfishing, physical modification and an unfavorable biodiversity status (OSPAR 2010; HELCOM 2010). Introduction of non-indigenous species has been identified as an emerging issue. Many areas in the North Sea are valuable habitat for marine life as well as of economic importance for the surrounding states. The shallow and productive North Sea, while being one of the busiest seas in the world for seaborne trade, is also heavily exploited for its natural resources. Activities such as fishing, dredging, oil and gas exploration, shipping, and discharges of nutrients and contaminants have polluted as well as depleted reserves in the area.
Increases in awareness for the protection of the environment and resource management in the North Sea have surged during the last decades (Misund and Skjoldal 2005), and one significant issue that has been identified relatively recently is the ramifications of species introduction via ship transportation and accidental releases from aquaculture. The most up-to-date inventory of alien species in the North Sea region lists alien aquatic species of 167 taxa (Gollasch et al. 2009). This calls for a precautionary approach to prevent future arrivals of new species. One of the key vectors in moving aquatic alien species is shipping, e.g., ballast water-mediated species introductions from ships prevail in many regions world-wide. The applied hydrodynamic model is based on the MIKE 3 modeling system developed by DHI. The MIKE 3 model is a dynamic time-dependent 3D baroclinic model for free surface flows. The mathematical foundation of the model are the Reynolds-averaged Navier-Stokes equations in three dimensions, including the effects of turbulence and variable density, together with conservation equations for mass, heat, and salt, an equation of state for the density, a turbulence module and a heat exchange module. The equations are solved on a Cartesian grid by means of the finite difference techniques. The hydrodynamic model provides a full 3D model representation of the water levels, flows, salinity, temperature, and density within the modeling domain. For more information on the MIKE 3 modeling system, reference is made to DHI (2009) and DHI (2011). Model setup A 3D North Sea hydrodynamic model is applied as the basis for the agent-based modeling (ABM) and subsequent connectivity modeling. A period of a full year (2005) has been modeled in order to capture the seasonal and higher frequency variability of the North Sea circulation. The model represents the overall circulation patterns in the North Sea and the Belt Sea comprising of tide, meteorologically and density-driven circulation, freshwater inputs, and stratification. A local 3D hydrodynamic model resolving the Belt Sea in a higher resolution has also been applied. The model domain includes the major part of the North Sea, the Belt Sea, and the Baltic Sea. The model applies a Cartesian grid in UTM-32 projection with a horizontal resolution of 3 nautical miles. In the vertical dimension, a 2-m resolution is used, with a maximum of 110 layers depending on the local water depth. However, the surface layer with surface elevation varying with the actual tide has a typical thickness of 5 m. For areas with depths under level −223 m, the rest of the water column is included in the lowest layer. The model domain is shown in Fig. 1. Model domain and location of selected measurement stations. The thick black lines indicate the open boundaries of the model The model has been run for the period 2000–2008, but the period applied for the present purpose is the year 2005. The model runs with a hydrodynamic time step of 300 s. The model results in terms of 3D fields of, for example, current, salinity, and temperature are saved every 1 h. The forcings on the open boundaries towards the Norwegian Sea and the English Channel include: (1) Astronomical tide along boundary (actual values for 2005), (2) salinity distribution in vertical sections (monthly climatologic from ICES), and (3) temperature distribution in vertical sections (monthly climatologic from ICES). 
Atmospheric forcings (e.g., wind, air pressure, air temperature, cloudiness, and precipitation (actual 2D maps with 1 h resolution)) were originally delivered by Vejr2, a former meteorological forecast service provider. The wind and air pressure are incorporated in the momentum equations, and the precipitation is used in the mass equation. Wind is also included in the turbulence module. The heat exchange module, which calculates the sea-air heat exchange, makes use of wind, air temperature and cloudiness. The runoff in terms of discharges of freshwater from land to the model domain is represented in the model by 85 source points. The runoff is based on data from SMHI's operational HBV model and on data from Global Runoff Data Centre (GRDC). It is important to note that the applied sources are lumped sources, which means that they represent the main rivers at each location as well as the non-resolved rivers/inflows in the vicinity of the location. This means that the total runoff to the Baltic Sea/North Sea is correctly represented in the 85 model sources. Initial fields of salinity and temperature, i.e., 3D fields of salinity and temperature within the model domain, have been established based on previous model runs. The present hydrodynamic model setup is an updated version of the model used for the BANSAI project (SMHI 2006). Since the North Sea hydrodynamic model is relatively coarse in the Belt Sea (3 nautical miles cell size), a local hydrodynamic model covering the Belt Sea and the Baltic Sea was also applied. This model has a finer resolution (down to 1 km cell size) in the Belt Sea as illustrated in Fig. 2 and thus represents the flow here in more detail. The model was developed for another purpose, but has been made available for the present purpose (FEHY 2012). Section of the computational mesh for the hydrodynamic model covering the inner Danish waters. Colors indicate depth intervals in meters Agent-based models (ABM) have been widely applied in recent years for simulating a variety of phenomena within very diverse disciplines such as biology and ecology, social sciences, industrial process optimization, traffic infrastructure planning, and the financial sector just to mention a few examples. In ecology, ABMs aim at describing the behavior and state of discrete entities such as individual organisms or groups of organisms (∼superindividuals). One key element of ABMs is the ability to simulate how individuals respond (in terms of behavior and state) to a spatially and temporarily varying environment. When studying the aquatic environment and how small individuals spread within an aquatic system, information on water movement is a fundamental need. Here, 3D hydrodynamical models describing the water currents in high temporal and spatial resolution can provide a very detailed basis for these types of studies. By defining agents as discrete entities with an explicit x-y-z coordinate at any given point in time it is possible to link this type of ABM with a hydrodynamical model and thus combine the current driven movement of agents (=advection/dispersion) with biological movement behavior processes such as horizontal swimming, vertical migration, and age-induced settling. This type of ABM is often referred to as a Lagrangian type ABM and has been applied frequently, e.g., in studies examining the spreading and migration of pelagic larvae and fish (e.g., Goodwin et al. 2001; Humston et al. 2000; Cowen et al. 2006). 
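To make the coupling between the hydrodynamic fields and the Lagrangian agents concrete, a minimal Python sketch of one possible per-agent time step is given below. It is illustrative only and not the ECO Lab implementation used in this study; the interpolate_current function, the parameter values and the agent dictionary layout are assumptions, and mortality and settling can equally well be handled in postprocessing, as is done for scenario 4 later in this paper.

import numpy as np

DT = 3600.0   # agent time step in seconds (hypothetical; the hydrodynamic output is hourly)
K_H = 1.0     # horizontal dispersion coefficient in m^2/s (hypothetical)

def step_agent(agent, interpolate_current, rng):
    """Advance one agent by one time step: advection + random-walk dispersion + behavior."""
    # 1) Advection: interpolate the 3D current field from the hydrodynamic model to the
    #    agent position; interpolate_current is assumed to return (u, v, w) in m/s.
    u, v, w = interpolate_current(agent["x"], agent["y"], agent["z"], agent["t"])

    # 2) Horizontal dispersion as an uncorrelated random walk (sub-grid mixing).
    dx_rand = rng.normal(0.0, np.sqrt(2.0 * K_H * DT))
    dy_rand = rng.normal(0.0, np.sqrt(2.0 * K_H * DT))

    # 3) Optional biological behavior, e.g. random-walk swimming for the fish scenario.
    swim_x = swim_y = 0.0
    if agent["type"] == "fish":
        speed = agent["swim_speed"]                     # hypothetical swimming speed in m/s
        angle = rng.uniform(0.0, 2.0 * np.pi)
        swim_x, swim_y = speed * DT * np.cos(angle), speed * DT * np.sin(angle)

    # 4) Update position and age (coastline, seabed and surface checks omitted here).
    agent["x"] += u * DT + dx_rand + swim_x
    agent["y"] += v * DT + dy_rand + swim_y
    agent["z"] += w * DT
    agent["t"] += DT
    agent["age_days"] += DT / 86400.0

    # 5) State changes: mortality as a daily probability and age-triggered settling.
    if agent["mortality_per_day"] > 0.0:
        if rng.random() < agent["mortality_per_day"] * DT / 86400.0:
            agent["alive"] = False
    if agent["type"] == "larva" and agent["age_days"] >= agent["settling_age_days"]:
        agent["settled"] = True
    return agent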
To simulate the potential spread of marine invasive species deriving from ballast water an (Lagrange type) ABM was developed and applied in combination with the hydrodynamical model as described in the previous section. The ABM framework applied is an integrated part of the ecological modeling software ECO Lab, which is an open equation solver for building and executing biological and ecological models of aquatic systems. The developed ABM is applied here to simulate the spread of agents (or organisms) in the entire model area primarily driven by advective processes predicted by the hydrodynamical model. Three model organisms are chosen to represent examples of major groups of marine organisms likely to be introduced as invasive species through the release of ballast water within the North Sea region. Here "groups of marine organisms" refer to organisms which exhibit common behavioral characteristics. The groups of organisms considered here include representatives of a: (1) planktonic species (purely passive drifters), (2) pelagic larvae of a benthic invertebrate species (passive drifters in combination with settling), and (3) fish species (passive drifters in combination with active swimming activity). We mimic the spread of these three types of model organisms in a very simplistic way by: (1) simulating passive drift in combination with a constant mortality rate, (2) simulating passive drift in combination with active settling activity triggered as a function of age, and (3) simulating passive drift in combination with active horizontal swimming behavior including mortality rate. In addition to these three simulations, as a reference, we simulate passive drifting only subject to advection/dispersion processes. The approach described here is merely an attempt to address the spreading mechanisms and potential of small marine organisms in a general way. The approach is deliberately not addressing species-specific spreading. Also, when considering the risk of introducing invasive species in the marine environment species-specific habitat preferences, life histories and environmental tolerances are essential to understand and predict the ability of introduced species to establish a sustainable population. These issues are not addressed here. However, species-specific traits and the implication for establishing sustainable populations can be simulated by extending the current approach combining hydrodynamic modeling and ABM with habitat maps and/or classical concentration-based (Euler type) ecological modeling. The latter describing any necessary dynamical parameter affecting the organism such as salinity, temperature, dissolved oxygen, food abundance, etc. Unlike the hydrodynamical modeling, due to the nature of the phenomena modeled neither calibration nor validation of the ABM is being considered. This means that the modeling exercise here apart from the hydrodynamical modeling is predominantly theoretical. The ABM approach can be seen as an interpreter to describe how common behavioral characteristics and traits may or may not result in significant deviations from passive drift. The results of the model approach should be evaluated as such. ABM formulation Selected simplistic functional behaviors and life histories used for formulation of the ABM include: (1) Movement in terms of dispersion and active swimming, (2) settling, (3) mortality, and (4) longevity. The background for and the ABM formulations used in this study are described in Appendix 1 in the Online Supplementary Material. 
Please note that this appendix includes additional references not cited in this paper. Connectivity mapping The combination of HD modeling and ABMs (or particle tracking models) has been applied in several studies addressing the degree of connectivity between specific habitats or subregions within marine regions (Cowen et al. 2003, 2006; Paris et al 2005; Christensen et al 2008; Berglunda et al. 2012) However, in most cases, the studies have focused on specific species and connectivity between the species-specific habitats, or in more general connectivity in terms of, e.g., dispersal of passively drifting larvae between specific habitats such as coral reefs, spawning sites, etc. A more general approach applying a combination of hydrodynamical modeling and simple particle tracking has been proposed to establish a framework for identifying connectivities of any sub-region within the South-Western shelf region of Australia (Condie et al. 2006). In short, the particle trajectories from the simulations are treated statistically and translated into probability maps describing the "probability of any two regions within the model domain being connected by the modeled circulation". Probabilities were "computed for a range of dispersion times on a 0.1 degree geographical grid" covering the model domain. The connectivity statistics addresses connectivities in discrete points in time and space as well as statistics aggregated for longer periods (months or quarters). As part of the project, a web-based service was developed where users can select a "source area" from a map interface, select start time, and dispersal duration, and as a result get a connectivity probability map for the selected source area and dispersal duration. In this study, the above-outlined approach has been further developed, e.g., by (1) including not only passive particle tracking but also biological processes by applying ABM techniques and by (2) developing an overall connectivity index or indices compiling all connectivity statistics of each local area into a single value. In order to define "connectivity", we discriminate between two types of connectivities: downstream connectivity and upstream connectivity (Fig. 3). Sketch of the differences between the concepts of downstream connectivity and upstream connectivity Downstream connectivity we define as connectivity between donor areas (or source areas), and surrounding areas (or receiving areas). Here "areas" does not necessarily refer to computational grid cells but rather any areal division of the model domain into, e.g., a regular grid or a number of management units. During a simulation, each agent "visiting" an area at any time will have a distinct trajectory forward in time visiting other areas in the model domain. When simulating a large number of agents, the equivalent large number of trajectories forward in time can be statistically analyzed revealing the probability of areas to supply agents to other areas. This we refer to a downstream connectivity probability. Downstream connectivity answers questions such as "where do the agents go from here?" Upstream connectivity we define as connectivity between receiving areas and source areas. Again "areas" can refer to any areal division of the model domain. During a simulation, each agent "visiting" an area at any time of the simulation will have a distinct trajectory backwards in time having visited other areas in the model domain prior to the registration of the agent in the area analyzed. 
When simulating a large number of agents, the equivalent large number of trajectories backwards in time can be statistically analyzed revealing the probability of areas to receive agents from other areas. This we refer to as an upstream connectivity probability. Upstream connectivity answers questions such as "where do the agents come from?" Both upstream and downstream connectivity probabilities can be evaluated at any dispersal time ranging from seconds to years depending on the organism and dispersal phenomenon considered. Connectivity probability All agent trajectories stored every 6 h of the 1-year simulation period are analyzed statistically. The model domain is divided into 25 × 25 km quadratic grid cells resulting in approximately 1,000 local areas covering the sea area. This is the spatial resolution at which connectivities will be evaluated. For each agent registered in an area, the future and previous location of each agent is registered/tracked considering four dispersal times (forward and backwards in time). This is repeated for every 6 h. For details on the selection of dispersal times, see the next section. To calculate downstream connectivity for the whole simulation period, the numbers of agents remaining in the area as well as appearing in other areas at each of the four dispersal times are counted. Downstream connectivity probabilities are calculated simply by dividing these numbers for each area by the total number of agents registered in the area analyzed. The outcome is a distribution map for each area showing the distribution of probabilities, i.e., the probability of an agent in an area to be registered sometimes in the future in the same area and in each of the surrounding areas. In cases where mortality is included in the scenario, probabilities are weighted according to the likelihood that an agent survives each of the four dispersal times. To calculate upstream connectivity for the whole simulation period for an area, the numbers of agents originating from the same area as well as each of the surrounding areas at each of the four dispersal times backwards in time are counted. Upstream connectivity probabilities are calculated by dividing these numbers by the total number of agents registered in the area analyzed. Also here, the outcome is a distribution map for each area showing the distribution of probabilities, i.e., the probability of an agent in an area having visited each of the surrounding areas including the probability of agents that have remained in the area analyzed during the dispersal times considered. For scenario 1 (passive drifting), scenario 2 (planktonic organism), and scenario 3 (juvenile fish) these procedures are repeated for all 25 × 25 km areas. For scenario 4, the combination of mortality and settling was not simulated in one simulation. Since mortality is high (0.1 per day) and mean settling age is 30 days, only a very little fraction of introduced agents will reach settling age. To achieve sufficient statistical basis for the connectivity analysis, this would require a very large number of agents in the model simulation. For technical reasons, this was not desirable. Instead mortality was taken into account as part of the postprocessing of the model results. For each agent registered in an area every 6 h, the future location where it settles is registered. 
For all agents registered in each area at any time for the whole simulation period, the numbers of agents settled in each of the surrounding areas, including the area analyzed, are counted and the probabilities are calculated by dividing these numbers by the total number of agents registered in the area analyzed. To account for mortality (0.1 per day), the number of agents settled in an area is adjusted using the following equation:

$$ N_{\mathrm{adj}} = N \times (1 - k)^t $$

where N_adj is the adjusted number of settled agents, N is the number of agents settled in an area, k is the mortality rate per day (=0.1 per day, ∼0.1 daily mortality probability), and t is the time from when an agent is registered until it settles. Similarly to the downstream dispersal probabilities, prior to calculating the upstream dispersal probabilities for each area in scenario 4, the numbers of agents originating from each of the surrounding areas are adjusted according to the equation above, N_adj now being the adjusted number of agents at the point in time where agents are discharged into the water, and t the time between the time of discharge and the time the agent settles. For downstream connectivity probabilities in scenario 4, since agents are registered every 6 h, the same agent will be registered every sixth hour starting at the time of its introduction to the model domain until it settles somewhere (on average 30 days after introduction). Every consecutive 6-h time step that the same agent is registered, the time remaining until settling decreases by 6 h. In this way, all the registrations of an agent, each time representing a time and location of release of ballast water, will correspond to the assumption that the larval age classes are uniformly distributed in the ballast water at the time of release. Notice that for scenario 4, while downstream connectivity probabilities reflect the assumption that all age classes are evenly represented in the ballast water at time of discharge, upstream connectivity probabilities reflect the case where organisms are age class zero at the time of discharge.

The outcome of the connectivity probability mapping for all scenarios as described above consists of connectivity probability matrices, equivalent to the distance matrices applied, e.g., as look-up tables for distances in kilometers between major cities or locations in road atlases. Instead of distances in kilometers, the connectivity matrices include numbers representing probabilities of, in the case of downstream connectivity, agents in area A ending up in area B and agents in area B ending up in area A. Contrary to a distance matrix, the two directions have different probabilities. A conceptual example is shown in Table 1. Table 1 Example of a downstream connectivity matrix for 4 areas (1–4). Table values represent probabilities of agents in one area (=source area) ending up in another area (=receiving area). The resulting connectivity matrices will consist of two approximately 1,000 × 1,000 matrices, for downstream and upstream connectivities respectively, for each set of dispersal times. Connectivity probabilities for each area can be extracted from the two matrices to produce connectivity probability maps for downstream and upstream connectivities respectively. These approximately 2,000 maps can be referred to as multilayer connectivity maps or MCMs and are available as Online Supplementary Material.
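As a concrete illustration of how the registrations described above can be turned into a downstream connectivity matrix, the following Python sketch counts, for every source area, where its registered agents are found one dispersal time later and normalizes by the number of registrations; the optional survival weight (1 − k)^t implements the mortality adjustment given above. The data layout, a table of agent positions mapped to area indices every 6 h, is an assumption made for illustration and not the format used by the authors.

import numpy as np

def downstream_connectivity(area_of_agent, dispersal_steps, n_areas,
                            k_per_day=0.0, hours_per_step=6):
    """Downstream connectivity matrix P[i, j]: probability that an agent registered in
    area i is found in area j after the given dispersal time.

    area_of_agent: int array (n_timesteps, n_agents) with the area index of each agent at
                   each 6-h registration; -1 marks agents outside the domain or on land.
    dispersal_steps: dispersal time expressed in 6-h registration steps.
    k_per_day: daily mortality rate; 0 reproduces the purely passive case.
    """
    n_steps, _ = area_of_agent.shape
    counts = np.zeros((n_areas, n_areas))
    totals = np.zeros(n_areas)

    # Survival weight for this dispersal time, (1 - k)^t with t in days.
    t_days = dispersal_steps * hours_per_step / 24.0
    survival = (1.0 - k_per_day) ** t_days

    for step in range(n_steps - dispersal_steps):
        src = area_of_agent[step]                       # where each agent is registered now
        dst = area_of_agent[step + dispersal_steps]     # where it is one dispersal time later
        valid_src = src >= 0
        both = valid_src & (dst >= 0)
        np.add.at(counts, (src[both], dst[both]), survival)
        np.add.at(totals, src[valid_src], 1.0)

    with np.errstate(invalid="ignore", divide="ignore"):
        prob = counts / totals[:, None]
    return np.nan_to_num(prob)   # areas with no registrations get zero rows

# The upstream matrix can be obtained analogously by swapping the roles of src and dst,
# i.e. by counting, for each receiving area, where its registered agents came from.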
Ideally, in cases where all agents simulated as passive drifters (as in scenario 1) and where agents are not subject to mortality, the sum of probability values within each probability map will be 1. However, when mortality is included as part of the ABM, the sum of probabilities will be less than 1, with all probabilities (in case of downstream connectivity) representing the probability of an organism being distributed to other areas a specific point in time ahead. Since connectivity probabilities are expected to vary spatially depending on the dispersal time considered, four dispersal times were selected for each analysis in order to cover a range of dispersal situations for a given organism. These four dispersal times were selected for each analysis from three criteria: (1) ecological relevance, (2) a minimum of 10 % of agents at t = 0 remaining at any dispersal time, and (3) seasons evenly reflected in calculation results. Here "ecological relevance" refers to, e.g., that the dispersal time should lie within the expected life duration and/or the duration of the pelagic stage. For scenario 1 (passive drifting) upstream and downstream connectivity probabilities are calculated for different sets of four dispersal times: 2, 4, 6, and 8 days, 3, 9, 15, and 21 days, and 7, 14, 21, and 28 days. The different sets of dispersal times are used for comparison with scenarios 2, 3, and 4 (see below), and in order to evaluate how selections of different dispersal times may influence the calculated connectivities. A maximum of 28 days were applied primarily to ensure that months and seasons were evenly reflected in calculation results. For scenario 2 (planktonic species) upstream and downstream connectivity probabilities are calculated for four dispersal times: 3, 9, 15, and 21 days. These were selected for the following reasons: since a mortality of 0.1 day−1 is applied, 10 % of the agents at t = 0 remains approximately 24 days later. The 24 days were divided evenly into four time periods (0–6 days, 6–12 days, etc.) and the mean day numbers of each of four 6-day periods were selected as the four dispersal times (∼3, 9, 15, and 21 days). For scenario 3 (fish), upstream and downstream connectivity probabilities are calculated for four dispersal times: 1, 2, 3, and 4 weeks. These were selected for the following reasons: since juvenile fish typically has much lower mortality (here, 0.003 day−1) 10 % of agents will remain after approximately 2 years. Thus, to cover the entire time span, this will require a much longer simulation than the 1-year simulation for the current study. In addition, the need to reflect months and seasons evenly results in a maximum of approximately 1 month dispersal time being selected. Based on these considerations, four weekly dispersal times were selected. For scenario 4 (pelagic larvae of benthic invertebrate) upstream and downstream connectivity probabilities are calculated for the same four dispersal times as for scenario 2. Connectivity indices The primary strength of the connectivity probability matrices described in the previous section is to give detailed information on how connected any two areas are within the modeling domain integrated over time. However, more than 1,000 maps require some kind of simplification in order to provide a more overall measure on how well-connected areas are in general without necessarily providing any information on precisely which areas are interconnected. 
Within ballast water risk assessment, there is a need to distinguish between areas more likely to export organisms far away and/or to a larger area than other areas (∼through downstream connectivity). These areas with high dispersal potential can be perceived as high-risk zones where the release of invasive species through ballast water may have an increased potential of reaching optimal habitat conditions, thereby increasing the likelihood of establishing a population successfully. Here, we propose a methodology to compile all information from the downstream connectivity probability maps into one single map using a simple and transparent approach. Similarly, in terms of upstream connectivity, it is important to identify areas more likely to receive invasive species than others, i.e., acting as sinks. These areas can be identified from upstream connectivity probability matrices as areas with high probabilities of receiving agents from far away and/or from a large area. It is clear that the proposed simplification of connectivities through the development of connectivity indices assumes that agents are distributed randomly within the entire modeling domain, which is not the case when it comes to invasive species from ballast water. However, the indices will give indications of where the release of ballast water may be more likely to result in a significant spread of organisms, and which areas will be more likely to receive invasive species than others. This type of information is important from a management point of view.

For the calculation of connectivity indices, the term "momentum" (M) is introduced. Momentum can be calculated in several ways. The momentum can be calculated for each area and its probability map by simply summing the products of the connectivity probabilities of each surrounding area and the distances to the areas:

$$ M(\mathrm{area}\ i) = \sum_j \mathrm{Prob}(\mathrm{area}\ j) \times D(\mathrm{area}\ j) $$

This approach weights the probabilities with distance only. The results presented in this report apply this formula. However, alternative definitions of "momentum" may be applied to include multiplication of probabilities with areal coverage instead of distance, or a combination of areal coverage and distance, where areal coverage refers to the size of the area covered by, say, the 90 % fractile of probabilities.

The hydrodynamic model has been validated in terms of water level, salinity and temperature at a number of stations. The comparisons generally show fairly good agreement between measurements and the model. A few examples of this validation are given in Appendix 2 (see locations in Fig. 1). The comparisons demonstrate that the model is able to reproduce the water temperatures well within the model domain. Both the variability and the absolute values are captured by the model. The annual cycle of the thermal stratification is described well by the model. Also, the salinity conditions are described well by the model. The model captures the salinity stratification and the intrusions of saline North Sea water through the Belt Sea to the Arkona and Bornholm basins and further into the Gotland Deep in the Baltic Proper. The vertical structure of the water column in the northern North Sea is illustrated in Fig. 4b, which shows the annual mean salinity and water temperature in a vertical section between Scotland and Norway. The lower salinities of surface waters along the Norwegian coast represent the outflowing brackish Baltic water.
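Returning to the momentum index M(area i) defined above: given a downstream connectivity matrix and the distances between the centroids of the 25 × 25 km areas, the index can be computed in a few lines. The sketch below uses straight-line distances between cell centroids in projected (UTM) coordinates; the exact distance measure used in this study is not specified beyond "the distances to the areas", so this choice is an assumption.

import numpy as np

def momentum_downstream(prob, centroids_xy):
    """Momentum M(i) = sum_j Prob(i, j) * D(i, j).

    prob:          (n_areas, n_areas) downstream connectivity probability matrix.
    centroids_xy:  (n_areas, 2) area centroid coordinates in meters (e.g. UTM-32).
    Returns a length-n_areas vector of momentum values (probability-weighted meters).
    """
    diff = centroids_xy[:, None, :] - centroids_xy[None, :, :]
    dist = np.hypot(diff[..., 0], diff[..., 1])      # D(i, j), straight-line distance
    return (prob * dist).sum(axis=1)

# The upstream index is obtained in exactly the same way from the upstream probability matrix.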
a Modeled annual (2005) mean surface currents. b Modeled annual mean water temperature in a vertical section from Scotland to Norway. c The north-south current component in the same vertical section. See Fig. 1 for the location of the vertical section The model includes the tidal currents as well as the meteorologically induced and the baroclinic currents. The modeled annual mean surface currents are shown in Fig. 4a. The mean current may be regarded as the residual currents, which may be expected to be important with respect to connectivity. Significant northward and northeastward residual current is observed in the southeastern and eastern part of the North Sea, whereas the western part displays relatively lower residual currents. In the Belt Sea, an outward (northward) residual surface flow is observed, which transports the brackish Baltic water out into Skagerrak. In Skagerrak, an anticlockwise gyre is observed and a residual flow along the Norwegian coast brings the Baltic surface water into the North Sea and further into the Norwegian Sea. All these features are in accordance with the literature. In the vertical, Fig. 4c shows a marked outward residual current in the upper water column along the Norwegian coast. This represents the outflow of mixed, brackish Baltic water and is consistent with the vertical salinity distribution mentioned above. In other parts of the vertical cross-section, a relatively low, southward residual current is observed. Because the calculation procedures are repeated for each 6-h time step of the 1-year simulation time, each agent will be included multiple times as part of the statistical basis for the analyses. Thus, the total number of agents being analyzed results in a large number of agents as the basis for the statistical probability maps. The statistical analyses of connectivity were done based on results from the ABM simulations carried out for both a regional model for the entire model domain and a local model for the Kattegat, the Belts, and the western part of the Baltic Sea. The statistical basis for the downstream connectivity of scenario 1 is shown in Figs. 5 and 6 for the two models. For the vast majority of the model domain, the statistical basis for each area is more than 1,000 agents. Smaller numbers are found close to the shorelines and in the inner Danish straits. This is partly because a part of the 25 × 25 km squares along shorelines include land and thus the area covered by water is smaller than 25 × 25 km. In addition, the smaller numbers may be a result of more agents being excluded from the simulation because agents closer to land and in shallow areas more likely hit land or seafloor boundaries. Statistical basis for the downstream connectivity probability mapping of scenario 1. Numbers are numbers of agents available for the statistical analysis of the downstream connectivity using the regional model Statistical basis for the downstream connectivity probability mapping of scenario 1. Numbers are numbers of agents available for the statistical analysis of the downstream connectivity using the local model As described in the previous sections, these coastal areas are discarded from the statistical analyses. Also in areas located close to the open boundaries of the model domain, the statistical basis is small because of many agents crossing the open boundaries and subsequently not available for statistical analyses. 
Because of these issues, the robustness of the methodology based on the current simulations is strongest in the open waters of the North Sea, Kattegat, and the eastern Baltic Sea, and may be less robust in some parts of the coastal and shallow waters, and close to the open model boundaries. Robustness can be improved by applying more agents in the inner Danish straits, by improving the model describing the agent trajectories more correctly close to land and seafloor boundaries, and by extending the model domain further out in order to reduce the impact of open model boundaries on connectivity statistics. The statistical basis shown in Figs. 5 and 6 is for scenario 1 for 1 week dispersal time. Similarly, data on statistical basis can be shown for scenarios 2–4 and for each dispersal time. Connectivity probability maps An example of downstream connectivity probability maps for scenario 1 (simple drifting) for one selected area in the North Sea (indicated by red arrows) for four different dispersal times: 1, 2, 3, and 4 weeks is shown below (Fig. 7). Values are probability values between 0 and 1, and the sum of probabilities in each map is 1 since no mortality is considered. Only values greater than 0.01 are shown. In all maps, the probability distributions are highly influenced by north-going currents along the west coast of Jutland showing that probability values greater than 0.01 are dominating in the northern and northeastern directions. Maps also indicate that there are significant differences in probability maps depending on the dispersal time considered—the longer the dispersal time considered, the further away agents move. Downstream connectivity probability maps for one selected area in the North Sea (indicated by red arrows) for four dispersal times: 1, 2, 3, and 4 weeks. Only probability values larger than 0.01 are shown Four probability maps of the four different dispersal times are combined into one probability map (Fig. 8). Because no mortality is included, each dispersal time weighted equally and probabilities for the four dispersal times are simply averaged. Downstream connectivity probability map for one selected area (the same as in Fig. 7) in the North Sea (indicated by red arrow) with aggregated probability values for 1, 2, 3, and 4 weeks' dispersal times All probability maps presented in Figs. 7 and 8 reflect the hydrodynamical variations during 2005 for scenario 1 where simple drifting is simulated. Similar maps for any area within the model domain can be extracted from the downstream connectivity matrices for scenarios 1–4 representing a given dispersal time or a combination of dispersion times. For scenarios where mortality is included, the connectivity probability maps of the combination of the four dispersal times are weighted according to the probability of agents to survive on each of the four dispersal times. This will result in an aggregated probability map where, e.g., 1 week dispersal probabilities will be weighted more than 2-, 3-, and 4-week probability values. Other dispersal times, e.g., 2, 4, 6, and 8 days, will show probability maps with a much narrower distribution of >0.01 probabilities around each area. These are not presented here. Similarly, maps for upstream connectivity probabilities can be extracted for each area representing the probability of agents registered in an area having come from other areas. Results for the upstream analyses are not presented here. 
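The aggregation of the four per-dispersal-time probability maps into one map, described above, can be written compactly as in the sketch below. Without mortality the four maps are simply averaged; with mortality each map is weighted by the probability of surviving the corresponding dispersal time, (1 − k)^t. The normalization of the weights to sum to one is our assumption; the text only states that shorter dispersal times are weighted more heavily.

import numpy as np

def aggregate_probability_maps(maps, t_days, k_per_day=0.0):
    """Combine per-dispersal-time probability maps into one aggregated map.

    maps:      sequence of 2D arrays, one probability map per dispersal time.
    t_days:    dispersal times in days, e.g. [7, 14, 21, 28] or [3, 9, 15, 21].
    k_per_day: daily mortality rate; 0 gives a plain average of the maps.
    """
    weights = np.array([(1.0 - k_per_day) ** t for t in t_days], dtype=float)
    weights /= weights.sum()          # assumed normalization
    return sum(w * m for w, m in zip(weights, np.asarray(maps, dtype=float)))

# Example: for the planktonic scenario with k = 0.1 per day, the 3-day map is weighted
# about 6.7 times more strongly than the 21-day map, since 0.9**3 / 0.9**21 ≈ 6.7.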
Both upstream and downstream probability connectivity maps for scenario 1 aggregated over time for every 25 × 25 km blocks are available on Online Supplementary Material. Downstream momentum was calculated for the four scenarios based on the aggregated connectivity probabilities (=probabilities evaluated based on multiple dispersal times). Notice that momentums for scenario 3 are based on combined 1-, 2-, 3-, and 4-week dispersal times, while for scenarios 2 and 4, momentums are based on combined 3, 9, 15, and 21 days dispersal times. For comparisons, scenario 1 will be evaluated at each of these set of time scales. Downstream momentum for scenario 1 for 1-, 2-, 3-, and 4-weeks' dispersal times are shown in Figs. 9 and 10. Downstream connectivity indices (momentums) for scenario 1 (passive particle tracking). Indices are based on combined 1-, 2-, 3-, and 4-weeks' dispersal times Downstream connectivity indices (momentums) for scenario 1 (passive particle tracking) presented as deviation from mean value in percentages for 1-, 2-, 3-, and 4-weeks' dispersal times The greatest downstream momentum values are found in the northeastern part of the North Sea between the Danish and Norwegian coasts, along the west coast of the Netherlands and in Kattegat and the Danish belts, while large parts of the western part of the North Sea, German Bight, and the Baltic sea east of Bornholm have low connectivities. Downstream momentum for scenario 2 for 3-, 9-, 15-, 21-days' dispersal times are shown in Figs. 11 and 12. Downstream connectivity indices (momentums) for scenario 2 (planktonic organisms). Indices are based on combined 3-, 9-, 15-, and 21-days' dispersal times Downstream connectivity indices (momentums) for scenario 2 (planktonic organism) presented as deviation from mean value in percentages for 3-, 9-, 15-, and 21-days' dispersal times Downstream momentum values of scenario 2 show a similar distribution pattern as in scenario 1. Notice that dispersal time applied for scenarios 1 and 2 are different. For more accurate comparison, see the following section. Downstream momentum for scenario 3 for 1-, 2-, 3-, 4-weeks' dispersal times are shown in Figs. 13 and 14. Downstream connectivity indices (momentums) for scenario 3 (juvenile fish). Indices are based on combined 1-, 2-, 3-, and 4-weeks' dispersal times Downstream connectivity indices (momentums) for scenario 3 (juvenile fish) presented as deviation from mean value in percentages for 1, 2, 3, and 4 weeks' dispersal times Downstream momentum values for scenario 3 show a similar distribution pattern to that in scenario 1. For more accurate comparison see the following section. Downstream momentums for scenario 4 for dispersal times corresponding to the time duration of the time between the registration of each organism until it settles are shown in Figs. 15 and 16. Downstream connectivity indices (momentums) for scenario 4 (pelagic larvae). Indices are based on dispersal times corresponding to the time duration until each individual settles Downstream connectivity indices (momentums) for scenario 4 (pelagic larvae) presented as deviation from mean value in percentages for dispersal times corresponding to the time duration until each individual settles Downstream momentum values for scenario 4 show a similar overall distribution pattern to those seen in scenarios 1, 2, and 3, however with some deviation. For more accurate comparison see the following section. 
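The quantity plotted in Figs. 10, 12, 14, and 16, "deviation from mean value in percentages", is each area's momentum expressed relative to the domain-wide mean; a minimal sketch of the presumable definition:

import numpy as np

def deviation_from_mean_percent(momentum):
    """Momentum per area expressed as percent deviation from the domain mean."""
    momentum = np.asarray(momentum, dtype=float)
    return 100.0 * (momentum - momentum.mean()) / momentum.mean()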
Importance of dispersal time The selection of dispersal time for evaluating connectivity between marine areas is important simply because the longer time we "follow" the dispersal of an organism, the longer distance the organism is likely to travel, given that the organism is still alive. In terms of connectivity indices, this implies that longer dispersal times give greater connectivity indices. However, in addition to the magnitude of the connectivity indices, the spatial pattern of connectivity indices may change. As an example, connectivity indices were calculated for scenario 1 for two sets of dispersal times: 2, 4, 6, 8 days, and 7, 14, 21, 28 days and deviation from the overall mean value were plotted (Fig. 17). Although the overall patterns of the two plots for the two sets of dispersal times are similar, some discrepancies are evident. Some areas show larger deviation from the mean value when evaluated on a longer time span, while others show smaller deviation from the mean value. Downstream connectivity indices (momentums) for scenario 1 (passive particle tracking) presented as deviation from mean value in percentages for 2-, 4-, 6-, and 8-days (left) and 1-, 2-, 3-, and 4-weeks' (right) dispersal times Importance of biological factors As for the selection of dispersal time when evaluating connectivity between marine areas, biological factors including mortality, settling, etc., are important because the longer an organism lives, the longer we can "follow" the dispersal of an organism and the longer distance the organism is likely to travel if flow conditions are suitable. Thus, when applying connectivity probability maps to predict, for instance, the likelihood that an introduced organism in an area will spread to other specific areas, biological factors are important. Biological factors may also have an effect when comparing the connectivities between areas. In Figs. 18 and 19 are shown the comparisons for scenarios 1 and 2, and scenarios 1 and 3, respectively, for the calculated deviation from the mean value of the connectivity indices. Comparisons between scenarios 1 and 2 are done for 3-, 9-, 15-, 21-days' dispersal time, and comparisons between scenarios 1 and 3 are done for 7-, 14-, 21-, 28-days' dispersal times. Both comparisons show a similar overall pattern in the variability of connectivity indices. However, scenario 2 shows some differences compared to scenario 1. For instance, connectivity indices in the Danish belts show significantly lower deviation from mean than in scenario 1. This indicates that when evaluating the connectivity of specific marine areas for organisms with high mortality, it may be important to consider their mortality when simulating the spread of organisms. Scenario 3 on the contrary shows very little deviation from scenario 1 indicating that swimming behavior simulated as random walk has no major effect on the spatial variability in connectivity. Comparison of downstream connectivity indices (momentums) for scenario 1 (passive particle tracking) and scenario 2 (planktonic organism. Left panel Scenario 2 (planktonic organism). Right panel Scenario 1 (passive particle tracking). Presented as deviation from mean value in percentages for 3-, 9-, 15-, 21-days' dispersal times Comparison of downstream connectivity indices (momentums) for scenario 1 (passive particle tracking) and scenario 3 (juvenile fish). Left panel Scenario 3 (juvenile fish). Right panel Scenario 1 (passive particle tracking). 
Presented as deviation from mean value in percentages for 1-, 2-, 3-, 4-weeks' dispersal times For scenario 4, comparisons are done with scenario 2, not scenario 1, since scenario 4 is not evaluated at selected dispersal times, but rather at dispersal times corresponding to the time duration from the time of registration of an agent in an area until it settles. To get an idea of how settling affects the momentum, a comparison of "deviations from mean" between scenarios 2 and 4 is shown (Fig. 20). Differences between scenarios 2 and 4 are the settling introduced in scenario 4 and that momentums of scenario 2 are evaluated at discrete dispersal times (3, 9, 15 and 21 days). Here, as for scenarios 2 and 3, the main overall patterns in the variability of connectivity indices are maintained, with some deviations locally. Comparison of downstream connectivity indices (momentums) for scenario 2 (planktonic organisms) and scenario 4 (pelagic larvae of benthic invertebrate). Left panel Scenario 4 (pelagic species). Right panel Scenario 2 (planktonic organism). Presented as deviation from mean value in percentages. Dispersal times evaluated for scenario 2 are 3, 9, 15, and 21 days and for scenario 4 the time duration until each individual settles. Momentum—upstream Upstream connectivity indices are only briefly presented here since the main focus of this study is to identify areas where the release of ballast water may have a high potential for spreading to other parts of the North Sea region. We will not go into detail on how dispersal time or biological factors may affect the outcome of the upstream connectivity analysis. However, in general, the upstream connectivity indices show similar spatial variability between areas, clearly identifying areas more likely to receive ballast-water-derived organisms from far away. Figure 21 shows the upstream connectivity indices for scenario 1 (passive particle tracking) based on 1-, 2-, 3-, 4-weeks' dispersal times. Upstream connectivity indices (momentums) for scenario 1 (passive particle tracking). Indices are based on combined 1-, 2-, 3-, and 4-weeks' dispersal times. Areas with the highest values of upstream connectivity are primarily areas of the deeper part of the waters between northwestern Jutland and Norway and along the southwestern Norwegian coast. The values are highly distinct from other marine areas and this indicates that these areas serve as major sinks or receiving water bodies of passively transported agents. Again, as for the downstream connectivity, the lowest values are found in particular in the western part of the North Sea, in the German Bight and the Baltic Sea east of Bornholm. Intermediate values are found in the eastern part of the North Sea, in Kattegat, the Danish straits including the Fehmarn Belt, and in the western parts of the Baltic Sea. Conclusions and outlook The analyses carried out for this study show that when evaluating the connectivity of marine waters of the North Sea region including Skagerrak, Kattegat, the Danish belts, and the western parts of the Baltic Sea, the hydrodynamics seem to play the most important role when considering small organisms with limited ability to perform significant autonomous movement behavior.
Despite some differences in calculated connectivities due to biological factors and the choice of dispersal time applied for the connectivity analyses, the overall pattern of the variability of connectivity indices shows, at least at this preliminary stage of analysis and at a regional level, a rather unambiguous indication that biological factors and dispersal time are only of secondary importance when ranking areas according to their degree of connectivity. At the local level, however, under some conditions, biological factors as well as the choice of dispersal time applied may play an important role. We recommend that additional biological factors that could have a potential impact on the connectivity of marine areas should be identified and tested to sustain (or contradict) these conclusions. These may include: (1) simulation of diurnal vertical migration of planktonic species, (2) simulation of oriented swimming behavior of juvenile fish, and (3) analyses of differences in connectivity between seasons. Also, the methodology proposed here for calculating connectivity (as momentums), solely depending on the distance traveled by each simulated organism during a number of dispersal times, needs to be consolidated. As an example, the inclusion of the area covered by, e.g., a 90 % fractile of connectivity probabilities could be considered. In addition to connectivity indices, the downstream connectivity probability maps for each individual area have a very high potential for predicting the probability of a ballast-water-derived organism ending up at a specific location, e.g., locations identified as being especially sensitive or locations providing suitable habitat for specific species. Similarly, upstream connectivity probability maps for a specific area (a given habitat, Marine-Protected Areas, etc.) predict which areas may potentially contribute ballast-water-derived organisms. The primary aim of the methodology presented here is to demonstrate how to apply a combination of agent-based modeling and hydrodynamical modeling to describe and develop measures for the inter-connectivity of marine ecosystems and its potential application for ballast water risk assessment. During this work, a number of issues have been identified on how to improve the methodology: (1) the hydrodynamical model was not developed specifically to address these issues; in particular, near-shore hydrodynamics are important to avoid agents being "captured" in still water, which in most cases may be a model resolution artifact rather than a true hydrodynamical phenomenon, (2) agent-based models may be further developed to be species-specific, including habitat preferences, environmental stresses and population dynamics, and (3) inclusion of water quality modeling will improve prediction of the effects of environmental stressors on agents. In case our prototype tool is further developed, we foresee that it may be useful for at least three other purposes:

Data layers for assessments of cumulative human pressures and impacts (sensu Halpern et al. 2008). Mapping of cumulative pressures and impacts relies on ecologically relevant data layers, both for pressures and ecosystem components. The downstream connectivity index presented in this study could potentially be regarded as a pressure layer representing likely dispersal routes of introduced alien species.
According to the EU Marine Strategy Framework Directive (Anon 2008), which is based on the Ecosystem Approach to management of human activities, all "marine" EU Member States are required to map cumulative pressures (see examples in Korpinen et al. 2012 and Andersen et al. 2013). The mapping carried out so far in Europe does not, as far as we know, take into account alien species and their potential dispersal routes. It would in our opinion be an important step forward towards full implementation of the Ecosystem Approach if both alien species and dispersal routes are included in the next generation of cumulative impact assessments.

Marine Spatial Planning, including zoning and site selection, i.e., for future designation and design of Marine-Protected Areas taking the identification of so-called "Blue Corridors" into consideration (Martin et al. 2006). We foresee that the connectivity index can be modified and used for estimation of where species-specific "Blue Corridors" might be located. If so, the index developed and presented in this study could be useful for evidence-based design and designation of networks of Marine-Protected Areas in regional seas or at subregional scale.

Ballast water risk assessments, especially in regard to exemptions from the Ballast Water Management Convention (BWMC), adopted in 2004 (IMO 2004). Exemptions can be granted to ships sailing routinely between specific ports or locations and are based on a risk assessment (RA) using the best available scientific information (MEPC 2007). In our opinion, the approach presented and discussed in this study potentially sets a new standard for what can be done when assessing the risk. For example, the modeling setup can be modified to specific local conditions (salinity, temperature, dominating currents, etc.) and targeted to specific organisms of interest.

In conclusion, we have developed a prototype Decision Support Tool for modeling of risks of spreading of introduced alien species via ballast water. The prototype tool reported is based on two types of models and a postprocessing activity: Firstly, a 3D hydrodynamical model calculates the currents in the North Sea and Danish Straits. Secondly, an agent-based model estimates the dispersal of selected model organisms with the current regime calculated by the 3D model. Thirdly, scenarios of dispersal are combined into an interim estimate of connectivity within the study area. The prototype tool should in our opinion be regarded as a platform for further development and testing. However, it can in its present form be used for interim estimates of connectivity and hence as a tool for assessment of potential risk associated with intentional or unintentional discharges of ballast water. The tool can also be used for other purposes, e.g., in regard to ecosystem-based management and the implementation of the EU Marine Strategy Framework Directive.

Andersen JH, Stock A, Heinänen S, Mannerla M, Vinther M. (2013): Human uses, pressures and impacts in the eastern North Sea. Aarhus University, DCE - Danish Centre for Environment and Energy. Technical Report from DCE - Danish Centre for Environment and Energy No. 18. 134 pp. http://www2.dmu.dk/Pub/TR18.pdf. Accessed 2 May 2014 Anon. (2008) Directive 2008/56/EC of the European Parliament and the Council of 17 June 2008 establishing a framework for community action in the field of marine environmental policy (Marine Strategy Framework Directive).
Official Journal of the European Union, Brussels, L 164/19 Bax N, Williamson A, Aguero M, Gonzalez E, Geeves W (2003) Marine invasive alien species: a threat to global biodiversity. Mar Pol 27(4):313–323 Berglunda M, Jacobia MN, Jonsson PR (2012) Optimal selection of marine protected areas based on connectivity and habitat quality. Ecol Mod 240:105–112 Christensen A, Jensen H, Mosegaard H, St. John M, Schrum C (2008) Sandeel (Ammodytes marinus) larval transport patterns in the North Sea from an individual-based hydrodynamic egg and larval model. Can J Fish Aquat Sci 65:1498–1511 Condie S, Andrewartha J, Mansbridge J, Waring J (2006) Modelling circulation and connectivity on Australia's North West Shelf. North-West Shelf Joint Environmental Management Study, Technical Report no. 6, 71 pp Cowen RK, Paris CB, Olson DB, Fortuna JL (2003) The role of long distance dispersal versus local retention in replenishing marine populations. Gulf and Caribbean Research Supplement. Gulf Caribbean Res 14(2):129–138 Cowen RK, Paris CB, Srinivasan A (2006) Scaling of connectivity in marine populations. Science 311:522 DHI (2009) MIKE 3 Flow Model, Hydrodynamic Module, Scientific Documentation. MIKE by DHI 2009, 58 pp DHI (2011) MIKE 21 and MIKE 3 Flow Model FM, Hydrodynamic and Transport Module, Scientific Documentation. MIKE by DHI 2011, 58 pp FEHY (2012) Fehmarnbelt Fixed Link EIA. Hydrography of the Fehmarnbelt area – Impact Assessment. Report No. E1TR0058 Volume II, 131 pp Gollasch S, Haydar D, Minchin D, Wolff WJ, Reise K (2009) Introduced aquatic species of the North Sea coasts and adjacent brackish waters. Ecol Stud 204:507–528 Goodwin A, Nestler JM, Loucks DP, Chapman RS (2001) Simulating mobile populations in aquatic ecosystems. Journal of Water Resources Planning and Management, November/December Halpern BS, Walbridge S, Selkoe KA, Kappel CV, Micheli F, D'Agrosa C, Bruno JF, Casey KS, Ebert C, Fox HE, Fujita R, Heinemann D, Lenihan HS, Madin EMP, Perry MT, Selig ER, Spalding M, Steneck R, Watson R (2008) A global map of human impacts on marine ecosystems. Science 319:948–952 Hayward PJ, Ryland JS (1995) Handbook of the marine fauna of north-west Europe, XI. Oxford University Press, Oxford, 800 pp HELCOM (2010) Ecosystem Health of the Baltic Sea. HELCOM Initial Holistic Assessment 2003-2007. Balt. Sea Env. Proc. 122. Helsinki Commission. 63 pp. http://www.helcom.fi/stc/files/Publications/Proceedings/bsep122.pdf. Accessed 2 May 2014. Humston R, Ault JS, Lutcavage M, Olson DB (2000) Schooling and migration of large pelagic fishes relative to environmental cues. Fish Oceanogr 9(2):136–146 IMO (2004) Adoption of the final act and any instruments, recommendations and resolutions resulting from the work of the conference. International convention for the control and management of ships' ballast water and sediments, 2004. Adopted 16 February 2004. 36 pp Korpinen S, Meski L, Andersen JH, Laamanen M (2012) Human pressures and their potential impact on the Baltic Sea ecosystem. Ecol Ind 15:105–114 Leppäkoski E, Gollasch S, Olenin S (2003) Invasive aquatic species of Europe. Distribution, impacts and management. Kluwer, Dordrecht Martin G, Makinen A, Andersson Å, Dinesen GE, Kotta J, Hansen J, Herkül K, Ockelmann KW, Nilsson P, Korpinen S (2006) Literature review of the "Blue Corridors" concept and it's applicability to the Baltic Sea. BALANCE Interim report No. 4, 67 pp. http://balance-eu.org/xpdf/balance-interim-report-no-4.pdf. 
Accessed 2 May 2014 MEPC (2007) Guidelines for risk assessment under regulation A-4 of the BWM convention (G7). Annex 2 adopted 13 July 2007. The Marine Environment Protection Committee, IMO. 16 pp Misund OA, Skjoldal HR (2005) Implementing the ecosystem approach: experiences from the North Sea, ICES, and the Institute of Marine Research. Norway Mar Ecol Prog Ser 300:241–296 OSPAR (2010) Quality Status Report 2010. OSPAR Commission. 175 pp. http://qsr2010.ospar.org/en/index.html. Accessed 2 May 2014 Paris CB, Cowen RK, Claro R, Lindeman KC (2005) Larval transport pathways from Cuban snapper (Lutjanidae) spawning aggregations based on biophysical modelling. Mar Ecol Prog Ser 296:93–106 Ruiz GM, Carlton JT (Eds) (2003) Invasive species: Vectors and management strategies. Island Press, 509 pp SMHI (2006) The year 2005. An environmental status report of the Skagerrak, Kattegat and the North Sea. The Baltic and North Sea Marine Environmental Modelling Assessment Initiative (BANSAI). SMHI, IMR and DHI. Published by SMHI. 28 pp This study is based on work done under The Ballast Water Opportunity project, which is co-funded by the INTERREG IVB North Sea Region Programme of the European Regional Development Fund (ERDF), the Danish Nature Agency and DHI. The authors would like to thank Johnny Reker, Joachim Raben-Levetzau, Flemming Møhlenberg, and Ulrik Berggren for constructive discussions as well as Ciarán Murray for language checking. Hong D. Vo Present address: , Queensland, Australia DHI, Hørsholm, Denmark Flemming T. Hansen, Michael Potthoff & Thomas Uhrenholdt World Maritime University, Malmø, Sweden Olof Linden Aarhus University, Aarhus, Denmark Jesper H. Andersen Flemming T. Hansen Michael Potthoff Thomas Uhrenholdt Correspondence to Flemming T. Hansen. Below is the link to the electronic supplementary material. Appendix 1.1 ABM formulation in this study (PDF 379 kb) Comparison of measured and modelled salinity (PDF 385 kb) Maps of estimated downstream connectivity (PDF 58753 kb) Maps of estimated upstream connectivity (PDF 58357 kb) Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. Hansen, F.T., Potthoff, M., Uhrenholdt, T. et al. Development of a prototype tool for ballast water risk management using a combination of hydrodynamic models and agent-based modeling. WMU J Marit Affairs 14, 219–245 (2015). https://doi.org/10.1007/s13437-014-0067-8 Issue Date: October 2015 ballast water agent-based modelling individual-based modelling ecological connectivity water framework directive MIKE 3 ABM Lab
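To make the post-processing step described in the conclusions above more concrete, the sketch below shows one simple way of turning agent-based dispersal scenarios into an interim connectivity matrix. It is illustrative only and is not code from the prototype tool; in particular, the assumed data layout (one release-area index and one settlement-area index per agent, with -1 for agents lost from the domain) and the normalization by the number of released agents are assumptions made for this example.

import numpy as np

# Illustrative sketch only (not the DHI prototype tool): combine the outcome of
# agent-based dispersal scenarios into an interim connectivity matrix.
def connectivity_matrix(release_area, settlement_area, n_areas):
    """C[i, j] = fraction of agents released in area i that ended up in area j."""
    counts = np.zeros((n_areas, n_areas))
    for i, j in zip(release_area, settlement_area):
        if j >= 0:                       # j == -1 means the agent was lost from the domain
            counts[i, j] += 1.0
    released = np.bincount(release_area, minlength=n_areas).astype(float)
    return counts / np.maximum(released, 1.0)[:, None]

# toy scenario: 5 areas, 1000 agents with random release and settlement positions
rng = np.random.default_rng(42)
released_in = rng.integers(0, 5, size=1000)
settled_in  = rng.integers(-1, 5, size=1000)     # -1 = lost / not settled
C = connectivity_matrix(released_in, settled_in, n_areas=5)
print(np.round(C, 2))

Row i of the resulting matrix can be read as the downstream connectivity of release area i, and column j as the upstream connectivity of area j; species-specific behaviour would enter through the agent-based model that produces the input data, as described in the study.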
Here we describe a few terms often used in the context of FLEUR calculations: atomic units, core levels, distance (charge density), energy parameters, Fermi level, interstitial region, lattice harmonics, local orbitals, magnetic moment, muffin-tin sphere.

Atomic units: Almost all input and output in the FLEUR code is given in atomic units, with the exception of the U and J parameters for the LDA+U method in the input file, and of the band-structure and DOS output files, where the energy unit is eV.
energy units: 1 Hartree (htr) = 2 Rydberg (Ry) = 27.21 electron volt (eV)
length units: 1 bohr (a.u.) = 0.529177 Ångström = 0.0529177 nm
electron mass, charge and Planck's constant h/2π (ℏ) are unity
speed of light = e²/(ℏα) = 1/α; fine-structure constant α: 1/α = 137.036

Band gap: The band gap printed in the output (out file) of the FLEUR code is the energy separation between the highest occupied Kohn-Sham eigenvalue and the lowest unoccupied one. Generally this value differs from the physical band gap, or the optical band gap, because Kohn-Sham eigenvalues are, strictly speaking, Lagrange multipliers and not quasiparticle energies (see e.g. Perdew & Levy, PRL 51, 1884 (1983)).

Core levels: States which are localized near the nucleus and show no or negligible dispersion can be treated in an atomic-like fashion. These core levels are excluded from the valence electrons and are not described by the FLAPW basis functions. Nevertheless, their charge is determined at every iteration by solving a Dirac equation for the actual potential. Either a radially symmetric Dirac equation is solved (one for spin-up, one for spin-down) or, if kcrel=1 in the input file, a magnetic version (cylindrical symmetry) is solved.

Distance (charge density): In an iteration of the self-consistency cycle, an output density ρ_out is calculated from a given input charge density ρ_in. As a measure of how different these two densities are, the distance of charge densities (short: distance, d) is calculated. It is defined as the integral over the unit cell

d = ∫ || ρ_in − ρ_out || d³r

and gives an estimate of whether self-consistency is approached or not. Typically, values of 0.001 milli-electron per unit volume (a.u.³) are small enough to ensure that most properties have converged. You can find this value in the out file, e.g. by "grep dist out" (a small illustrative script is also given at the end of this page). In spin-polarized calculations, distances for the charge and spin density are provided; for non-collinear magnetism calculations there are even three components. Likewise, in an LDA+U calculation a distance of the density matrices is given.

Energy parameters: To construct the FLAPW basis functions such that only the relevant (valence) electrons are included (and not, e.g., 1s, 2s, 2p for a 3d metal), we need to specify the energy range of interest. Depending slightly on the shape of the potential and the muffin-tin radius, each energy corresponds to a certain principal quantum number "n" for a given "l". E.g., if for a 3d transition metal all energy parameters are set to the Fermi level, the basis functions should describe the valence electrons 4s, 4p, and 3d. Energy parameters are also defined for the vacuum region; if more than one principal quantum number per "l" is needed, local orbitals can be specified.

Fermi level: In a calculation, this is the energy of the highest occupied eigenvalue (or sometimes the lowest unoccupied eigenvalue, depending on the "thermal broadening", i.e. numerical issues). In a bulk calculation, this energy is given relative to the average value of the interstitial potential; in a film or wire calculation, it is relative to the vacuum zero.

Interstitial region: Every part of the unit cell that belongs neither to the muffin-tin spheres nor to the vacuum region. Here, the basis (as well as the charge density and the potential) is described by 3D plane waves.

Lattice harmonics: Symmetrized spherical harmonics. According to the point group of the atom, only certain linear combinations of spherical harmonics are possible. A list of these combinations can be found in the initial section of the out file.

Local orbitals: To describe states outside the valence energy window, it is recommended to use local orbitals. This can be useful for lower-lying semicore states as well as for unoccupied states (note, however, that this just enlarges the basis set and does not cure DFT problems with unoccupied states).

Magnetic moment: The magnetic (spin) moment can be defined as the difference between "spin-up" and "spin-down" charge, either in the entire unit cell or in the muffin-tin spheres. Both quantities can be found in the out file, the latter explicitly marked by "--> mm", while the former has to be calculated from the charge analysis (at the end of this file). The orbital moments are found next to the spin moments when SOC is included in the calculation. They are only well defined in the muffin-tin spheres, as

m_orb = μ_B Σ_i ⟨φ_i | r × v | φ_i⟩.

In a collinear calculation, the spin direction without SOC is arbitrary, but assumed to be in the z-direction. With SOC, it is in the direction of the specified spin-quantization axis, and the orbital moment is projected on this axis. In a non-collinear calculation, the spin directions are given explicitly in the input file.

Muffin-tin sphere: A spherical region around an atom. The muffin-tin radius is an important input parameter. The basis inside the muffin-tin sphere is described by spherical harmonics times a radial function; this radial function is given numerically on a logarithmic grid. The charge density and potential here are also described by radial functions times the lattice harmonics.
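As a small illustration of the distance criterion defined above (this script is not part of FLEUR; the grid spacing, the toy densities and the per-volume normalization are assumptions made only for the example), the following Python snippet evaluates d for two densities sampled on the same uniform real-space grid and compares it with the 0.001 me/a.u.³ rule of thumb quoted above.

import numpy as np

# Minimal sketch (not FLEUR source code): evaluate the "distance of charge densities"
#   d = integral over the unit cell of |rho_in(r) - rho_out(r)|,
# here reported per unit of cell volume in milli-electrons per a.u.^3.
def density_distance(rho_in, rho_out, voxel_volume, cell_volume):
    integral = np.sum(np.abs(rho_in - rho_out)) * voxel_volume   # electrons
    return 1000.0 * integral / cell_volume                       # me / a.u.^3

# toy densities standing in for the input/output of one self-consistency iteration
shape   = (24, 24, 24)
voxel   = 0.1 ** 3                       # a.u.^3 per grid point (made-up spacing)
cell    = voxel * np.prod(shape)
rng     = np.random.default_rng(0)
rho_in  = rng.random(shape)
rho_out = rho_in + 1e-7 * rng.standard_normal(shape)

d = density_distance(rho_in, rho_out, voxel, cell)
print(f"distance = {d:.2e} me/a.u.^3 -> converged: {d < 0.001}")

In an actual calculation one would instead read the distance directly from the out file ("grep dist out"), which for spin-polarized setups also reports the separate charge- and spin-density distances.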
\begin{document} \title{Jump processes as Generalized Gradient Flows} \author{Mark A.\ Peletier} \address{M.\ A.\ Peletier, Department of Mathematics and Computer Science and Institute for Complex Molecular Systems, TU Eindhoven, 5600 MB Eindhoven, The Netherlands} \email{M.A.Peletier\,@\,tue.nl} \author{Riccarda Rossi} \address{R.\ Rossi, DIMI, Universit\`a degli studi di Brescia. Via Branze 38, I--25133 Brescia -- Italy} \email{riccarda.rossi\,@\,unibs.it} \author{Giuseppe Savar\'e} \address{G.\ Savar\'e, Dipartimento di Matematica ``F.\ Casorati'', Universit\`a degli studi di Pavia. Via Ferrata 27, I--27100 Pavia -- Italy} \email{giuseppe.savare\,@\,unipv.it} \author{Oliver Tse} \address{O.\ Tse, Department of Mathematics and Computer Science, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands} \email{o.t.c.tse\,@\,tue.nl} \begin{abstract} We have created a functional framework for a class of non-metric gradient systems. The state space is a space of nonnegative measures, and the class of systems includes the Forward Kolmogorov equations for the laws of Markov jump processes on Polish spaces. This framework comprises a definition of a notion of solutions, a method to prove existence, and an archetype uniqueness result. We do this by using only the structure that is provided directly by the dissipation functional, which need not be homogeneous, and we do not appeal to any metric structure. \end{abstract} \maketitle \tableofcontents \section{Introduction} The study of dissipative variational evolution equations has seen a tremendous activity in the last two decades. A general class of such systems is that of \emph{generalized gradient flows}, which formally can be written as \begin{equation} \label{eq:GGF-intro-intro} \dot \rho = {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta \color{black} {\mathsf R}^*(\rho,-{\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\rho {\mathsf E}(\rho)) \end{equation} in terms of a \emph{driving functional} ${\mathsf E}$ and a \emph{dual dissipation potential} ${\mathsf R}^* = {\mathsf R}^*(\rho,\upzeta)$, where ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\rho$ denote derivatives with respect to $\upzeta$ and $\rho$. The most well-studied of these are classical gradient flows~\cite{AmbrosioGigliSavare08}, for which $\upzeta \mapsto {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta {\mathsf R}^*(\rho,\upzeta) = \mathbb K(\rho) \upzeta$ \color{black} is a linear operator $\mathbb K(\rho)$, and rate-independent systems~\cite{MielkeRoubicek15}, for which $\upzeta\mapsto {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta{\mathsf R}^*(\rho,\upzeta)$ \color{black} is zero-homogeneous. However, various models naturally lead to gradient structures that are neither classic nor rate-independent. For these systems, the map $\upzeta\mapsto {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta{\mathsf R}^*(\rho,\upzeta)$ \color{black} is neither linear nor zero-homogeneous, and in many cases it is not even homogeneous of any order. 
Some examples are \begin{enumerate} \item Models of chemical reactions, where ${\mathsf R}^*$ depends exponentially on $\upzeta$~\cite{Feinberg72,Grmela10,ArnrichMielkePeletierSavareVeneroni12,LieroMielkePeletierRenger17}, \item The Boltzmann equation, also with exponential ${\mathsf R}^*$~\cite{Grmela10}, \item Nonlinear viscosity relations such as the Darcy-Forchheimer equation for porous media flow~\cite{KnuppLage95,GiraultWheeler08}, \item Effective, upscaled descriptions in materials science, where the effective potential~${\mathsf R}^*$ arises through a cell problem, and can have many different types of dependence on~$\upzeta$ \cite{ElHajjIbrahimMonneau09,PerthameSouganidis09,PerthameSouganidis09a,MirrahimiSouganidis13,LieroMielkePeletierRenger17,DondlFrenzelMielke18TR,PeletierSchlottke19TR,MielkeMontefuscoPeletier20TR}, \item Gradient structures that arise from large-deviation principles for sequences of stochastic processes, in particular jump processes~\cite{MielkePeletierRenger14,MielkePattersonPeletierRenger17}. \end{enumerate} The last example is the inspiration for this paper. Regardless whether ${\mathsf R}^*$ is classic, rate-independent, or otherwise, equation~\eqref{eq:GGF-intro-intro} typically is only formal, and it is a major mathematical challenge to construct an appropriate functional framework for this equation. Such a functional framework should give the equation a rigorous meaning, and provide the means to prove well-posedness, stability, regularity and approximation results to facilitate the study of the equation. For classical gradient systems, in which ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta{\mathsf R}^*$ is linear and ${\mathsf R}^*$ is quadratic in $\upzeta$ (therefore also called `quadratic' gradient systems) and when ${\mathsf R}^*$ generates a metric space, a rich framework has been created by Ambrosio, Gigli, and Savar\'e~\cite{AmbrosioGigliSavare08}. For rate-independent systems, in which ${\mathsf R}^*$ is $1$-homogeneous in $\upzeta$, the complementary concepts of `Global Energetic solutions' and `Balanced Viscosity solutions' give rise to two different frameworks~\cite{MielkeTheilLevitas02,Dal-MasoDeSimoneMora06,MielkeRossiSavare12a,MRS13,MielkeRoubicek15}. For the examples (1--5) listed above, however, ${\mathsf R}^*$ is not homogeneous in~$\upzeta$, and neither the rate-independent frameworks nor the metric-space theory apply. Nonetheless, the existence of such models of real-world systems with a formal variational-evolutionary structure suggests that there may exist a functional framework for such equations that relies on this structure. In this paper we build exactly such a framework for an important class of equations of this type, those that describe Markov jump processes. We expect the approach advanced here to be applicable to a broader range of systems. \subsection{Generalized gradient systems for Markov jump processes} Some generalized gradient-flow structures of evolution equations are generated by the large deviations of an underlying, more microscopic stochastic process~\cite{AdamsDirrPeletierZimmer11,AdamsDirrPeletierZimmer13,DuongPeletierZimmer13,MielkePeletierRenger14,MielkePeletierRenger16,LieroMielkePeletierRenger17}. This explains the origin and interpretation of such structures, and it can be used to identify hitherto unknown gradient-flow structures~\cite{PeletierRedigVafayi14,GavishNyquistPeletier19TR}. 
It is the example of Markov \emph{jump} processes that inspires the results of this paper, and we describe this example here; nonetheless, \color{black} the general setup that starts in Section~\ref{ss:assumptions} has wider application. We think of Markov jump processes as jumping from one `vertex' to another `vertex' along an `edge' of a `graph'; we place these terms between quotes because the space $V$ of vertices may be finite, countable, or even uncountable, and similarly the space $E:= V\times V$ of edges may be finite, countable, or uncountable (see Assumption~\ref{ass:V-and-kappa} below). In this paper, $V$ is a standard Borel space. The laws of such processes are time-dependent measures $t\mapsto \rho_t\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ (with ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ the space of positive finite Borel \color{black} \normalcolor measures---see Section~\ref{ss:3.1}). These laws satisfy the Forward Kolmogorov equation \begin{align}\label{eq:fokker-planck} \partial_t\rho_t = Q^*\rho_t, \qquad (Q^*\rho)(\mathrm{d} x) = \int_{y\in V} \rho(\mathrm{d} y) \kappa(y,\mathrm{d} x) - \rho(\mathrm{d} x)\int_{y\in V} \kappa(x,\mathrm{d} y). \end{align} Here $Q^*:{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(V)\to {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(V)$ is the dual of the infinitesimal generator $Q:\mathrm{B}_{\mathrm b}(V)\to \mathrm{B}_{\mathrm b}(V)$ of the process, which for an arbitrary bounded Borel function $\varphi\in \mathrm{B}_{\mathrm b}(V)$ \normalcolor is given by \begin{equation} \label{eq:def:generator} (Q\varphi)(x) = \int_V [\varphi(y)-\varphi(x)]\,\kappa(x,\mathrm{d} y). \end{equation} The jump kernel $\kappa$ in these definitions characterizes the process: $\kappa(x,\cdot)\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ is the infinitesimal rate of jumps of a particle from the point $x$ to points in $V$. Here we address \color{black} the reversible case, which means that the process has an invariant measure $\pi\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$, i.e., $Q^*\pi=0$, and that the joint measure $\pi(\mathrm{d} x) \kappa(x,\mathrm{d} y)$ is symmetric in $x$ and~$y$. In this paper we consider evolution equations of the form~\eqref{eq:fokker-planck} for the nonnegative measure $\rho$, as well as various linear and nonlinear generalizations. We will view them as gradient systems of the form~\eqref{eq:GGF-intro-intro}, and use this gradient structure to study their properties. The gradient structure for equation~\eqref{eq:fokker-planck} consists of the state space ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$, a driving functional $\mathscr E:{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)\to[0,{+\infty}]$, and a dual dissipation potential $\mathscr R^*:{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)\times \mathrm{B}_{\mathrm b}(E)\to[0,{+\infty}]$ (where $\mathrm{B}_{\mathrm b}(E)$ denotes the space of bounded Borel functions on $E$). We now describe this structure in formal terms, and making it rigorous is one of the aims of this paper. 
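Before doing so, it may help to record the simplest possible instance (the rates below are chosen arbitrarily and this elementary example is only meant as an illustration). Let $V=\{1,2\}$ with $\kappa(1,\cdot)=a\,\delta_{2}$ and $\kappa(2,\cdot)=b\,\delta_{1}$ for some $a,b>0$. Writing $\rho_t=(\rho_{t,1},\rho_{t,2})$, the Forward Kolmogorov equation \eqref{eq:fokker-planck} reduces to the linear system
\begin{equation*}
 \partial_t\rho_{t,1}=b\,\rho_{t,2}-a\,\rho_{t,1},\qquad
 \partial_t\rho_{t,2}=a\,\rho_{t,1}-b\,\rho_{t,2},
\end{equation*}
whose invariant measures are the positive multiples of $\pi=(b,a)$; since $\pi_1\kappa(1,\{2\})=\pi_2\kappa(2,\{1\})=ab$, the joint measure $\pi(\mathrm{d} x)\kappa(x,\mathrm{d} y)$ is symmetric in $x$ and $y$, so that $(\pi,\kappa)$ satisfies detailed balance.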
The functional that drives the evolution is the relative entropy with respect to the invariant measure $\pi$, namely \begin{equation} \label{eq:def:S} \mathscr E(\rho) = \relax\mathscr F_{\upphi}(\rho|\pi):= \begin{cases} \displaystyle \relax \int_{V} \upphi\bigl(u(x)\bigr) \pi(\mathrm{d} x) & \displaystyle \text{ if } \rho \ll \pi, \text{ with } u =\frac{\mathrm{d} \rho }{\mathrm{d} \pi}, \\ {+\infty} & \text{ otherwise}, \end{cases} \end{equation} where for the example of Markov jump processes the `energy density' $\upphi$ is given by \begin{equation} \label{def:phi-log-intro} \upphi(s) := \relax s\log s - s + 1. \end{equation} (In the general development below we consider more general functions $\upphi$, such as those that arise in strongly interacting particle systems; see e.g.~\cite{KipnisOllaVaradhan89,DirrStamatakisZimmer16}). The dissipation potential ${\mathsf R}^*$ is best written in terms of an alternative potential $\mathscr R^*$, \[ {\mathsf R}^*(\rho,\upzeta) := \mathscr R^*(\rho,\dnabla \upzeta). \] Here the `graph gradient' $\dnabla:\mathrm{B}_{\mathrm b}(V) \to \mathrm{B}_{\mathrm b}(E)$ and its negative dual, the `graph divergence operator' $ \odiv:{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E)\to{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(V)$, \color{black} are defined as follows: \begin{subequations} \label{eq:def:ona-div} \begin{align} \label{eq:def:ona-grad} (\dnabla \varphi)(x,y) &:= \varphi(y)-\varphi(x) &&\text{for any }\varphi\in \mathrm{B}_{\mathrm b}(V),\\ \normalcolor (\odiv {\boldsymbol j} )(\mathrm{d} x) &:= \int_{y\in V} \bigl[{\boldsymbol j} (\mathrm{d} x,\mathrm{d} y)-{\boldsymbol j} (\mathrm{d} y,\mathrm{d} x)\bigr]\normalcolor &&\text{for any }{\boldsymbol j} \in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E), \color{black} \label{eq:def:div} \end{align} \end{subequations} \normalcolor and are linked by \begin{equation} \label{eq:nabladiv} \iint_E \dnabla\varphi(x,y)\,{\boldsymbol j} (\mathrm{d} x,\mathrm{d} y)= -\int_V \varphi(x) \,\odiv {\boldsymbol j} (\mathrm{d} x)\quad \text{for every }\varphi\in \mathrm{B}_{\mathrm b}(V). \end{equation} \normalcolor The dissipation functional $\mathscr R^*$ is defined for $\xi\in \mathrm{B}_{\mathrm b}(E)$ by \begin{align} \label{eq:def:R*-intro} &\mathscr R^*(\rho,\xi) := \frac 12 \int_{E} \Psi^*(\xi(x,y)) \, \boldsymbol\upnu_\rho(\mathrm{d} x \,\mathrm{d} y), \end{align} where the function $\Psi^*$ and the `edge' measure $\boldsymbol\upnu_\rho$ will be fixed in \eqref{eq:def:alpha} below. 
With these definitions, the gradient-flow equation~\eqref{eq:GGF-intro-intro} can be written alternatively as \begin{equation} \label{eq:GF-intro} \partial_t \rho_t = - \odiv \Bigl[ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\xi\mathscr R^*\Bigl(\rho_t,-\relax \dnabla\upphi'\Bigl(\frac{\mathrm{d} \rho_t}{\mathrm{d}\pi}\Bigr)\Bigr)\Bigr], \end{equation} which can be recognized by observing that \[ \bigl\langle {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upzeta{\mathsf R}^*(\rho,\upzeta),\tilde \upzeta\bigr \rangle = \frac{\mathrm{d} }{\mathrm{d} h} \mathscr R^*(\rho,\dnabla \upzeta+h\dnabla \tilde \upzeta)\Big|_{h=0} =\bigl\langle {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\xi\mathscr R^*(\rho,\dnabla \upzeta),\dnabla \tilde \upzeta\bigr \rangle =\bigl\langle -\odiv {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\xi\mathscr R^*(\rho,\dnabla \upzeta), \tilde \upzeta\bigr \rangle, \] and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F} \mathscr E(\rho) = \relax \upphi'(u)$ (which corresponds to $\relax \log u$ for the logarithmic entropy \eqref{def:phi-log-intro}). This $(\odiv,\dnabla)$-duality structure is a common feature in both physical and probabilistic models, and has its origin in the distinction between `states' and `processes'; see~\cite[Sec.~3.3]{PeletierVarMod14TR} and~\cite{Ottinger19} for discussions. For this example of Markov jump processes we consider a class of generalized gradient structures of the type above, given by $\mathscr E$ and $\mathscr R^*$ (or equivalently by the densities $\upphi$, $\Psi^*$, and the measure $\boldsymbol\upnu_\rho$), with the property that equations~\eqref{eq:GGF-intro-intro} and~\eqref{eq:GF-intro} coincide with~\eqref{eq:fokker-planck}. Even for fixed $\mathscr E$ there exists a range of choices for $\Psi^*$ and $\boldsymbol\upnu_\rho$ that achieve this (see also the discussion in~\cite{GlitzkyMielke13,MielkePeletierRenger14}). A simple calculation (see the discussion at the end of Section \ref{ss:assumptions}) shows that, if one chooses for the measure $\boldsymbol\upnu_\rho$ the form \begin{equation} \label{eq:def:alpha} \boldsymbol\upnu_\rho(\mathrm{d} x\,\mathrm{d} y) = \upalpha(u(x),u(y))\, \pi(\mathrm{d} x)\kappa(x,\mathrm{d} y), \end{equation} \color{ddcyan} for a suitable fuction $\upalpha:[0,\infty)\times [0,\infty)\to [0,\infty)$, \color{black} and one introduces the map $\rmF:(0,\infty)\times(0,\infty)\to\mathbb{R}$ \begin{equation} \label{eq:184} \rmF(u,v):= (\Psi^*)'\big[\upphi'(v)-\upphi'(u)\big]\upalpha(u,v)\quad u,v>0, \end{equation} then \eqref{eq:GF-intro} takes the form of the integro-differential equation \begin{equation} \partial_t u_t(x) = \int_{y\in V} \mathrm F\bigl(u_t(x),u_t(y)\bigr)\, \kappa(x,\mathrm{d} y),\label{eq:180} \end{equation} in terms of the density $u_t$ of $\rho_t$ with respect to $\pi$. Therefore, \normalcolor a pair $(\Psi^*,\boldsymbol\upnu_\rho)$ leads to equation~\eqref{eq:fokker-planck} whenever \normalcolor $(\Psi^*,\upphi,\upalpha)$ satisfy the \emph{compatibility property} \begin{equation} \label{cond:heat-eq-2} \rmF(u,v)=v-u \quad \text{for every }u,v>0. \end{equation} The classical quadratic-energy, quadratic-dissipation choice \begin{equation} \label{eq:68} \Psi^*(\xi)=\tfrac 12\xi^2,\quad \upphi(s)=\tfrac 12s^2,\quad \upalpha(u,v)=1 \end{equation} corresponds to the Dirichlet-form approach to \eqref{eq:fokker-planck} in $L^2(V,\pi)$. 
Here $\mathscr R^*(\rho,{\boldsymbol j})=\mathscr R^*({\boldsymbol j})$ is in fact independent of $\rho$: if one introduces the symmetric bilinear form
\begin{equation} \label{eq:185} \llbracket u,v\rrbracket:=\frac 12\iint_E \dnabla u(x,y)\,\dnabla v(x,y)\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y),\quad \llbracket u,u\rrbracket=\frac 12\iint_E \Psi(\dnabla u)\,\mathrm{d} \boldsymbol \teta, \end{equation}
with $\boldsymbol \teta (\mathrm{d} x, \mathrm{d} y) = \pi(\mathrm{d} x ) \kappa(x, \mathrm{d} y)$ (cf.\ \eqref{nu-pi} ahead), \color{black} then \eqref{eq:180} can also be formulated as
\begin{equation} \label{eq:186} (\dot u_t, v)_{L^2(V,\pi)}+ \llbracket u_t,v\rrbracket=0\quad \text{for every }v\in L^2(V,\pi). \end{equation}
\normalcolor Two other choices have received attention in the recent literature. Both of these are based not on the quadratic energy $\upphi(s)=\tfrac 12s^2$, but on the Boltzmann entropy functional $\upphi(s) = s\log s - s + 1$:
\begin{subequations} \label{choices} \begin{enumerate}
\item The large-deviation characterization~\cite{MielkePeletierRenger14} leads to the choice
\begin{equation} \label{choice:cosh} \Psi^*(\xi) := 4\bigl(\cosh (\xi/2) - 1\bigr) \quad \text{and}\quad \upalpha(u,v) := \sqrt{uv}. \end{equation}
The corresponding primal dissipation potential $\Psi := (\Psi^*)^*$ is given by
\[ \Psi(s) := 2s\log \left(\frac{s+\sqrt{s^2+4}}2 \right) - 2\sqrt{s^2 + 4} + 4. \]
\item The `quadratic-dissipation' choice introduced independently by Maas~\cite{Maas11}, Mielke \cite{Mielke13CALCVAR}, and Chow, Huang, and Zhou~\cite{ChowHuangLiZhou12} for Markov processes on \emph{finite} graphs,
\begin{equation} \label{choice:quadratic} \Psi^*(\xi) := \tfrac12 \xi^2, \quad \Psi(s) = \tfrac12 s^2 , \quad \text{and}\quad \upalpha(u,v) := \frac{ u-v }{ \log(u) - \log(v) }. \end{equation}
\end{enumerate} \end{subequations}
Other examples are discussed in \S \ref{subsec:examples-intro}. With \color{black} the quadratic choice~\eqref{choice:quadratic}, the gradient system fits into the metric-space structure (see e.g.~\cite{AmbrosioGigliSavare08}) and this feature has been used extensively to investigate the properties of general Markov jump processes~\cite{Maas11,Mielke13CALCVAR,ErbarMaas12,Erbar14,Erbar16TR,ErbarFathiLaschosSchlichting16TR}. In this paper, however, we focus on functions $\Psi^*$ that are not homogeneous, as in~\eqref{choice:cosh}, and such that the corresponding structure is not covered by the usual metric framework. On the other hand, there are various arguments why this structure nonetheless has a certain `naturalness' (see Section~\ref{ss:comments}), and these motivate our aim to develop a functional framework based on this structure. \subsection{Challenges} Constructing a `functional framework' for the gradient-flow equation~\eqref{eq:GF-intro} with the choices~\eqref{def:phi-log-intro} and~\eqref{choice:cosh} presents a number of independent challenges.
\subsubsection{Definition of a solution} \label{ss:def-sol-intro} As it stands, the formulation of equation~\eqref{eq:GF-intro} and of the functional $\mathcal R^*$ of \eqref{eq:def:R*-intro} presents many difficulties: the definition of $\mathcal R^*$ and the measure $\boldsymbol\upnu_\rho$ when $\rho$ is not absolutely continuous with respect to~$\pi$, the concept of time differentiability for the curve of measures $\rho_t$, \color{black} whether $\rho_t$ is necessarily absolutely continuous with respect to $\pi$ along an evolution, what happens if $\mathrm{d} \rho_t /\mathrm{d} \pi$ vanishes and $\upphi$ is not differentiable at $0$ as in the case of the logarithmic entropy, etcetera. As a result of these difficulties, it is not clear what constitutes a solution of equation~\eqref{eq:GF-intro}, let alone whether such solutions exist. In addition, a good solution concept should be robust under taking limits, and the formulation~\eqref{eq:GF-intro} does not seem to satisfy this requirement either. For quadratic and rate-independent systems, successful functional frameworks have been constructed on the basis of the Energy-Dissipation balance~\cite{Sandier-Serfaty04,Serfaty11,MRS2013,LieroMielkePeletierRenger17,MielkePattersonPeletierRenger17}, and we follow that example here. In fact, the same large-deviation principle that gives rise to the `cosh' structure above formally yields the `EDP' functional \begin{equation} \label{eq:def:mathscr-L} \mathscr L(\rho,{\boldsymbol j} ) := \begin{cases} \displaystyle \int_0^T \Bigl[ \mathscr R(\rho_t, {\boldsymbol j} _t) + \mathscr R^*\Bigl(\rho_t, -\relax\dnabla \upphi'\Bigl(\frac{\mathrm{d} \rho_t}{\mathrm{d}\pi}\Bigr) \Bigr) \Bigr]\mathrm{d} t + \mathscr E(\rho_T) - \mathscr E(\rho_0)\hskip-8cm&\\ &\text{if }\partial_t \rho_t + \odiv {\boldsymbol j} _t = 0 \text{ and } \rho_t \ll \pi \text{ for all $t\in [0,T],$ \normalcolor}\\ {+\infty} &\text{otherwise.} \end{cases} \end{equation} In this formulation, $\mathscr R$ is the Legendre dual of $\mathscr R^*$ with respect to the $\xi$ variable, \normalcolor which can be written in terms of the Legendre dual $\Psi:=\Psi^{**}$ of $\Psi^*$ as \begin{equation} \label{eq:def:R-intro} \mathscr R(\rho,{\boldsymbol j} ) := \frac 12\normalcolor\int_{E} \Psi\left( 2\frac{\mathrm{d} {\boldsymbol j}}{\mathrm{d} \boldsymbol\upnu_\rho}\right)\mathrm{d}\boldsymbol\upnu_\rho. \qquad \end{equation} Along smooth curves $\rho_t=u_t\pi$ with strictly positive densities, the functional $\mathscr L$ is nonnegative, since \normalcolor \begin{align} \notag \normalcolor\frac \mathrm{d}{\mathrm{d} t} \mathscr E(\rho_t) &= \int_V \upphi'(u_t)\partial_t u_t\, \mathrm{d}\pi \normalcolor =\int_V \upphi'(u_t(x)) \partial_t\rho_t(\mathrm{d} x) = - \relax \int_V \upphi'(u_t(x)) (\odiv {\boldsymbol j}_t)(\mathrm{d} x)\\ & = \relax \iint_E \dnabla \upphi'(u_t) (x,y) \,{\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y) = \relax \iint_E \dnabla \upphi'(u_t) (x,y) \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\;\boldsymbol\upnu_{\rho_t} (\mathrm{d} x\,\mathrm{d} y) \label{eq:174} \\ &\geq - \frac 12\normalcolor \iint_E \left[ \Psi\left( 2\, \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right) + \Psi^*\left(-\relax \dnabla \upphi'(u_t) (x,y) \right) \right] \boldsymbol\upnu_{\rho_t}(\mathrm{d} x\,\mathrm{d} y). \label{ineq:deriv-GF} \end{align} After time integration we find that $\mathscr L(\rho,{\boldsymbol j} )$ is nonnegative for any pair $(\rho,{\boldsymbol j} )$. 
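(The elementary inequality behind \eqref{ineq:deriv-GF} is the Fenchel--Young inequality for the Legendre pair $(\Psi,\Psi^*)$, recalled here for the reader's convenience:
\begin{equation*}
 s\,\xi\le \Psi(s)+\Psi^*(\xi)\quad\text{for all }s,\xi\in\mathbb{R},\qquad \text{with equality if and only if } \xi\in\partial\Psi(s);
\end{equation*}
it is applied with $s=2\,\frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)$ and $\xi=-\dnabla \upphi'(u_t)(x,y)$.)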
The minimum of $\mathscr L$ is formally achieved at value zero, at pairs $(\rho,{\boldsymbol j} )$ satisfying \begin{align}\label{eq:flux-identity} 2{\boldsymbol j}_t = (\Psi^*)'\left(- \relax \dnabla\upphi'\Bigl(\frac{\mathrm{d} \rho_t}{\mathrm{d}\pi}\Bigr)\right)\boldsymbol\upnu_{\rho_t} \qquad \text{and} \qquad \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \end{align} which is an equivalent way of writing the gradient-flow equation~\eqref{eq:GF-intro}. This can be recognized, as usual for gradient systems, by observing that achieving equality in the inequality~\eqref{ineq:deriv-GF} requires equality in the Legendre duality of $\Psi$ and $\Psi^*$, which reduces to the equations above. \color{ddcyan} \begin{remark} \label{rem:alpha-concave} It is worth noticing that the joint convexity of the functional $\mathscr R$ of \eqref{eq:def:R-intro} (a crucial property for the development of our analysis) is equivalent to the \emph{convexity} of $\Psi$ and \emph{concavity} of the function $\upalpha$. \end{remark} \color{black} \begin{remark} \label{rem:choice-of-2} Let us add a comment concerning the choice of the factor $1/2$ in front of $\Psi^*$ in \eqref{eq:def:R*-intro}, and the corresponding factors $1/2$ and $2$ in \eqref{eq:def:R-intro}. The cosh-entropy combination~\eqref{choice:cosh} satisfies the linear-equation condition $\rmF(u,v) = v-u$ (equation~\eqref{cond:heat-eq-2}) because of the elementary identity \[ 2\,\sqrt{uv} \,\sinh \Bigl(\frac12 \log \frac vu\Bigr) = v-u. \] The factor $1/2$ inside the $\sinh$ can be included in different ways. In~\cite{MielkePeletierRenger14} it was included explicitly, by writing expressions of the form ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F} {\mathsf R}^*(\rho,-\tfrac12 {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}{\mathsf E}(\rho))$; in this paper we follow~\cite{LieroMielkePeletierRenger17} and include this factor in the definition of $\mathscr R^*$. \end{remark} \begin{remark} The continuity equation $\partial_t \rho_t + \odiv {\boldsymbol j} _t = 0 $ is invariant with respect to skew-symmetrization of ${\boldsymbol j}$, i.e.\ with respect to the transformation ${\boldsymbol j}\mapsto {\boldsymbol j}^\flat$ with ${\boldsymbol j}^\flat(\mathrm{d} x,\mathrm{d} y):= \frac12 \bigl({\boldsymbol j}(\mathrm{d} x,\mathrm{d} y)-{\boldsymbol j}(\mathrm{d} y,\mathrm{d} x)\bigr)$. \color{black} Therefore we could also write the second integral in \eqref{eq:174} as \begin{align*} & \iint_E \dnabla \upphi'(u_t) (x,y) \frac{\mathrm{d} {\boldsymbol j}^\flat_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\;\boldsymbol\upnu_{\rho_t} (\mathrm{d} x\,\mathrm{d} y) \\ &\qquad\geq - \frac 12 \iint_E \left[ \Psi\left( \frac{\mathrm{d} (2 {\boldsymbol j}^\flat_t)\color{black}}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right) + \Psi^*\left(-\relax \dnabla \upphi'(u_t) (x,y) \right) \right] \boldsymbol\upnu_{\rho_t}(\mathrm{d} x\,\mathrm{d} y). \end{align*} thus replacing $\Psi\left( 2\normalcolor \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right)$ with the lower term $\Psi\left( \frac{\mathrm{d} (2 {\boldsymbol j}^\flat_t)\color{black}}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right)$, cf.\ Remark \ref{rem:skew-symmetric}, \color{black} and obtaining a corresponding equation as \eqref{eq:flux-identity} for $(2{\boldsymbol j}_t^\flat)$ instead of $2{\boldsymbol j}_t$. 
\color{black} This would lead to a weaker gradient system, since the choice \eqref{eq:def:R-intro} forces ${\boldsymbol j}_t$ to be skew-symmetric, whereas the choice of a dissipation involving only ${\boldsymbol j}^\flat$ would not control the symmetric part of ${\boldsymbol j}$. On the other hand, the evolution equation generated by the gradient system would remain the same. \end{remark} Since at least formally equation~\eqref{eq:GF-intro} is equivalent to the requirement $\mathscr L(\rho,{\boldsymbol j} )\leq0$, we adopt this variational point of view to define solutions to the generalized gradient system $(\mathscr E,\mathscr R,\mathscr R^*)$. This inequality is in fact the basis for the variational Definition~\ref{def:R-Rstar-balance} below. In order to do this in a rigorous manner, however, we will need \begin{enumerate} \item A study of the continuity equation \begin{equation} \label{eq:ct-eq-intro} \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \end{equation} that appears in the definition of the functional $\mathscr L$ (Section~\ref{sec:ct-eq}). \item A rigorous definition of the measure $\boldsymbol\upnu_{\rho_t}$ and of the \normalcolor functional $\mathscr R$ (Definition~\ref{def:R-rigorous}); \item A class $\CER 0T$ of curves ``of finite action'' in ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ along which the functional $\mathscr R$ has finite integral \normalcolor (equation~\eqref{adm-curves}); \item An appropriate \normalcolor definition of the \emph{Fisher-information} functional (see Definition~\ref{def:Fisher-information}) \begin{equation}\label{eq:formal-Fisher-information} \rho \mapsto \mathscr{D}(\rho) := \mathscr R^*\bigl(\rho,-\relax \dnabla \upphi'(\mathrm{d} \rho/\mathrm{d} \pi)\bigr); \end{equation} \item A proof of the lower bound $\mathscr L\geq 0$ (Theorem~\ref{th:chain-rule-bound}) via a suitable chain-rule inequality. \end{enumerate} \subsubsection{Existence of solutions} The first test of a new solution concept is whether solutions exist under reasonable conditions. In this paper we provide two existence proofs that complement each other. The first existence proof is based on a reformulation of the equation~\eqref{eq:fokker-planck} as a differential equation in the Banach space $L^1(V,\pi)$, driven by a continuous dissipative operator. \normalcolor Under general compatibility conditions on $\upphi$, $\Psi$, and $\upalpha$, we show that the solution provided by this abstract approach is also \color{black} a solution in the variational sense that we discussed above. The proof is presented in Section~\ref{s:ex-sg} and is quite robust for initial data whose density takes value in a compact interval $[a,b]\subset (0,\infty)$. In order to deal with a more general class of data, we will adopt two different viewpoints. A first possibility is to take advantage of the robust stability properties of the $(\mathscr E,\mathscr R, \mathscr R^*)$ Energy-Dissipation balance \color{black} when the Fisher information $\mathscr{D}$ is lower semicontinuous. A second possibility is to exploit the monotonicity properties of \eqref{eq:180} when the map $\rmF$ in~\eqref{eq:184} exhibits good behaviour at the boundary of $\mathbb{R}_+^2$ and at infinity. 
Since we believe \color{black} that the variational formulation reveals a relevant structure of such systems and we expect that it may also be useful in dealing with more singular cases and their stability issues, \normalcolor we also present a more intrinsic approach by adapting the well-established `JKO-Min\-i\-miz\-ing-Movement' method to the structure of this equation. This method has been used, e.g., for metric-space gradient flows~\cite{JordanKinderlehrerOtto98,AmbrosioGigliSavare08}, for rate-independent systems~\cite{Mielke05a}, for some non-metric systems with formal metric structure~\cite{AlmgrenTaylorWang93,LuckhausSturzenhecker95}, and also for Lagrangian systems with local transport~\cite{FigalliGangboYolcu11}. This approach relies on the \normalcolor {\em Dynamical-Variational Transport cost} (DVT) $\DVT \tau{\mu}{\nu}$, which is the $\tau$-dependent transport cost between two measures $\mu,\nu\in{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ induced by the dissipation potential $\mathscr R$ via \begin{equation} \label{def:W-intro} \DVT\tau{\mu}{\nu} := \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t \, : \, \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \ \rho_0 = \mu, \text{ and }\rho_ \tau = \nu\right\}. \end{equation} In the Minimizing-Movement scheme a single increment with time step $\tau>0$ is defined by the minimization problem \begin{equation} \label{MM-intro} \rho^n \in \mathop{\rm argmin}_\rho \, \left( \DVT\tau{\rho^{n-1}}\rho + \mathscr E(\rho)\right) . \end{equation} By concatenating such solutions, constructing appropriate interpolations, and proving a compactness result---all steps \color{black} similar to the procedure in~\cite[Part~I]{AmbrosioGigliSavare08}---we find a curve $(\rho_t,{\boldsymbol j}_t)_{t\in [0,T]}$ satisfying the continuity equation \eqref{eq:ct-eq-intro} such that \begin{equation} \label{ineq:soln-rel-gen-slope-intro} \int_0^t \bigl[\mathscr R(\rho_r,{\boldsymbol j}_r) + \mathscr{S}^-(\rho_r)\bigr]\, \mathrm{d} r + \mathscr E(\rho_t) \le \mathscr E(\rho_0)\qquad\text{for all $t\in[0,T]$}, \end{equation} where $\mathscr{S}^-:{\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}(\mathscr E)\to[0,{+\infty})$ is a suitable {\em relaxed slope} of the energy functional $\mathscr E$ with respect to the cost $\mathscr{W}$ (see~\eqref{relaxed-nuovo}). Under a lower-semicontinuity \color{black} \normalcolor condition on $\mathscr{D}$ we show that $\mathscr{S}^-\ge \mathscr{D} $. It then follows that $\rho$ is a solution as defined above (see Definition~\ref{def:R-Rstar-balance}). Section~\ref{s:MM} is devoted to developing the `Minimizing-Movement' approach for general DVTs. This requires establishing \begin{enumerate}[resume] \item Properties of $\mathscr{W}$ that generalize those of the `metric version' $\DVT\tau\mu\nu = \frac1{2\tau}d(\mu,\nu)^2$ (Section~\ref{ss:aprio}); \item A generalization of the `Moreau-Yosida approximation' and of the `De Giorgi variational interpolant' to the non-metric case, and a generalization of their properties (Sections~\ref{ss:MM} and~\ref{ss:aprio}); \item A compactness result as $\tau\to0$, based on the properties of $\mathscr{W}$ (Section~\ref{ss:compactness}); \item A proof of $\mathscr{S}^-\ge \mathscr{D} $ (Corollary~\ref{cor:cor-crucial}). \end{enumerate} This procedure leads to our existence result, Theorem \ref{thm:construction-MM}, of solutions in the sense of Definition \ref{def:R-Rstar-balance}. 
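To illustrate the Energy-Dissipation balance numerically (the following snippet is a purely illustrative sanity check, written for an arbitrary three-state graph with the `cosh' structure \eqref{choice:cosh}; it plays no role in the analysis), one can integrate \eqref{eq:180} with $\rmF(u,v)=v-u$, evaluate $\mathscr R$ along the flow through the flux identity \eqref{eq:flux-identity} and $\mathscr R^*$ at $-\dnabla\upphi'(u_t)$, and compare $\int_0^T\bigl[\mathscr R+\mathscr R^*\bigr]\,\mathrm{d} t$ with the decrease of $\mathscr E$:
\begin{verbatim}
import numpy as np

# Illustrative sanity check on a three-state graph (all numerical values are
# arbitrary choices for this example): integrate du/dt(x) = sum_y (u(y)-u(x)) kappa(x,y)
# (the linear case F(u,v) = v - u) with the 'cosh' structure, and compare the
# dissipated amount int_0^T (R + R*) dt with the decrease of the energy E.

pi    = np.array([0.2, 0.3, 0.5])                    # invariant measure
theta = np.array([[0.0, 0.1, 0.2],
                  [0.1, 0.0, 0.3],
                  [0.2, 0.3, 0.0]])                  # theta(x,y) = pi(x) kappa(x,y), symmetric
kappa = theta / pi[:, None]

phi      = lambda s: s*np.log(s) - s + 1.0           # Boltzmann entropy density
dphi     = lambda s: np.log(s)
Psi_star = lambda x: 4.0*(np.cosh(x/2.0) - 1.0)
Psi      = lambda s: 2.0*s*np.arcsinh(s/2.0) - 2.0*np.sqrt(s*s + 4.0) + 4.0
alpha    = lambda u, v: np.sqrt(u*v)

energy = lambda u: float(np.sum(phi(u)*pi))

def rates(u):
    nu = alpha(u[:, None], u[None, :])*theta         # edge measure nu_rho
    xi = dphi(u)[:, None] - dphi(u)[None, :]         # xi = -(nabla phi'(u))(x,y)
    s  = 2.0*np.sinh(xi/2.0)                         # 2 dj/dnu from the flux identity
    return 0.5*np.sum(Psi(s)*nu), 0.5*np.sum(Psi_star(xi)*nu)   # R, R*

rhs = lambda u: np.sum((u[None, :] - u[:, None])*kappa, axis=1)

u, dt, T = np.array([3.0, 0.2, 1.0]), 1e-4, 2.0      # density u = d rho / d pi at t = 0
E0, dissipated = energy(u), 0.0
for _ in range(int(T/dt)):                           # crude explicit Euler time stepping
    R, Rstar = rates(u)
    dissipated += dt*(R + Rstar)
    u = u + dt*rhs(u)

print("E(rho_0) - E(rho_T) :", E0 - energy(u))
print("int_0^T (R + R*) dt :", dissipated)           # the two numbers should almost agree
\end{verbatim}
Up to the time-discretization error the two printed quantities coincide, reflecting the fact that $\mathscr L(\rho,{\boldsymbol j})=0$ along solutions.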
\subsubsection{Uniqueness of solutions} We prove uniqueness of variational solutions under suitable convexity conditions of $\mathscr{D} $ and $\mathscr E$ (Theorem~\ref{thm:uniqueness}), following an idea by Gigli~\cite{Gigli10}. \normalcolor \subsection{Examples} \label{subsec:examples-intro} We will use the following two guiding examples to illustrate the results of this paper. Precise assumptions are given in Section~\ref{ss:assumptions}. In both examples the state space consists of measures $\rho$ on a standard Borel space $(V,\mathfrak B)$ endowed with a reference Borel measure $\pi$. The kernel $x\mapsto \kappa(x,\cdot)$ is a measurable family of nonnegative measures with uniformly bounded mass, such that the pair $(\pi,\kappa)$ satisfies detailed balance (see Section~\ref{ss:assumptions}). \normalcolor \emph{Example 1: Linear equations \color{ddcyan} driven by the Boltmzann entropy. \color{black}} This is the example that we have been using in this introduction. The equation is the linear equation~\eqref{eq:fokker-planck}, \[ \partial_t\rho_t(\mathrm{d} x) = \int_{y\in V} \rho(\mathrm{d} y) \kappa(y,\mathrm{d} x) - \rho(\mathrm{d} x)\int_{y\in V} \kappa(x,\mathrm{d} y), \] which can also be written in terms of the density $u =\mathrm{d}\rho/\mathrm{d} \pi$ as \[ \partial_t u_t(x) = \int_{y\in V} \bigl[u_t(y)-u_t(x)\bigr] \, \kappa(x,\mathrm{d} y), \] \color{ddcyan} and corresponds to the linear field $\rmF$ of \eqref{cond:heat-eq-2}. Apart from the classical quadratic setting of \eqref{eq:68}, \color{black} two gradient structures for this equation have recently received attention in the literature, both driven by the Boltzmann entropy \eqref{def:phi-log-intro} $\upphi(s) = s\log s - s + 1$ as described in~\eqref{choices}: \begin{enumerate}[label=\textit{(\arabic*)}] \item The `cosh' structure: $\Psi^*(\xi) = 4\bigl(\cosh(\xi/2) \normalcolor -1\bigr)$ and $\upalpha(u,v) = \sqrt{uv}$; \item The `quadratic' structure: $\Psi^*(\xi) = \tfrac12 \xi^2$ and $\upalpha (u,v) = (u-v)/\log(u/v)$. \end{enumerate} However, the approach of this paper applies to more general combinations $(\upphi,\Psi^*,\upalpha)$ that lead to the same equation. \color{ddcyan} Due to the particular structure of \eqref{eq:184}, it is clear that the $1$-homogeneity of the linear map $\rmF$ \eqref{cond:heat-eq-2} and the $0$-homogeneity of the term $\upphi'(v)-\upphi'(u)$ associated with \color{black} the Boltzmann entropy \eqref{def:phi-log-intro} restrict the range of possible $\upalpha$ to \emph{$1$-homogenous functions} like the `mean functions' $\upalpha(u,v) = \sqrt{uv}$ (geometric) and $\upalpha (u,v) = (u-v)/\log(u/v)$ (logarithmic). Confining \color{black} the analysis to concave functions (according to Remark \ref{rem:alpha-concave}), \color{black} we observe that every concave and $1$-homogeneous function $\upalpha$ can be obtained by the concave generating function $\mathfrak f:(0,{+\infty})\to (0,{+\infty})$ \begin{equation} \label{eq:150} \upalpha(u,v)=u\mathfrak f(v/u)=v\mathfrak f(u/v),\quad \mathfrak f(r):=\alpha(r,1),\quad u,v,r>0. \end{equation} The symmetry of $\upalpha $ corresponds to the property \begin{equation} \label{eq:151} r\mathfrak f(1/r)=\mathfrak f(r)\quad\text{for every }r>0, \end{equation} and shows that the function \begin{equation} \label{eq:152} \mathfrak g(s):=\frac{\exp(s)-1}{\mathfrak f(\exp(s))}\quad s\in \mathbb{R}, \text{ is odd}. 
\end{equation} The concaveness of $\mathfrak f$ also shows that $\mathfrak g$ is increasing, so that we can define \begin{equation} \label{eq:149} \Psi^*(\xi):=\int_0^{\xi} \mathfrak g(s)\,\mathrm{d} s =\int_1^{\exp(\xi)}\frac{r-1}{\mathfrak f (r)}\frac{\mathrm{d} r}r,\quad \xi\in \mathbb{R}, \end{equation} which is convex, even, and superlinear if \begin{equation} \label{eq:153} \upalpha(0,1)=\mathfrak f(0)= \lim_{r\to0}r\mathfrak f\Bigl(\frac1r\Bigr)=0. \end{equation} A natural class of concave and $1$-homogeneous weight functions is provided by the \color{red} {\em Stolarsky means} $\mathfrak c_{p,q}(u,v)$ with appropriate $p,q\in\mathbb{R}$, and any $u,v>0$ \cite[Chapter VI]{Bullen2003handbook}: \[ \upalpha(u,v) = \mathfrak c_{p,q}(u,v) := \begin{cases} \Bigl(\frac pq\frac{v^q-u^q}{v^p-u^p}\Bigr)^{1/(q-p)} &\text{if $p\ne q$, $q\ne 0$},\\ \Bigl( \frac{1}{p}\frac{v^p-u^p}{\log(v) - \log(u)}\Bigr)^{1/p} &\text{if $p\ne 0$, $q= 0$}, \\ e^{-1/p}\Bigl(\frac{v^{v^p}}{u^{v^p}}\Bigr)^{1/(v^p-u^p)} &\text{if $p= q\ne 0$}, \\ \sqrt{uv} &\text{if $p= q= 0$}, \end{cases} \] from which we identify other simpler means, such as the {\em power means} $\mathfrak m_p(u,v) = \mathfrak c_{p,2p}(u,v)$ with $p\in [-\infty, 1]$: \begin{equation} \label{eq:147} \mathfrak m_p(u,v) = \begin{cases} \Big(\frac 12\big(u^p+v^p\big)\Big)^{1/p}&\text{if $0<p\le 1$ or $-\infty<p<0$ and $u,v\neq0$},\\ \sqrt{uv}&\text{if }p=0,\\ \min(u,v)&\text{if }p=-\infty,\\ 0&\text{if }p<0\text{ and }uv=0, \end{cases} \end{equation} and the generalized logarithmic mean $\mathfrak l_p(u,v)=\mathfrak c_{1,p+1}(u,v)$, $p\in[-\infty,-1]$. \color{black} The power means are obtained from the concave generating functions \begin{equation} \label{eq:148} \mathfrak f_p(r):=2^{-1/p}(r^p+1)^{1/p} \quad \text{if }p\neq 0,\quad \mathfrak f_0(r)=\sqrt r,\quad \mathfrak f_{-\infty}(r)=\min(r,1),\quad r>0. \end{equation} We can thus define \begin{equation} \label{eq:149p} \Psi_p^*(\xi):=2^{1/p}\int_1^{\exp \xi} \frac{r-1}{(r^p+1)^{1/p}}\,\frac{\mathrm{d} r}r,\quad \xi\in \mathbb{R}, \quad p\in (-\infty,1]\setminus 0, \end{equation} with the obvious changes when $p=0$ (the case $\Psi_0^*(\xi)=4(\cosh(\xi/2)-1$)) or $p=-\infty$ (the case $\Psi_{-\infty}^*(\xi)= \exp(|\xi|)-|\xi|$). It is interesting to note that the case $p=-1$ (harmonic mean) corresponds to \begin{equation} \label{eq:154} \Psi_{-1}^*(\xi)=\cosh(\xi)-1. \end{equation} We finally note that the arithmetic mean $\upalpha(u,v)=\mathfrak m_1(u,v)=(u+v)/2$ would yield $\Psi_1^*(\xi)=4\log(1/2(1+\rme^\xi))-2\xi$, which is not superlinear. \emph{Example 2: Nonlinear equations.} We consider a combination of \color{black} $\upphi$, $\Psi^*$, and $\upalpha$ such that the function $\rmF$ introduced in \eqref{eq:184} has a continuous extension up to the boundary of $[0,{+\infty})^2$ and satisfies a suitable growth and monotonicity condition (see Section~\ref{s:ex-sg}). The resulting integro-differential equation is given by \eqref{eq:180}. Here is a list of some interesting cases (we will neglect all the issues \color{black} concerning growth and regularity). \begin{enumerate} \item A field of the form $\rmF(u,v)=f(v)-f(u)$ with $f:\mathbb{R}_+\to \mathbb{R}$ monotone corresponds to the equation \[ \partial_t u_t(x) = \int_{y\in V} \bigl(f(u_t(y))-f(u_t(x))\bigr)\, \kappa(x,\mathrm{d} y), \] and can be classically considered in the framework of the Dirichlet forms, i.e.~$\upalpha \equiv \color{black} 1$, $\Psi^*(r)= r^2/2$, with energy $\upphi$ satisfying $\upphi' = f$. 
\item The case $\rmF(u,v)=g(v-u)$, with $g:\mathbb{R}\to \mathbb{R}$ monotone and odd, yields the equation \[ \partial_t u_t(x) = \int_{y\in V} g\bigl(u_t(y)-u_t(x)\bigr)\, \kappa(x,\mathrm{d} y), \] and can be obtained with the choices $\upalpha \equiv \color{black} 1$, $\upphi(s):=s^2/2$ and $\Psi^*(r):=\int_0^r g(s)\,\mathrm{d} s$. \item Consider now the case when $\rmF$ is positively $q$-homogeneous, with $q\in [0,1]$. It is then natural to consider a $q$-homogeneous $\upalpha$ and the logarithmic entropy $\upphi(r)=r\log r-r+1$. If the function $h:(0,\infty)\to \mathbb{R}$, $h(r):=\rmF(r,1)/\upalpha(r,1)$ is increasing, then setting as in \eqref{eq:149p} \begin{displaymath} \Psi^*(\xi):=\int_1^{\exp (\xi)}h(r)\,\mathrm{d} r \end{displaymath} equation \eqref{eq:180} provides an example of generalized gradient system $(\mathscr E,\mathscr R,\mathscr R^*)$. Simple examples are $\rmF(u,v)=v^q-u^q$, corresponding to the equation \[ \partial_t u_t(x) = \int_{y\in V} \bigl(u^q_t(y)-u^q_t(x)\bigr)\, \kappa(x,\mathrm{d} y), \] with $\upalpha(u,v):= \mathfrak m_p(u^q,v^q)$ and $\Psi^*(\xi):=\frac 1q\Psi_p^*(q\xi)$, where $\Psi^*_p$ has been defined in~\eqref{eq:149p}. In the case $p=0$ we get $\Psi^*(\xi)=\frac 4q\big(\cosh(q\xi/2)-1\big)$. As a last example, we can consider $\rmF(u,v)=\operatorname{sign} (v-u)|v^m-u^m|^{1/m}$, $m>0$, and $\upalpha(u,v)=\min(u, v)$; in this case, the function $h$ given by \color{black} $h(r)=(r^m-1)^{1/m}$ when $r\ge1$, and $h(r)=-(r^{-m}-1)^{1/m}$ when $r<1$, satisfies the required monotonicity property. \end{enumerate} \subsection{Comments} \label{ss:comments} \emph{Rationale for studying this structure.} \color{ddcyan} We think that the structure of generalized gradient systems $(\calE,\mathscr R,\mathscr R^*)$ is sufficiently rich and interesting to deserve a careful analysis. It provides a genuine extension of the more familiar quadratic gradient-flow structure of Maas, Mielke, and Chow--Huang--Zhou, which better fits into the metric framework of \cite{AmbrosioGigliSavare08}. In Section~\ref{s:ex-sg} we will also show its connection with the theory of dissipative evolution equations. Moreover, \color{black} the specific non-homogeneous structure based on the $\cosh$ function~\eqref{choice:cosh} has a number of arguments in its favor, which can be summarized in the statement that it is `natural' in various different ways: \begin{enumerate} \item It appears in the characterization of large deviations of Markov processes; see Section~\ref{ss:ldp-derivation} or~\cite{MielkePeletierRenger14,BonaschiPeletier16}; \item It arises in evolutionary limits of other gradient structures (including quadratic ones) \cite{ArnrichMielkePeletierSavareVeneroni12,Mielke16,LieroMielkePeletierRenger17,MielkeStephan19TR}; \item It `responds naturally' to external forcing \cite[Prop.~4.1]{MielkeStephan19TR}; \item It can be generalized to nonlinear equations \cite{Grmela84,Grmela10}. \end{enumerate} We will explore these claims in more detail in a forthcoming paper. Last but not least, the very fact that non-quadratic, generalized gradient flows may arise in the limit of gradient flows suggests that, allowing for a broad class of dissipation mechanisms is crucial in order to (1) fully exploit the flexibility of the gradient-structure \color{black} formulation, and (2) explore its robustness with respect to $\Gamma$-converging energies and dissipation potentials. 
\color{black} \emph{Potential for generalization.} In this paper we have chosen to concentrate on the consequences of non-homogeneity of the dissipation potential $\Psi$ for the techniques that are commonly used in gradient-flow theory. Until now, the lack of a sufficiently general rigorous construction of the functional $\mathscr R$ and its minimal integral over curves $\mathscr{W}$ have impeded the use of this variational structure in rigorous proofs, and a main aim of this paper is to provide a way forward by constructing a rigorous framework for these objects, while keeping the setup (in particular, the ambient space $V$) as general as possible. \color{black} In order to restrict the length of this paper, we considered only simple driving functionals $\mathscr E$, which are of the local variety $\mathscr E(\rho) = \relax \int \upphi(\mathrm{d}\rho/\mathrm{d}\pi)\mathrm{d}\pi$. \normalcolor Many gradient systems appearing in the literature are driven by more general functionals, that include interaction and other nonlinearities~\cite{ErbarFathiLaschosSchlichting16TR,ErbarFathiSchlichting19TR,RengerZimmer19TR,HudsonVanMeursPeletier20TR}, and we expect that the techniques of this paper will be of use in the study of such systems. As one specific direction of generalization, we note that the Minimizing-Movement construction on which the proof of Theorem \ref{thm:construction-MM} is based has a scope wider than that of the generalized gradient structure $(\mathscr E, \mathscr R, \mathscr R^*)$ \color{black} under consideration. In fact, as we show in Section~\ref{s:MM}, Theorem~\ref{thm:construction-MM} yields the existence of (suitably formulated) gradient flows in a general \emph{topological space} endowed with a cost fulfilling suitable properties. While we do not develop this discussion in this paper, at places throughout the paper \color{black} we hint at this prospective generalization: the `abstract-level' properties of the DVT cost are addressed in Section~\ref{ss:4.5}, and the whole proof of Theorem \ref{thm:construction-MM} is carried out under more general conditions than those required on the `concrete' system set up in Section \ref{s:assumptions}. \color{black} \emph{Challenges for generalization.} A well-formed functional framework includes a concept of solutions that behaves well under the taking of limits, and the existence proof is the first test of this. Our existence proof highlights a central challenge here, in the appearance of \emph{two} slope functionals $\mathscr{S}^-$ and $\mathscr{D}$ that both represent rigorous versions of the `Fisher information' term $\mathscr R^*\bigl(\rho,-\dnabla\upphi'(\mathrm{d} \rho/\mathrm{d} \pi)\bigr)$. The chain-rule lower-bound inequality holds under general conditions for $\mathscr{D}$ (Theorem~\ref{th:chain-rule-bound}), but the Minimizing-Movement construction leads to the more abstract object $\mathscr{S}^-$. Passing to the limit in the minimizing-movement approach requires connecting the two through the inequality $\mathscr{S}^-\geq \mathscr{D}$. We prove it by first obtaining the inequality $\mathscr{S} \geq \mathscr{D}$, cf.\ Proposition \ref{p:slope-geq-Fish}, under the condition that a solution to the $(\mathscr E, \mathscr R, \mathscr R^*)$ system exists (for instance, by the approach developed in Section \ref{s:ex-sg}). 
We then deduce the inequality $\mathscr{S}^- \geq \mathscr{D}$ under the further condition that $\mathscr{D}$ be lower semicontinuous, which can be in turn proved under a suitable convexity condition (cf.\ Prop.\ \ref{PROP:lsc}). \color{black} We hope that more effective ways of dealing with these issues will be found in the future. \emph{Comparison with the Weighted Energy-Dissipation method.} It would be interesting to develop the analogous variational approach based on studying the \color{black} limit behaviour as $\varepsilon\downarrow0$ of the minimizers $(\rho_t,{\boldsymbol j}_t)_{t\ge0}$ of the Weighted Energy-Dissipation (\textrm{WED}) \color{black} functional \begin{equation} \label{eq:69} \mathscr{W}_\varepsilon(\rho,{\boldsymbol j} ):=\int_0^{+\infty} \mathrm e^{-t/\varepsilon} \Big(\mathscr R(\rho_t,{\boldsymbol j}_t)+\frac1\varepsilon\mathscr E(\rho_t)\Big)\,\mathrm{d} t \end{equation} among the solutions to the continuity equation with initial datum $\rho_0$, see \cite{RSSS19}. Indeed, the \emph{intrinsic character} of the \textrm{WED} functional, which only features the dissipation potential $\mathscr R$, makes it suitable to the present non-metric framework. \color{black} \subsection{Notation} The following table collects the notation used throughout the paper. \begin{center} \newcommand{\specialcell}[2][c]{ \begin{tabular}[#1]{@{}l@{}}#2\end{tabular}} \begin{small} \begin{longtable}{lll} $\dnabla$, $\odiv$ & graph gradient and divergence &\eqref{eq:def:ona-div}\\ $\upalpha(\cdot,\cdot)$ & multiplier in flux rate $\boldsymbol\upnu_\rho$ & Ass.~\ref{ass:Psi}\\ $\upalpha^\infty$, $\upalpha_*$ & recession function, Legendre transform & Section~\ref{subsub:convex-functionals} \\ $\upalpha[\cdot|\cdot]$, $\hat\upalpha$ & measure map, perspective function & Section~\ref{subsub:convex-functionals} \\ $\CER ab$ & set of curves $\rho$ with finite action & \eqref{def:Aab}\\ $ \|\kappa_V\|_\infty$ \color{black} & upper bound on $\kappa$ & Ass.~\ref{ass:V-and-kappa}\\ $ \mathrm{C}_{\mathrm{b}} $ & space of bdd, ct.\ functions with supremum norm\\ $\CE ab$ & set of pairs $(\rho,{\boldsymbol j} )$ satisfying the continuity equation & Def.~\ref{def-CE}\\ ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi(u,v)$, ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^\pm_\upphi(u,v)$ & integrands defining the Fisher information $\mathscr{D}$ & \eqref{subeq:D}\\ $\mathscr{D}$ & Fisher-information functional & Def.~\ref{def:Fisher-information}\\ $E = V\times V$ & space of edges & Ass.~\ref{ass:V-and-kappa}\\ $\mathscr E$, ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F} (\mathscr E)$ & driving entropy \color{black} functional and its domain & \eqref{eq:def:S} \& Ass.~\ref{ass:S}\\ $\rmF$ & vector field & \eqref{eq:184}\\ ${\boldsymbol\vartheta}_\rho^\pm$, & $\rho$-adjusted jump rates & \eqref{def:teta}\\ $\boldsymbol \teta$ & equilibrium jump rate & \eqref{nu-pi}\\ $\kappa$ & jump kernel &\eqref{eq:def:generator} \& Ass.~\ref{ass:V-and-kappa}\\ $\kernel\kappa\gamma$ & $\gamma \otimes \kappa$ & \eqref{eq:84}\\ $\mathscr L$ & Energy-Dissipation balance functional &\eqref{eq:def:mathscr-L}\\ ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(\Omega;\mathbb{R}^m)$, ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(\Omega)$ & vector (positive) measures on $\Omega$ & Sec.~\ref{ss:3.1}\\ $\boldsymbol\upnu_\rho$ & edge measure in definition of $\mathscr R^*$, $\mathscr R$ &\eqref{eq:def:R*-intro}, \eqref{eq:def:R-intro}, \eqref{eq:def:alpha}\\ $Q$, 
$Q^*$ & generator and dual generator & \eqref{eq:fokker-planck}\\ $\mathscr R$, $\mathscr R^*$ & dual pair of dissipation potentials & \eqref{eq:def:R*-intro}, \eqref{eq:def:R-intro}, Def.~\ref{def:R-rigorous}\\ $\mathbb{R}_+ := [0,\infty)$ \\ ${\mathsf s}$ & symmetry map $(x,y) \mapsto (y,x)$ & \eqref{eq:87}\\ $\mathscr{S}^-$ & relaxed slope & \eqref{relaxed-nuovo}\\ $\Upsilon$ & perspective function associated with $\Psi$ and $\upalpha$ & \eqref{Upsilon}\\ $V$ & space of states & Ass.~\ref{ass:V-and-kappa}\\ $\upphi$ & density of $\mathscr E$ & \eqref{eq:def:S} \& Ass.~\ref{ass:S}\\ $\Psi$, $\Psi^*$ & dual pair of dissipation functions & Ass.~\ref{ass:Psi}, Lem.~\ref{l:props:Psi}\\ $\mathscr{W}$ & Dynamic-Variational Transport cost & \eqref{def:W-intro} \& Sec.~\ref{sec:cost}\\ $\mathbb W$ & $\mathscr{W}$-action & \eqref{def-tot-var}\\ ${\mathsf x},{\mathsf y}$ & coordinate maps $(x,y) \mapsto x$ and $(x,y)\mapsto y$ & \eqref{eq:87}\\ \end{longtable} \end{small} \end{center} \subsubsection*{\bf Acknowledgements} M.A.P.\ acknowledges support from NWO grant 613.001.552, ``Large Deviations and Gradient Flows: Beyond Equilibrium''. R.R.\ and G.S.\ acknowledge support from the MIUR - PRIN project 2017TEXA3H ``Gradient flows, Optimal Transport and Metric Measure Structures''. O.T.\ acknowledges support from NWO Vidi grant 016.Vidi.189.102, ``Dynamical-Variational Transport Costs and Application to Variational Evolutions''. Finally, the authors thank Jasper Hoeksema for insightful and valuable comments during the preparation of this manuscript. \section{Preliminary results} \label{ss:3.1} \subsection{Measure theoretic preliminaries} Let $(Y,\mathfrak B)$ be a measurable space. When $Y$ is endowed with \color{black} a (metrizable and separable) topology $\tau_Y$ we will often assume that $\mathfrak B$ coincides with the Borel $\sigma$-algebra $\mathfrak B(Y,\tau_Y)$ induced by $\tau_Y$. We recall that $(Y,\mathfrak B)$ is called a \emph{standard Borel space} if it is isomorphic (as a measurable space) to a Borel subset of a complete and separable metric space; equivalently, one can find a Polish topology $\tau_Y$ \color{black} on $Y$ such that $\mathfrak B=\mathfrak B(Y,\tau_Y)$. \par We will denote by ${\mathcal M}(Y;\mathbb{R}^m)$ the space of $\sigma$-additive measures $\mu: \mathfrak B \to \mathbb{R}^m$ of \emph{finite} total variation $\|\mu\|_{TV}: =|\mu|(Y)<{+\infty}$, where for every $B\in\mathfrak B$ \[ |\mu|(B): = \sup \left\{ \sum_{i=0}^{+\infty} |\mu(B_i)|\, : \ B_i \in \mathfrak B,\, \ B_i \text{ pairwise disjoint}, \ B = \bigcup_{i=0}^{+\infty} B_i \right\}. \] The set function $|\mu|: \mathfrak B \to [0,{+\infty})$ is a positive finite measure on $\mathfrak B$ \cite[Thm.\ 1.6]{AmFuPa05FBVF} and $({\mathcal M}(Y;\mathbb{R}^m),\|\cdot\|_{TV})$ is a Banach space. In the case $m=1$, we will simply write ${\mathcal M}(Y)$, and we shall denote the space of \emph{positive} finite measures on $\mathfrak B$ by ${\mathcal M}^+(Y)$. For $m>1$, we will identify any element $\mu \in {\mathcal M}(Y;\mathbb{R}^m)$ with a vector $(\mu^1,\ldots,\mu^m)$, with $\mu^i \in {\mathcal M}(Y)$ for all $i=1,\ldots, m$.
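For orientation we recall an elementary special case (not needed in the sequel): for $m=1$ the Hahn--Jordan decomposition writes $\mu=\mu^+-\mu^-$ with mutually singular $\mu^\pm\in {\mathcal M}^+(Y)$, and then
\[
|\mu| = \mu^+ + \mu^-, \qquad \|\mu\|_{TV} = \mu^+(Y) + \mu^-(Y);
\]
in particular, for $\mu = a\delta_x - b\delta_y$ with $a,b>0$ and $x\neq y$ one has $\|\mu\|_{TV}=a+b$.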
If $\varphi =(\varphi^1,\ldots,\varphi^m)\in \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$, the set of bounded $\mathbb{R}^m$-valued $\mathfrak B$-measurable maps, the duality between $\mu \in {\mathcal M}(Y;\mathbb{R}^m)$ and $\varphi$ can be expressed by \normalcolor \[ \langle\mu,\varphi\rangle : = \int_{Y} \varphi \cdot \mu (\mathrm{d} x) = \sum_{i=1}^m \int_Y \varphi^i(x) \mu^i(\mathrm{d} x). \] For every $\mu\in {\mathcal M}(Y;\mathbb{R}^m)$ and $B\in \mathfrak B$ we will denote by $\mu\mres B$ the restriction of $\mu$ to $B$, i.e.\ $\mu\mres B(A):=\mu(A\cap B)$ for every $A\in \mathfrak B$. Let $(X,\mathfrak A)$ be another measurable space and let ${\mathsf p}:X\to Y$ be a measurable map. For every $\mu\in {\mathcal M}(X;\mathbb{R}^m)$ we will denote by ${\mathsf p}_\sharp\mu$ the push-forward measure defined by \begin{equation} \label{eq:82} {\mathsf p}_\sharp\mu(B):=\mu({\mathsf p}^{-1}(B))\quad\text{for every }B\in \mathfrak B. \end{equation} For every pair $\mu\in {\mathcal M}(Y;\mathbb{R}^m)$ and $\gamma\in {\mathcal M}^+(Y)$ there exist a unique (up to modification on a $\gamma$-negligible set) $\gamma$-integrable map $\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}: Y\to\mathbb{R}^m$, a $\gamma$-negligible set $N\in \mathfrak B$ and a unique measure $\mu^\perp\in {\mathcal M}(Y;\mathbb{R}^m)$ yielding the \emph{Lebesgue decomposition} \begin{equation} \label{eq:Leb} \begin{gathered} \mu=\mu^a+\mu^\perp,\quad \mu^a=\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\,\gamma= \mu\mres(Y\setminus N),\quad \mu^\perp=\mu\mres N,\quad \gamma(N)=0\\ |\mu^\perp|\perp \gamma,\quad |\mu|(Y)=\int_Y \left|\frac{\mathrm{d} \mu}{\mathrm{d}\gamma}\right|\,\mathrm{d}\gamma+|\mu^\perp|(Y). \end{gathered} \end{equation} \subsection{Convergence of measures} \label{subsub:convergence-measures} Besides the topology of convergence in total variation (induced by the norm $\|\cdot\|_{TV}$), we will also consider \color{ddcyan} the topology of \emph{setwise convergence}, i.e.~the coarsest topology on ${\mathcal M}(Y;\mathbb{R}^m)$ making all the functions \begin{displaymath} \mu\mapsto \mu(B),\quad B\in \mathfrak B, \end{displaymath} continuous. \color{black} For a sequence $(\mu_n)_{n\in\mathbb{N}}$ and a candidate limit $\mu$ in ${\mathcal M}(Y;\mathbb{R}^m)$ we have the following equivalent characterizations of the corresponding convergence \cite[\S 4.7(v)]{Bogachev07}: \begin{enumerate} \item Setwise convergence: \begin{equation} \label{eq:71} \lim_{n\to{+\infty}}\mu_n(B)=\mu(B)\qquad \text{for every set $B\in \mathfrak B$}. \end{equation} \item Convergence in duality with $\mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$: \begin{equation} \label{eq:70} \lim_{n\to{+\infty}}\langle \mu_n,\varphi\rangle= \langle \mu,\varphi\rangle \qquad \text{for every $\varphi\in \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$}. \end{equation} \item Weak topology of the Banach space: the sequence $\mu_n$ converges to $\mu$ in the weak topology of the Banach space $({\mathcal M}(Y;\mathbb{R}^m);\|\cdot\|_{TV})$.
\item Weak $L^1$-convergence of the densities: there exists a common dominating measure $\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ such that $\mu_n\ll\gamma$, $\mu\ll\gamma$ and \begin{equation} \label{eq:72} \frac{\mathrm{d}\mu_n}{\mathrm{d}\gamma}\rightharpoonup \frac{\mathrm{d}\mu}{\mathrm{d}\gamma} \quad\text{weakly in }L^1(Y,\gamma;\mathbb{R}^m). \end{equation} \item Alternative form of weak $L^1$-convergence: \eqref{eq:72} holds \emph{for every} common dominating measure $\gamma$. \end{enumerate} We will refer to \emph{setwise convergence} for sequences satisfying one of the equivalent properties above. The above topologies also share the same notion of compact subsets, as stated in the following useful theorem, cf.\ \cite[Theorem 4.7.25]{Bogachev07}, where we shall denote by $\sigma({\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m) ; \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m) )$ the weak topology on ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m)$ induced by the duality with $\mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$. \color{black} \begin{theorem} \label{thm:equivalence-weak-compactness} For every set $\emptyset\neq M\subset {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m)$ the following properties are equivalent: \begin{enumerate} \item $M$ has a compact closure in the topology of setwise convergence. \item $M$ has a compact closure in the topology $\sigma({\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m) ; \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m) )$. \item $M$ has a compact closure in the weak topology of $({\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m);\|\cdot\|_{TV})$. \item Every sequence in $M$ has a subsequence converging \color{black} on every set of $\mathfrak B$. \item There exists a measure $\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ such that \begin{equation} \label{eq:73} \forall\,\varepsilon>0\ \exists\,\delta>0: \quad B\in \mathfrak B,\ \gamma(B)\le \delta\quad \Rightarrow\quad \sup_{\mu\in M}\mu(B)\le \eps. \end{equation} \item There exists a measure $\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ such that $\mu\ll\gamma$ for every $\mu\in M$ and the set $\{\mathrm{d}\mu/\mathrm{d}\gamma:\mu\in M\}$ has compact closure in the weak topology of $L^1(Y,\gamma;\mathbb{R}^m)$. \end{enumerate} \end{theorem} We also recall a useful characterization of weak compactness in $L^1$. \begin{theorem} \label{thm:L1-weak-compactness} Let $\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ and $\emptyset\neq F\subset L^1(Y,\gamma;\mathbb{R}^m)$. The following properties are equivalent: \begin{enumerate} \item $F$ has compact closure in the weak topology of $L^1(Y,\gamma;\mathbb{R}^m)$; \item $F$ is bounded in $L^1(Y,\gamma;\mathbb{R}^m)$ and \color{ddcyan} equi-absolutely continuous, i.e.~\color{black} \begin{equation} \label{eq:73bis} \forall\,\varepsilon>0\ \exists\,\delta>0: \quad B\in \mathfrak B,\ \gamma(B)\le \delta\quad \Rightarrow\quad \sup_{f\in F}\int_B |f|\,\mathrm{d}\gamma\le \eps. \end{equation} \label{cond:setwise-compactness-superlinear} \item There exists a convex and superlinear function $\beta:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:74} \sup_{f\in F}\int_Y \beta(|f|)\,\mathrm{d}\gamma<{+\infty}. 
\end{equation} \end{enumerate} \end{theorem} The name `equi-absolute continuity' above derives from the interpretation that the {measure} $f\gamma$ is absolutely continuous with respect to $\gamma$ in a uniform manner; `equi-absolute continuity' is a shortening of Bogachev's terminology `$F$ has uniformly absolutely continuous integrals'~\cite[Def.~4.5.2]{Bogachev07}. A fourth equivalent property is equi-integrability with respect to $\gamma$~\cite[Th.~4.5.3]{Bogachev07}, a fact that we will not use. When $Y$ is endowed with a (separable and metrizable) topology $\tau_Y$, \color{black} we will use the symbol $\mathrm{C}_{\mathrm{b}}(Y;\mathbb{R}^m) $ to denote the space of bounded $\mathbb{R}^m$-valued continuous functions on $(Y,\tau_Y)$. \color{black} We will consider the corresponding weak topology $\sigma({\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m);\mathrm{C}_{\mathrm{b}}(Y;\mathbb{R}^m))$ induced by the duality with $\mathrm{C}_{\mathrm{b}}(Y;\mathbb{R}^m)$. Prokhorov's Theorem yields that a subset $M\subset {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m)$ has compact closure in this topology if it is bounded in the total variation norm and it is equally tight, i.e. \begin{equation} \label{eq:47} \forall\varepsilon>0\ \exists\, K\text{ compact in $Y$}: \quad \sup_{\mu\in M}|\mu|(Y\setminus K)\le \varepsilon. \end{equation} It is obvious that for a sequence $(\mu_n)_{n\in \mathbb{N}}$ convergence in total variation implies setwise convergence (or in duality with bounded measurable functions), and setwise convergence implies weak convergence in duality with bounded continuous functions. \subsection{Convex functionals and concave transformations of measures} \label{subsub:convex-functionals} We will use the following construction several times. Let $\uppsi:\mathbb{R}^m\to [0,{+\infty}]$ be convex and lower semicontinuous and let us denote by $\uppsi^\infty:\mathbb{R}^m\to [0,{+\infty}]$ its recession function \begin{equation} \label{eq:3} \uppsi^\infty(z):=\lim_{t\to{+\infty}}\frac{\uppsi(tz)}t=\sup_{t>0}\frac{\uppsi(tz)-\uppsi(0)}t, \end{equation} which is a convex, lower semicontinuous, and positively $1$-homogeneous map with $\uppsi^\infty(0)=0$. \color{black} We define the functional $\mathscr F_\uppsi:{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m) \times {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)\mapsto [0,{+\infty}]$ by \begin{equation} \label{def:F-F} \mathscr F_\uppsi(\mu|\nu) := \int_Y \uppsi \Bigl(\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\Bigr)\,\mathrm{d}\nu+ \int_Y \uppsi^\infty\Bigl(\frac{\mathrm{d} \mu^\perp}{\mathrm{d} |\mu^\perp|}\Bigr) \, \mathrm{d} |\mu^\perp|,\qquad \text{for }\mu=\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\nu+\mu^\perp. \end{equation} Note that when $\uppsi$ is superlinear then $\uppsi^\infty(x)={+\infty}$ in $\mathbb{R}^m\setminus\{0\}$. Equivalently, \begin{equation} \label{eq:5} \text{$\uppsi$ superlinear,}\quad \mathscr F_\uppsi(\mu|\nu)<\infty\quad\Rightarrow\quad \mu\ll\nu,\quad \mathscr F_\uppsi(\mu|\nu)= \int_Y \uppsi \Bigl(\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\Bigr)\,\mathrm{d}\nu. \end{equation} We collect in the next Lemma a list of useful properties. 
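Before doing so, we illustrate the construction \eqref{def:F-F} with two elementary examples; they are only meant as orientation and are not needed in the sequel. For the positively $1$-homogeneous choice $\uppsi(z)=|z|$ we have $\uppsi^\infty=\uppsi$, and \eqref{def:F-F} reduces to the total variation: for every $\nu\in{\mathcal M}^+(Y)$,
\[
\mathscr F_{|\cdot|}(\mu|\nu)=\int_Y \Bigl|\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\Bigr|\,\mathrm{d}\nu+|\mu^\perp|(Y)=\|\mu\|_{TV}.
\]
For the superlinear Boltzmann function $\upphi(s)=s\log s-s+1$ (extended by ${+\infty}$ to $s<0$; cf.\ \eqref{logarithmic-entropy} ahead) we have $\upphi^\infty(0)=0$ and $\upphi^\infty(s)={+\infty}$ for $s>0$, so that $\mathscr F_{\upphi}(\mu|\nu)<{+\infty}$ forces $\mu\ll\nu$, in accordance with \eqref{eq:5}; when $\mu,\nu\in{\mathcal M}^+(Y)$ have the same total mass, $\mathscr F_{\upphi}(\mu|\nu)=\int_Y \frac{\mathrm{d}\mu}{\mathrm{d}\nu}\log\bigl(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\bigr)\,\mathrm{d}\nu$ is the usual relative entropy.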
\begin{lemma} \label{l:lsc-general}\ \begin{enumerate} \item\label{l:lsc-general:i3} When $\uppsi$ is also positively $1$-homogeneous, then $\uppsi\equiv \uppsi^\infty$, $\mathscr F_\uppsi(\cdot|\nu)$ is independent of $\nu$ and will also be denoted by $\mathscr F_\uppsi(\cdot)$: it satisfies \begin{equation} \label{eq:78} \mathscr F_\uppsi(\mu) \color{black} =\int_Y \uppsi\left(\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\right) \,\mathrm{d}\gamma\quad \text{for every }\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)\text{ such that } \mu\ll\gamma. \end{equation} \item If $\hat \uppsi:\mathbb{R}^{m+1}\to[0,\infty]$ denotes the 1-homogeneous, convex, perspective function associated with \color{black} $\uppsi$ by \begin{equation} \label{eq:76} \hat \uppsi(z,t):= \begin{cases} \uppsi(z/t)t&\text{if }t>0,\\ \uppsi^\infty(z)&\text{if }t=0,\\ {+\infty}&\text{if }t<0, \end{cases} \end{equation} then \begin{equation} \label{eq:77} \mathscr F_\uppsi(\mu|\nu)=\mathscr F_{\hat \uppsi}(\mu,\nu)\quad \text{for every } (\mu,\nu) \color{black} \in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m)\times {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y) \end{equation} with $\mathscr F_{\hat \uppsi}$ defined as in \eqref{eq:78}. \color{black} \item In particular, if $\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ is a common dominating measure such that $\mu=u\gamma$, $\nu=v\gamma$, and $Y':=\{x\in Y:v(x)>0\}$ we also have \begin{equation} \label{eq:67} \mathscr F_\uppsi(\mu|\nu)= \int_Y \hat\uppsi(u,v)\,\mathrm{d}\gamma=\int_{Y'} \uppsi(u/v)v\,\mathrm{d}\gamma+ \int_{Y\setminus Y'} \uppsi^\infty(u)\,\mathrm{d}\gamma. \end{equation} \item The functional $\mathscr F_\uppsi$ is convex; if $\uppsi$ is also positively $1$-homogeneous then \begin{equation} \label{eq:81} \begin{aligned} \mathscr F_\uppsi(\mu+\mu')&\le \mathscr F_\uppsi(\mu)+\mathscr F_\uppsi(\mu')\\ \mathscr F_\uppsi(\mu+\mu')&= \mathscr F_\uppsi(\mu)+\mathscr F_\uppsi(\mu')\quad \text{if }\mu\perp\mu'. \end{aligned} \end{equation} \item Jensen's inequality: \begin{equation} \label{eq:79} \hat\uppsi(\mu^a(B),\nu(B)) +\uppsi^\infty(\mu^\perp(B)) \color{black} \le \mathscr F_\uppsi(\mu\mres B\color{ddcyan} |\color{black}\nu\mres B)\quad \text{for every }B\in \mathfrak B \end{equation} (with $\mu = \mu^a + \mu^\perp $ the Lebesgue decomposition of $\mu$ w.r.t.\ $\nu$). \color{black} \item If $\uppsi(0)=0$ then for every $\mu\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y,\mathbb{R}^m)$, $\nu,\nu'\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ \begin{equation} \label{eq:80} \nu\le \nu'\quad\Rightarrow\quad \mathscr F_\uppsi(\mu|\nu)\ge \mathscr F_\uppsi(\mu|\nu'). \end{equation} \item \label{l:lsc-general:i1} $\mathscr F_\uppsi$ is \color{ddcyan} sequentially \color{black} lower semicontinuous in ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(Y;\mathbb{R}^m) \times {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(Y)$ with respect to the topology of setwise convergence. \item \label{l:lsc-general:i2} If $\mathfrak B$ is the Borel family induced by a \color{ddcyan} Polish topology $\tau_Y$ on $Y$, \color{black} $\mathscr F_\uppsi$ is lower semicontinuous with respect to ~weak convergence (in duality with continuous bounded functions). 
\color{black} \end{enumerate} \end{lemma} \begin{proof} \color{ddcyan} The above properties are mostly well known; we give a quick sketch of the proofs of the various claims for the reader's convenience. \noindent \textit{(1)} Let us set $u:=\mathrm{d} \mu/\mathrm{d}\nu$, $u^\perp:=\mathrm{d}\mu^\perp/\mathrm{d}|\mu|$ and let $N\in\mathfrak B$ be a $\nu$-negligible set such that $\mu^\perp=\mu\mres N$. We also set $N':=\{y\in Y\setminus N:|u(y)|> 0\}$; notice that $\nu\mres N'\ll |\mu|$. If $v$ is the Lebesgue density of $|\mu|$ w.r.t.~$\gamma$, since $\uppsi=\uppsi^\infty$ is positively $1$-homogeneous and $\uppsi(0)=0$, we have \begin{align*} \mathscr F_\uppsi(\mu|\nu) &=\int_{N'}\uppsi(u)\,\mathrm{d} \nu+ \int_N \uppsi(u^\perp)\,\mathrm{d}|\mu^\perp| =\int_{N'}\uppsi(u)/|u|\,\mathrm{d} |\mu|+ \int_N \uppsi(u^\perp)\,\mathrm{d}|\mu^\perp| \\&=\int_{N'}v\uppsi(u)/|u|\,\mathrm{d} \gamma+ \int_N v\uppsi(u^\perp)\,\mathrm{d}\gamma= \int_{N'}\uppsi(uv/|u|)\,\mathrm{d} \gamma+ \int_N \uppsi(u^\perp v)\,\mathrm{d}\gamma \\&= \int_{N'}\uppsi(\mathrm{d}\mu/\mathrm{d}\gamma)\,\mathrm{d} \gamma+ \int_N \uppsi(\mathrm{d} \mu/\mathrm{d}\gamma)\,\mathrm{d}\gamma= \int_Y \uppsi(\mathrm{d} \mu/\mathrm{d}\gamma)\,\mathrm{d}\gamma=\mathscr F_\uppsi(\mu|\gamma), \end{align*} where we also used the fact that $|\mu|(Y\setminus (N\cup N'))=0$, so that $\mathrm{d} \mu/\mathrm{d}\gamma=0$ $\gamma$-a.e.~on $Y\setminus (N\cup N').$ \noindent\textit{(2)} Since $\hat\uppsi$ is $1$-homogeneous, we can apply the previous claim and evaluate $\mathscr F_{\hat\uppsi}(\mu,\nu)$ by choosing the dominating measure $\gamma:=\nu+\mu^\perp$. \noindent\textit{(3)} It is an immediate consequence of the first two claims. \noindent\textit{(4)} By \eqref{eq:77} it is sufficient to consider the $1$-homogeneous case. The convexity then follows by the convexity of $\uppsi$ and by choosing a common dominating measure to represent the integrals. Relations \color{black} \eqref{eq:81} are also immediate. \noindent\textit{(5)} Using \eqref{eq:77} and selecting a dominating measure $\gamma$ with $\gamma(B)=1$, Jensen's inequality applied to the convex functional $\hat\uppsi$ yields \begin{displaymath} \hat\uppsi(\mu(B),\nu(B))= \hat\uppsi\Big(\int_B \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\,\mathrm{d}\gamma, \int_B \frac{\mathrm{d}\nu}{\mathrm{d}\gamma}\,\mathrm{d}\gamma\Big)\le \int_B \hat\uppsi\Big(\frac{\mathrm{d}\mu}{\mathrm{d}\gamma},\frac{\mathrm{d}\nu}{\mathrm{d}\gamma}\Big)\,\mathrm{d}\gamma =\mathscr F_{\hat\uppsi}(\mu\mres B,\nu\mres B). \end{displaymath} Applying now the above inequality to the mutually singular couples $(\mu^a,\nu)$ and $(\mu^\perp,0)$ and using the second identity of \eqref{eq:81} we obtain \eqref{eq:79}. \noindent\textit{(6)} We apply \eqref{eq:77} and the first identity of \eqref{eq:67}, observing that if $\uppsi(0)=0$ then $\hat\uppsi$ is decreasing with respect to its second argument. \noindent\textit{(7)} By \eqref{eq:77} it is not restrictive to assume that $\uppsi$ is $1$-homogeneous. If $(\mu_n)_n$ is a sequence setwise converging to $\mu$ in ${\mathcal M}(Y;\mathbb{R}^m)$ we can find a common dominating measure $\gamma$ such that \eqref{eq:72} holds. The claimed property is then reduced to the weak lower semicontinuity of the functional \begin{equation} \label{eq:121} u\mapsto \int_Y \uppsi(u)\,\mathrm{d}\gamma \end{equation} in $L^1(Y,\gamma;\mathbb{R}^m)$.
Since the functional of \eqref{eq:121} is convex and strongly lower semicontinuous in $L^1(Y,\gamma;\mathbb{R}^m)$ (thanks to Fatou's Lemma), it is weakly lower semicontinuous as well. \noindent\textit{(8)} It follows by the same argument as in \cite[Theorem 2.34]{AmFuPa05FBVF}, by using a suitable dual formulation which holds also in Polish spaces, where all the finite Borel measures are Radon (see e.g.~\cite[Theorem 2.7]{LMS18} for positive measures). \end{proof} \subsubsection*{Concave transformation of vector measures} \label{subsub:concave-transformation} Let us set $\mathbb{R}_+:=[0,{+\infty})$, $\mathbb{R}^m_+:=(\mathbb{R}_+)^m$, and let $\upalpha:\mathbb{R}^m_+\to\mathbb{R}_+$ be a continuous and concave function. It is obvious that $\upalpha$ is non-decreasing with respect to each variable. As for \eqref{eq:3}, the recession function $\upalpha^\infty$ is defined by \begin{equation} \label{eq:1} \upalpha^\infty(z):=\lim_{t\to{+\infty}}\frac{\upalpha(tz)}t=\inf_{t>0}\frac{\upalpha(tz)-\upalpha(0)}t,\quad z\in \mathbb{R}^m_+. \end{equation} We define the corresponding map $\upalpha:{\mathcal M}(Y;\mathbb{R}^m_+)\times{\mathcal M}^+(Y)\to{\mathcal M}^+(Y)$ by \begin{equation} \label{eq:6} \upalpha[\mu|\gamma]:= \upalpha\Bigl(\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\Bigr)\gamma+ \upalpha^\infty\Bigl(\frac{\mathrm{d}\mu}{\mathrm{d} |\mu^\perp|}\Bigr)|\mu^\perp|\quad \mu\in {\mathcal M}(Y;\mathbb{R}^m_+),\ \gamma\in {\mathcal M}^+(Y), \end{equation} where as usual $\mu=\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\gamma+\mu^\perp$ is the Lebesgue decomposition of $\mu$ with respect to~$\gamma$; in what follows, we will use the short-hand $\mu_\gamma := \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\gamma$. We also mention in advance that, for shorter notation, we will write $\upalpha[\mu_1,\mu_2|\gamma]$ in place of $\upalpha[(\mu_1,\mu_2)|\gamma]$. \color{black} As for $\mathscr F$, it is not difficult to check that $\upalpha[\mu|\gamma]$ is independent of $\gamma$ if $\upalpha$ is positively $1$-homogeneous (and thus coincides with $\upalpha^\infty$). If we define the perspective function $\hat \upalpha:\mathbb{R}_+^{m+1}\to\mathbb{R}_+$ by \begin{equation} \label{eq:83} \hat \upalpha(z,t):= \begin{cases} \upalpha(z/t)t&\text{if }t>0,\\ \upalpha^\infty(z)&\text{if }t=0 \end{cases} \end{equation} we also get $\upalpha[\mu|\gamma]=\hat\upalpha(\mu,\gamma)$. We denote by $\upalpha_*:\mathbb{R}^m_+\to[-\infty,0]$ the upper semicontinuous concave conjugate of $\upalpha$, defined by \begin{equation} \label{eq:4} \upalpha_*(y):=\inf_{x\in \mathbb{R}^m_+} \left( y\cdot x-\upalpha(x)\right),\quad D(\upalpha_*):=\big\{y\in \mathbb{R}^m_+:\upalpha_*(y)>-\infty\big\}. \end{equation} The function $\upalpha_*$ provides simple affine upper bounds for \color{black} $\upalpha$ \begin{equation} \label{eq:8} \upalpha(x)\le x\cdot y-\upalpha_*(y)\quad\text{for every }y\in D(\upalpha_*) \end{equation} and Fenchel duality yields \begin{equation} \label{eq:7} \upalpha(x)=\inf_{y\in \mathbb{R}^m_+} \left( y\cdot x-\upalpha_*(y) \right)= \inf_{y\in D(\upalpha_*)}\left( y\cdot x-\upalpha_*(y) \right). \end{equation} We will also use that \begin{equation} \label{form-4-alpha-infty} \upalpha^\infty(z) = \inf_{y\in D(\upalpha_*)} y \cdot z\,.
\end{equation} Indeed, on the one hand for every $y \in D(\upalpha_*)$ and $t>0$ we have that \[ \upalpha^\infty(z) \leq \frac1t \left( \upalpha(tz) - \upalpha(0)\right) \leq \frac1t \left( y \cdot (tz) -\upalpha(0) - \upalpha_*(y) \right); \] by the arbitrariness of $t>0$, we conclude that $\upalpha^\infty(z) \leq y \cdot z$ for every $y \in D(\upalpha_*)$. On the other hand, by \eqref{eq:7} we have \[ \begin{aligned} \upalpha^\infty(z) = \inf_{t>0}\frac{\upalpha(tz)-\upalpha(0)}t & = \inf_{t>0} \inf_{y\in D(\upalpha_*)} \frac{y \cdot (tz) -\upalpha_*(y) - \upalpha(0)}t \\ & = \inf_{y\in D(\upalpha_*)} \left( y \cdot z +\inf_{t>0} \frac{-\upalpha_*(y) - \upalpha(0)}t \right) = \inf_{y\in D(\upalpha_*)} y \cdot z, \end{aligned} \] where we have used that $-\upalpha_*(y) - \upalpha(0) \geq 0$ since $\upalpha(0) = \inf_{y\in D(\upalpha_*) } ({-}\upalpha_*(y))$. \color{black} \par For every Borel set $B\subset Y$, Jensen's inequality yields (recall the notation $\mu_\gamma = \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\gamma$) \color{black} \begin{equation} \label{eq:9} \begin{aligned} \upalpha[\mu|\gamma](B)&\le \upalpha\Bigl(\frac{\mu_\gamma(B)}{\gamma(B)}\Bigr)\gamma(B)+ \upalpha^\infty(\mu^\perp(B))\\ \upalpha[\mu|\gamma](B)&\le\upalpha(\mu(B))\quad\text{if }\upalpha=\upalpha^\infty \text{ is $1$-homogeneous.} \end{aligned} \end{equation} In fact, for every $y,y'\in D(\upalpha_*)$, \begin{align*} \upalpha[\mu|\gamma](B) &=\int_B \upalpha[\mu|\gamma]\le \int_B \Bigl(y\cdot \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}-\upalpha_*(y)\Bigr)\,\mathrm{d}\gamma+ \int_B \Bigl(y'\cdot \frac{\mathrm{d}\mu}{\mathrm{d} |\mu^\perp|}\Bigr)\,\mathrm{d}\,|\mu^\perp| \\&=y\cdot \mu_\gamma(B)-\upalpha_*(y)\gamma(B) +y'\cdot \mu^\perp(B). \end{align*} Taking the infimum with respect to~$y$ and $y'$, and recalling \eqref{eq:7} and \eqref{form-4-alpha-infty}, \color{black} we find \eqref{eq:9}. Choosing $y=y'$ in the previous formula we also obtain the linear upper bound \begin{equation} \label{eq:31} \upalpha[\mu|\gamma]\le y\cdot\mu-\upalpha_*(y)\gamma \quad\text{for every }y\in D(\upalpha_*). \end{equation} \subsection{Disintegration and kernels} \label{subsub:kernels} Let $(X,\mathfrak A)$ and $(Y,\mathfrak B)$ be measurable spaces and let $\big(\kappa(x,\cdot)\big)_{x\in X}$ be an $\mathfrak A$-measurable family of measures in ${\mathcal M}^+(Y)$, i.e. \begin{equation} \label{eq:17} \text{for every $B\in \mathfrak B$,}\quad x\mapsto \kappa(x,B)\ \text{is $\mathfrak A$-measurable}. \end{equation} We will set \begin{equation} \label{eq:155} \kappa_Y(x):=\kappa(x,Y),\quad \|\kappa_Y\|_\infty \color{black} :=\sup_{x\in X} \kappa(x,Y), \color{black} \end{equation} and we say that $\kappa$ is a bounded kernel if $\|\kappa_Y\|_\infty$ is finite. If $\gamma\in {\mathcal M}^+(X)$ and \begin{equation} \label{eq:89} \text{$\kappa_Y$ is $\gamma$-integrable, i.e.}\quad \int_X \kappa(x,Y)\,\gamma(\mathrm{d} x)<{+\infty}, \end{equation} then Fubini's Theorem \cite[II, 14]{Dellacherie-Meyer78} shows that there exists a unique measure $\kernel\kappa\gamma(\mathrm{d} x,\mathrm{d} y)= \gamma(\mathrm{d} x)\kappa(x,\mathrm{d} y)$ on $(X\times Y,\mathfrak A\otimes \mathfrak B)$ such that \begin{equation} \label{eq:84} \kernel\kappa\gamma(A\times B)=\int_A \kappa(x,B)\,\gamma(\mathrm{d} x)\quad\text{for every }A\in \mathfrak A,\ B\in \mathfrak B.
\end{equation} If $X=Y$, the measure $\gamma$ is called \emph{invariant} if $\kernel\kappa\gamma$ has the same marginals; equivalently \begin{equation} \label{eq:156} {\mathsf y}_\sharp \kernel\kappa\gamma(\mathrm{d} y)= \int_X \kappa(x,\mathrm{d} y)\gamma(\mathrm{d} x)= \kappa_Y(y)\gamma(\mathrm{d} y), \end{equation} where $ {\mathsf y}:E \to V $ denotes the projection on the second component, cf.\ \eqref{eq:87} ahead. We say that \color{black} $\gamma$ is \emph{reversible} if it satisfies \emph{the detailed balance condition}, i.e.\ $\kernel\kappa\gamma$ is symmetric: ${\mathsf s}_\sharp \kernel\kappa\gamma=\kernel\kappa\gamma$. The concepts of invariance and detailed balance correspond to the analogous concepts in stochastic-process theory; see Section~\ref{ss:assumptions}. It is immediate to check that reversibility implies invariance. If $f:X\times Y\to \mathbb{R}$ is a positive or bounded measurable function, then \begin{equation} \label{eq:86} \text{the map $x\mapsto \kappa f(x):=\int_Y f(x,y)\kappa(x,\mathrm{d} y)$ is $\mathfrak A$-measurable} \end{equation} and \begin{equation} \label{eq:85} \int_{X\times Y}f(x,y)\,\kernel\kappa\gamma(\mathrm{d} x,\mathrm{d} y)= \int_X\Big(\int_Y f(x,y)\,\kappa(x,\mathrm{d} y)\Big)\gamma(\mathrm{d} x). \end{equation} Conversely, if $X,Y$ are standard Borel spaces, $\boldsymbol\kappa\in {\mathcal M}^+(X\times Y)$ (with the product $\sigma$-algebra) and the first marginal ${\mathsf p}^X_\sharp \boldsymbol\kappa $ of $\boldsymbol\kappa$ is absolutely continuous with respect to~$\gamma\in {\mathcal M}^+(X)$, then we may apply the disintegration Theorem \cite[Corollary 10.4.15]{Bogachev07} to find a $\gamma$-integrable kernel $(\kappa(x,\cdot))_{x\in X}$ such that $\boldsymbol\kappa=\kernel\kappa\gamma$. We will often apply the above construction in two cases: when $X=Y:=V$, the main domain of our evolution problems (see Assumptions \ref{ass:V-and-kappa} below), and when $X:=I=(a,b)$ is an interval of the real line endowed with the Lebesgue measure $\lambda$. In the latter case, we will denote by $t$ the variable in $I$ and by $(\mu_t)_{t\in I}$ a measurable family in ${\mathcal M}(Y)$ parametrized by $t\in I$: \begin{equation} \label{eq:99} \text{if }\int_I \mu_t(Y)\,\mathrm{d} t<{+\infty} \text{ then we set } \mu_\lambda\in {\mathcal M}(I\times Y),\quad \mu_\lambda(\mathrm{d} t,\mathrm{d} y)=\lambda(\mathrm{d} t)\mu_t(\mathrm{d} y). \end{equation} \begin{lemma} \label{le:kernel-convergence} If $\mu_n\in {\mathcal M}(X)$ is a sequence converging to $\mu$ setwise and $(\kappa(x,\cdot))_{x\in X}$ is a \emph{bounded} measurable kernel in ${\mathcal M}^+(Y)$, then $\kernel\kappa{\mu_n}\to \kernel\kappa\mu$ setwise in ${\mathcal M}(X\times Y,\mathfrak A\otimes\mathfrak B)$. If $X,Y$ are Polish spaces and $\kappa$ also satisfies the weak Feller property, i.e.
\begin{equation} \label{eq:Feller} x\mapsto \kappa(x,\cdot)\quad\text{is weakly continuous in }{\mathcal M}^+(Y), \end{equation} (where `weak' means in duality with continuous bounded functions), \color{black} then for every weakly converging sequence $\mu_n\to\mu$ in ${\mathcal M}(X)$ we have $\kernel\kappa{\mu_n}\to \kernel\kappa\mu$ weakly as well. \end{lemma} \begin{proof} If $f:X\times Y\to \mathbb{R}$ is a bounded $\mathfrak A\otimes\mathfrak B$-measurable map, then by \eqref{eq:86} the map $\kappa f$ is also bounded and $\mathfrak A$-measurable, so that \begin{displaymath} \lim_{n\to{+\infty}}\int_{X\times Y} f\,\mathrm{d} \kernel\kappa{\mu_n}= \lim_{n\to{+\infty}}\int_{X} \kappa f\,\mathrm{d} \mu_n= \int_X \kappa f\,\mathrm{d} \mu=\int_{X\times Y} f\,\mathrm{d} \kernel\kappa\mu, \end{displaymath} showing the setwise convergence. The other statement follows by a similar argument. \end{proof} \normalcolor \section{Jump processes, large deviations, and their generalized gradient structures} \label{s:assumptions} \subsection{The systems of this paper} \label{ss:assumptions} In the Introduction we described jump processes on~$V$ with kernel $\kappa$, and showed that the evolution equation $\partial_t\rho_t = Q^*\rho_t$ for the law $\rho_t$ of the process is a generalized gradient flow characterized by a driving functional $\mathscr E$ and a dissipation potential~$\mathscr R^*$. The mathematical setup of this paper is slightly different. Instead of starting with an evolution equation and proceeding to the generalized gradient system, our mathematical development starts with the generalized gradient system; we then consider the equation to be defined by this system. In this Section, therefore, we describe assumptions that we make on~$\mathscr E$ and $\mathscr R^*$ that will allow us to set up the rigorous functional framework for the evolution equation~\eqref{eq:GF-intro}. We first state the assumptions about the sets $V$ of `vertices' and $E:=V\times V$ of `edges'. `Edges' are identified with ordered pairs $(x,y)$ of vertices $x,y\in V$. We will denote by $\mathsf x,\mathsf y:E\to V$ and $\symmapn:E\to E$ the coordinate and the symmetry maps defined by \begin{equation} \mathsf x(x,y):=x,\quad \mathsf y(x,y):=y,\quad \symmapn(x,y): = (y,x)\quad \label{eq:87} \text{for every }x,y\in V. \end{equation} \begin{Assumptions}{$V\!\pi\kappa$} \label{ass:V-and-kappa} We assume that \begin{equation} \label{locally-compact} \begin{gathered} \text{$(V,\mathfrak B,\pi)$ is a standard Borel measure space, $\pi\in {\mathcal M}^+(V)$, } \end{gathered} \end{equation} \noindent $(\kappa(x,\cdot))_{x\in V}$ is a bounded kernel in ${\mathcal M}^+(V)$ (see \S \ref{subsub:kernels}), satisfying the \emph{detailed-balance condition} \begin{equation}\label{detailed-balance} \int_{A} \kappa(x,B)\,\pi(\mathrm{d} x) = \int_{B} \kappa(y,A)\,\pi(\mathrm{d} y) \qquad \text{for all } A,B\in \mathfrak B, \end{equation} and the uniform upper bound \begin{equation} \label{bound-unif-kappa} \|\kappa_V\|_\infty= \color{black} \sup_{x\in V} \,\kappa(x,V) <{+\infty}. \end{equation} \end{Assumptions} The measure $\pi\in{\mathcal M}^+(V)$ is often referred to as the \emph{invariant measure}, and it will be stationary under the evolution generated by the generalized gradient system.
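A minimal finite-state example, included only for orientation, may help to fix ideas: take $V=\{1,\dots,N\}$ with the discrete $\sigma$-algebra, $\pi=\sum_{i}\pi_i\delta_i$ with weights $\pi_i>0$, and $\kappa(i,\cdot)=\sum_{j}k_{ij}\delta_j$ with nonnegative rates $k_{ij}$. Then \eqref{bound-unif-kappa} reads $\max_{i}\sum_j k_{ij}<{+\infty}$ (automatic for finite $N$), and the detailed-balance condition \eqref{detailed-balance} reduces to the familiar pointwise identity
\[
\pi_i\,k_{ij}=\pi_j\,k_{ji}\qquad\text{for all }i,j\in V,
\]
i.e.\ to the symmetry of the measure $\pi(\mathrm{d} x)\kappa(x,\mathrm{d} y)=\sum_{i,j}\pi_i k_{ij}\,\delta_{(i,j)}$ on $E$ under the exchange of $i$ and $j$.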
By Fubini's Theorem (see \S\,\ref{subsub:kernels}) we also introduce the measure $\boldsymbol \teta$ \normalcolor on $E$ given by \begin{equation} \label{nu-pi} \boldsymbol \teta(\mathrm{d} x\,\mathrm{d} y) = \kernel\kappa\pi(\mathrm{d} x,\mathrm{d} y)=\pi(\mathrm{d} x)\kappa (x,\mathrm{d} y),\quad \boldsymbol \teta(A{\times}B) = \int_{A} \kappa(x,B)\, \pi(\mathrm{d} x) \,. \end{equation} Note that the invariance of the measure $\pi$ \color{black} and \normalcolor the detailed balance condition \eqref{detailed-balance} can be rephrased in terms of $\boldsymbol \teta$ as \begin{equation} \label{symmetry-nu-pi} \mathsf x_\sharp \boldsymbol \teta=\mathsf y_\sharp\boldsymbol \teta,\qquad\normalcolor \symmap \boldsymbol \teta = \boldsymbol \teta\,. \color{black} \end{equation} Conversely, if we choose a symmetric measure $\boldsymbol \teta\in {\mathcal M}^+(E)$ such that \begin{equation} \label{eq:88} \mathsf x_\sharp \boldsymbol \teta\ll\pi,\quad \frac{\mathrm{d} (\mathsf x_\sharp \boldsymbol \teta)}{\mathrm{d} \pi}\le \|\kappa_V\|_\infty \color{black} <{+\infty} \quad\text{$\pi$-a.e.} \end{equation} then the disintegration Theorem \cite[Corollary 10.4.15]{Bogachev07} shows the existence of a bounded measurable kernel $(\kappa(x,\cdot))_{x\in V}$ satisfying \eqref{detailed-balance} and \eqref{nu-pi}. \normalcolor We next turn to the driving functional, which is given by the construction in \color{black} \eqref{def:F-F} and \eqref{eq:5} for a superlinear density $\uppsi=\upphi$ and for the choice $\nu=\pi$. \begin{Assumptions}{$\mathscr E\upphi$} \label{ass:S} The driving functional $\mathscr E: {\mathcal M}^+(V) \to [0,{+\infty}]$ is of the form \begin{equation} \label{driving-energy} \mathscr E(\rho): = \mathscr F_{\upphi}(\rho|\pi)=\normalcolor \begin{cases} \displaystyle \int_V \upphi\Bigl(\frac{\mathrm{d} \rho}{\mathrm{d}\pi}\Bigr)\,\mathrm{d}\pi &\text{if } \rho \ll \pi, \\ {+\infty} &\text{otherwise,} \end{cases} \end{equation} with \begin{multline} \label{cond-phi} \upphi\in \mathrm{C}([0,{+\infty}))\cap\mathrm{C}^1((0,{+\infty})),\; \min \upphi= 0, \text{ and $\upphi$ is convex}\\ \text{with superlinear growth at infinity.} \end{multline} \end{Assumptions} \noindent Under these assumptions the functional $\mathscr E$ is lower semicontinuous on ${\mathcal M}^+(V)$ both with respect to the topology of setwise convergence and \color{black} with respect to any compatible weak topology (see Lemma~\ref{l:lsc-general}). A central example was already mentioned in the introduction, i.e.\ the Boltzmann-Shannon entropy function \begin{equation} \label{logarithmic-entropy} \upphi(s) = s\log s - s + 1, \qquad s\geq 0. \end{equation} Finally, we state our assumptions on the dissipation. \begin{Assumptions}{$\mathscr R^*\Psi\upalpha$} \label{ass:Psi} We assume that the dual dissipation density $\Psi^*$ satisfies \begin{equation} \left.\begin{gathered} \label{Psi1} \Psi^*: \mathbb{R} \to [0,{+\infty}) \text{ is convex, differentiable, even, with $\Psi^*(0)=0$, and} \\ \lim_{|\xi|\to\infty} \frac{\Psi^*(\xi)}{|\xi|} ={+\infty}\, .
\end{gathered}\quad\right\} \end{equation} The flux density map $\upalpha: [0,{+\infty}) \times [0,{+\infty}) \to [0,{+\infty})$, with $\upalpha\not\equiv0$, is continuous, concave, symmetric: \begin{equation} \label{alpha-symm} \upalpha(u_1,u_2) = \upalpha(u_2,u_1) \quad\text{ for all } u_1,\, u_2 \in [0,{+\infty}), \end{equation} and its recession function $\upalpha^\infty$ vanishes on the boundary of $\mathbb{R}_+^2$: \begin{equation} \label{alpha-0} \text{for every }(u_1,u_2)\in \mathbb{R}^2_+:\quad u_1u_2=0\quad\Longrightarrow\quad \upalpha^\infty(u_1,u_2) = 0. \end{equation} \end{Assumptions} Note that since $\upalpha$ is nonnegative, concave, and not trivially $0$, it cannot vanish in the interior of $\mathbb{R}^2_+$, i.e.~ \begin{equation} \label{eq:38} u_1u_2>0\quad\Rightarrow\quad \upalpha(u_1,u_2)>0. \end{equation} \normalcolor The examples of the cosh-type dissipation~\eqref{choice:cosh} and of the quadratic dissipation~\eqref{choice:quadratic} that we gave in the introduction both fit these assumptions; other examples are \[ \upalpha(u,v) = 1 \qquad\text{and}\qquad \upalpha(u,v) = u+v. \] In some cases we will use an additional property, namely that $\upalpha$ is positively $1$-homogeneous, i.e.\ $\upalpha(\lambda u_1,\lambda u_2) = \lambda \upalpha(u_1,u_2)$ for all $\lambda \geq0$. This $1$-homogeneity is automatically satisfied under the compatibility condition \color{black} \eqref{cond:heat-eq-2}, with the Boltzmann entropy function $\upphi(s) = s\log s - s + 1$. Concavity of $\upalpha$ is a natural assumption in view of the convexity of $\mathscr R$ \normalcolor (cf.\ \color{purple} Remark \ref{rem:alpha-concave} and \color{black} Lemma \ref{l:alt-char-R} ahead), while $1$-homogeneity will make the definition of $\mathscr R$ independent of the choice of a reference measure. It is interesting to observe that the concavity and symmetry conditions, which one naturally has to assume in order to ensure the aforementioned properties of $\mathscr R$, were singled out for the analogue of the function $\upalpha$ in the construction of the distance yielding the quadratic gradient structure of \cite{Maas11}. The choices for $\Psi^*$ above generate corresponding properties for the Legendre dual $\Psi$: \begin{lemma} \label{l:props:Psi} Under Assumption~\ref{ass:Psi}, the function $\Psi: \mathbb{R}\to\mathbb{R}$ is even and satisfies \begin{subequations} \label{props:Psi} \begin{gather} \label{basic-props-Psi} 0=\Psi(0) < \Psi(s) < {+\infty} \text{ for all }s\in \mathbb{R}\setminus \{0\}.\\ \Psi\text{ is strictly convex, \color{ddcyan} strictly increasing on $[0,{+\infty})$, \color{black} and superlinear.} \label{eq:propsPsi:Psi-strictly-convex} \end{gather} \end{subequations} \end{lemma} \begin{proof} The superlinearity of $\Psi^*$ implies that $\Psi(s)<{+\infty}$ for all $s\in \mathbb{R}$, and similarly the finiteness of $\Psi^*$ on $\mathbb{R}$ implies that $\Psi$ is superlinear. Since $\Psi^*$ is even, $\Psi$ is convex and even, and therefore $\Psi(s) \geq \Psi(0) = \sup_{\xi\in \mathbb{R}} [-\Psi^*(\xi)] = 0.$ Furthermore, since for all $p\in \mathbb{R}$, $\mathop{\rm argmin}_{s\in\mathbb{R}} (\Psi(s) -p s)= \partial \Psi^*(p)$ (see e.g.~\cite[Thm.\ 11.8]{Rockafellar-Wets98}) and $\Psi^*$ is differentiable at every $p$, we conclude that $\mathop{\rm argmin}_s (\Psi(s)-ps ) = \{(\Psi^*)'(p)\}$; therefore each point of the graph of $\Psi$ is an exposed point. It follows that $\Psi$ is strictly convex, and $\Psi(s)>0$ for all $s\not=0$.
\end{proof} As described in the introduction, we use $\Psi$, $\Psi^*$, and $\upalpha$ to define the dual pair of dissipation potentials $\mathscr R$ and~$\mathscr R^*$, which for a couple of measures $\rho=u\pi\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ and ${\boldsymbol j}\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E)$ are \color{black} formally given by \begin{equation}\label{eq:dissipation-pair} \mathscr R(\rho,{\boldsymbol j} ) := \frac12 \normalcolor \int_{E} \Psi\left(2 \frac{\mathrm{d} {\boldsymbol j} }{\mathrm{d} \boldsymbol\upnu_\rho}\right)\mathrm{d}\boldsymbol\upnu_\rho, \qquad \mathscr R^*(\rho,\xi) := \frac12\normalcolor\int_{E} \Psi^*(\xi) \, \mathrm{d}\boldsymbol\upnu_\rho, \end{equation} with \begin{equation} \label{def:nu_rho} \boldsymbol\upnu_\rho(\mathrm{d} x\,\mathrm{d} y) := \upalpha\big(u(x),u(y)\big)\,\boldsymbol \teta(\mathrm{d} x\,\mathrm{d} y) = \upalpha\big(u(x),u(y)\big)\,\pi(\mathrm{d} x)\kappa(x,\mathrm{d} y). \end{equation} This expression for the edge measure $\boldsymbol\upnu_\rho$ also is implicitly present in the structure built in \cite{Maas11,Erbar14}. \color{black} The above definitions are made rigorous in Definition~\ref{def:R-rigorous} and in \eqref{eq:27} below. The three sets of conditions above, Assumptions~\ref{ass:V-and-kappa}, \ref{ass:S}, and~\ref{ass:Psi}, are the main assumptions of this paper. Under these assumptions, the evolution equation~\eqref{eq:GF-intro} may be linear or nonlinear in $\rho$. The equation coincides with the Forward Kolmogorov equation~\eqref{eq:fokker-planck} if and only if condition~\eqref{cond:heat-eq-2} is satisfied, as shown below. \subsubsection*{Calculation for \eqref{cond:heat-eq-2}.} Let us call $\mathscr Q[\rho]$ the right-hand side of \eqref{eq:GF-intro} and let us compute \[ \langle \mathscr Q[\rho], \varphi \rangle =\bigl \langle -\odiv \Bigl[ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\xi\mathscr R^*\Bigl(\rho,- \relax \dnabla\upphi'\Bigl(\frac{\mathrm{d} \rho}{\mathrm{d}\pi}\Bigr)\Bigr)\Bigr], \varphi \bigr\rangle \] for every $ \varphi \in \mathrm{B}_{\mathrm b}(V) $ and $\rho \in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ with $\rho \ll \pi$. \normalcolor With $u = \frac{\mathrm{d} \rho}{\mathrm{d} \pi} $ we thus obtain \begin{align} \notag \langle \mathscr Q[\rho], \varphi \rangle&= \bigl\langle {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\xi\mathscr R^*\bigl(\rho,-\relax \dnabla\upphi'(u) \bigr),\dnabla \varphi \bigr \rangle \\ & = \label{almost-therepre} \frac12 \iint_{E} \big(\Psi^*\big)' \left( -\relax \dnabla\upphi'(u)(x,y) \right) \dnabla \varphi (x,y) \boldsymbol\upnu_\rho(\mathrm{d} x,\mathrm{d} y)\,. 
\end{align} Recalling the definitions~\eqref{def:nu_rho} of $\boldsymbol\upnu_\rho$ and~\eqref{eq:184} of $\rmF$, \eqref{almost-therepre} thus becomes \begin{align} \notag \langle \mathscr Q[\rho], \varphi \rangle&= \frac 12\iint_{E} \bigl(\Psi^*\bigr)' \bigl( \relax \upphi'(u(x)) - \upphi'(u(y)) \bigr) \,\dnabla \varphi(x,y) \,\upalpha(u(x),u(y)) \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \\&= \frac12 \iint_{E} \rmF(u(x),u(y)) \big(\varphi(x)-\varphi(y)\big)\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\label{eq:187} \\& \stackrel{(*)}{=} \color{black} \iint_{E} \rmF(u(x),u(y))\notag \varphi(x)\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)= \int_V \varphi(x) \Big(\int_V \rmF(u(x),u(y))\kappa(x,\mathrm{d} y)\Big)\pi(\mathrm{d} x) \end{align} where for $(*)$ \color{black} we used the symmetry of $\boldsymbol \teta$ (i.e.\ the detailed-balance condition). This calculation justifies \eqref{eq:180}. In the linear case of \eqref{eq:fokker-planck} it is immediate to see that \begin{align} \langle Q^*\rho,\varphi\rangle\notag = \langle \rho,Q\varphi\rangle &=\iint_{E} [\varphi(y)-\varphi(x)]\,\kappa(x,\mathrm{d} y)\rho(\mathrm{d} x ) \\&= \frac12 \iint_{E} \dnabla \varphi(x,y) \bigl[\kappa(x,\mathrm{d} y)\rho(\mathrm{d} x) - \kappa(y,\mathrm{d} x)\rho(\mathrm{d} y)\bigr] \notag \\\label{for-later} & =\frac12 \iint_{E} \dnabla \varphi(x,y) \bigl[u(x) -u(y)\bigr] \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y), \end{align} Comparing \eqref{for-later} and \eqref{eq:187} we obtain that $\rmF$ has to fulfill \color{black} \eqref{cond:heat-eq-2}. \normalcolor \subsection{Derivation of the cosh-structure from large deviations} \label{ss:ldp-derivation} We mentioned in the introduction that the choices \begin{equation} \label{choices:ldp} \upphi(s) = s\log s - s + 1,\qquad \Psi^*(\xi) = 4\bigl(\cosh(\xi/2)-1\bigr), \qquad\text{and}\qquad \upalpha(u,v) = \sqrt{uv} \end{equation} arise in the context of large deviations. In this section we describe this context. Throughout this section we work under Assumptions~\ref{ass:V-and-kappa}, \ref{ass:S}, and~\ref{ass:Psi}, and since we are interested in the choices above, we will also assume~\eqref{choices:ldp}, implying that \[ \boldsymbol\upnu_\rho(\mathrm{d} x\,\mathrm{d} y) = \sqrt{u(x)u(y)}\, \pi(\mathrm{d} x)\kappa(x,\mathrm{d} y), \qquad \text{if }\rho = u\pi \ll \pi. \] Consider a sequence of independent and identically distributed stochastic processes $X^i$, $i=1, 2, \dots$ on $V$, each described by the jump kernel $\kappa$, or equivalently by the generator $Q$ in~\eqref{eq:def:generator}. With probability one, a realization of each process has a countable number of jumps in the time interval $[0,{+\infty})$, and we write $t^i_k$ for the $k^{\mathrm{th}}$ jump time of $X^i$. We can assume that $X^i$ is a c\`adl\`ag function of time. We next define the empirical measure $\rho^n$ and the empirical flux ${\boldsymbol j} ^n$ by \begin{align*} &\rho^n: [0,T]\to {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V), &\rho^n_t &:= \frac1n \sum_{i=1}^n \delta_{X^i_t}, \\ &{\boldsymbol j}^n\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+((0,T)\times E), &\qquad {\boldsymbol j}^n(\mathrm{d} t\,\mathrm{d} x\,\mathrm{d} y) &:= \frac1n \sum_{i=1}^n \sum_{k=1}^\infty \delta_{t^i_k}(\mathrm{d} t) \delta_{(X^i_{t-},X^i_{t})}(\mathrm{d} x\,\mathrm{d} y), \end{align*} where $t^i_k$ is the $k^{\mathrm{th}}$ jump time of $X^i$, and $X^i_{t-}$ is the left limit (pre-jump state) of $X^i$ at time~$t$. 
Equivalently, ${\boldsymbol j}^n$ is defined by \[ \langle {\boldsymbol j}^n, \varphi\rangle := \frac1n \sum_{i=1}^n \sum_{k=1}^\infty \varphi\bigl(t_k^i,X^i_{t_k^i-},X^i_{t_k^i}\bigr), \qquad \text{for }\varphi\in \mathrm{C}_{\mathrm{b}}([0,T]\times E). \] A standard application of Sanov's theorem yields a large-deviation characterization of the pair $(\rho^n,{\boldsymbol j}^n)$ in terms of two rate functions $I_0$ and $I$, \[ \mathrm{Prob}\bigl((\rho^n,{\boldsymbol j}^n)\approx (\rho,{\boldsymbol j} )\bigr) \sim \exp\Bigl[ -n \bigl(I_0(\rho_0) + I(\rho,{\boldsymbol j} )\bigr)\Bigr], \qquad\text{as }n\to\infty. \] The rate function $I_0$ describes the large deviations of the initial datum $\rho^n_0$; this functional is determined by the choice of the initial data $X^i_0$ and is independent of the stochastic process itself, and we therefore disregard it here. The functional $I$ characterizes the large-deviation properties of the dynamics of the pair $(\rho^n,{\boldsymbol j}^n)$ conditional on the initial state, and has the expression \begin{equation} \label{eq:def:I} I(\rho,{\boldsymbol j} ) = \int_0^T \mathscr F_\upeta({\boldsymbol j}_t | {\boldsymbol\vartheta}_{\rho_t}^-)\, \mathrm{d} t. \end{equation} In this expression we write ${\boldsymbol\vartheta}_{\rho_t}^-$ for the measure $\rho_t(\mathrm{d} x)\kappa(x,\mathrm{d} y)\in {\mathcal M}(E)$ (see also~\eqref{def:teta} ahead). The function $\upeta$ is the Boltzmann entropy function that we have seen above, \[ \upeta(s) := s\log s - s+ 1,\qquad\text{for }s\geq0, \] and the functional $\mathscr F_\upeta:{\mathcal M}^+(E)\times {\mathcal M}^+(E) \to [0,\infty]$ is given by~\eqref{def:F-F}. Even though the function $\upeta$ coincides in this section with $\upphi$, we choose a different notation to emphasize that the roles of $\upphi$ and $\upeta$ are different: the function $\upphi$ defines the entropy of the system, which is related to the large deviations of the empirical measures $\rho^n$ in equilibrium (see~\cite{MielkePeletierRenger14}); the function $\upeta$ characterizes the large deviations of the time courses of $\rho^n$ and ${\boldsymbol j}^n$. \begin{remark} Sanov's theorem can be found in many references on large deviations (e.g.~\cite[Sec.~6.2]{DemboZeitouni98}); the derivation of the expression~\eqref{eq:def:I} is fairly well known and can be found in e.g.~\cite[Eq.~(8)]{MaesNetocny08} or \cite[App.~A]{KaiserJackZimmer18}. Instead of proving~\eqref{eq:def:I} we give an interpretation of the expression~\eqref{eq:def:I} and the function $\upeta$ in terms of exponential clocks. An exponential clock with rate parameter $r$ has large-deviation behaviour given by $r\upeta(\cdot/r)$ (see~\cite[Exercise 5.2.12]{DemboZeitouni98} or~\cite[Th.~1.5]{Moerters10}) in the following sense: for each $t>0$, \[ \mathrm{Prob}\bigl( \text{ $\approx \beta nt$ firings in time $nt$ }\bigr) \sim \exp \Bigl[ -n tr\,\upeta(\beta/r)\Bigr]\qquad\text{as }n\to\infty. \] The expression~\eqref{eq:def:I} generalizes this to a field of exponential clocks, one for each edge~$(x,y)$. In this case, the rescaled rate parameter $r$ for the clock at edge $(x,y)$ is equal to $\rho_t(\mathrm{d} x)\kappa(x,\mathrm{d} y)$, since it is proportional to the number of particles $n\rho_t(\mathrm{d} x)$ at $x$ and to the jump rate $\kappa(x,\mathrm{d} y)$ from $x$ to $y$.
The flux $n{\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y)$ is the observed number of jumps from $x$ to $y$, corresponding to firings of the clock associated with the edge $(x,y)$. In this way, the functional $I$ in~\eqref{eq:def:I} can be interpreted as characterizing the large-deviation fluctuations in the clock-firings for each edge $(x,y)\inE$. \end{remark} The expression~\eqref{eq:def:I} leads to the functional $\mathscr L$ in~\eqref{eq:def:mathscr-L} after a symmetry reduction, which we now describe (see also~\cite[App.~A]{KaiserJackZimmer18}). Assuming that we are more interested in the fluctuation properties of $\rho$ than those of ${\boldsymbol j}$, we might decide to minimize $I(\rho,{\boldsymbol j} )$ over a class of fluxes ${\boldsymbol j} $ for a fixed choice of $\rho$. Here we choose to minimize over the class of fluxes with the same skew-symmetric part, \[ A_{\boldsymbol j} := \bigl\{{\boldsymbol j}'\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}([0,T]\times E): {\boldsymbol j}' - \symmap{\boldsymbol j}' = {\boldsymbol j} - \symmap {\boldsymbol j} \bigr\}. \] By the form~\eqref{eq:ct-eq-intro} of the continuity equation and the definition~\eqref{eq:def:div} of the divergence we have $\odiv {\boldsymbol j}' = \odiv {\boldsymbol j} $ for all ${\boldsymbol j}'\in A_j$, so that replacing ${\boldsymbol j} $ by ${\boldsymbol j}'$ preserves the continuity equation. \begin{Flemma} The minimum of $I(\rho,{\boldsymbol j}'\,)$ over all ${\boldsymbol j}'\in A_j$ is achieved for the `skew-sym\-me\-tri\-za\-tion' ${\boldsymbol j}^\flat = \tfrac12({\boldsymbol j} -\symmap {\boldsymbol j} )$, and for ${\boldsymbol j}^\flat$ \color{black} the result equals $\tfrac12\mathscr L$: \begin{equation} \label{eq:infI=infL} \inf_{{\boldsymbol j}'\,\in A_j} I(\rho,{\boldsymbol j}'\,) = \inf_{{\boldsymbol j}'\in A_j} \tfrac12\mathscr L(\rho,{\boldsymbol j}'\,) = \tfrac12\mathscr L(\rho,{\boldsymbol j}^\flat). \color{black} \end{equation} Consequently, for a given curve $\rho:[0,T]\to{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$, \begin{align*} \inf_j \Bigl\{ I(\rho,{\boldsymbol j} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\} &= \inf_j \Bigl\{ \tfrac12\mathscr L(\rho,{\boldsymbol j} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\}, \\ \noalign{\noindent and in this final expression the flux can be assumed to be skew-symmetric:} \color{black} &=\inf_j \Bigl\{ \tfrac12\mathscr L \bigl(\rho,{\boldsymbol j} \bigr): \partial_t \rho + \odiv {\boldsymbol j} = 0 \text{ and } \symmap {\boldsymbol j} = - {\boldsymbol j} \Bigr\}. \end{align*} \end{Flemma} This implies that the two functionals $I$ and $\mathscr L$ can be considered to be the same, if one is only interested in $\rho$, not in ${\boldsymbol j} $. By the Contraction Principle (e.g.~\cite[Sec.~4.2.1]{DemboZeitouni98}) the functional $\rho \mapsto \inf_j I(\rho,{\boldsymbol j} ) = \inf_j \tfrac12\mathscr L(\rho,{\boldsymbol j} )$ also can be viewed as the large-deviation rate function of the sequence of empirical measures $\rho^n$. The above lemma is only formal because we have not given a rigorous definition of the functional~$\mathscr L$. While it would be possible to do so, using the construction of Lemma~\ref{l:lsc-general} and the arguments of the proof below, actually the rest of this paper deals with this question in a more detailed manner. In addition, at this stage this lemma only serves to explain why we consider this specific class of functionals~$\mathscr L$. 
We therefore only give heuristic arguments here. \begin{proof} We assume throughout this (formal) proof that all measures are absolutely continuous, strictly positive, and finite where necessary. Note that writing $\rho_t = u_t \pi$ we have ${\boldsymbol\vartheta}_{\rho_t}^-(\mathrm{d} x\,\mathrm{d} y) = u_t(x) {\boldsymbol\vartheta}(\mathrm{d} x\,\mathrm{d} y)$, and using~\eqref{choices:ldp} we therefore have \begin{gather*} \sqrt{{\boldsymbol\vartheta}_{\rho_t}^- \, \symmap{\boldsymbol\vartheta}_{\rho_t}^-}\; (\mathrm{d} x\,\mathrm{d} y) = \sqrt{u_t(x)u_t(y)}\;\boldsymbol \teta(\mathrm{d} x\,\mathrm{d} y) = \boldsymbol\upnu_{\rho_t}(\mathrm{d} x\,\mathrm{d} y), \qquad\text{and}\\ \log \frac{\mathrm{d} \symmap {\boldsymbol\vartheta}_{\rho_t}^-}{\mathrm{d} {\boldsymbol\vartheta}_{\rho_t}^-} (x,y) = \log \frac {u_t(y)}{u_t(x)} = \dnabla \upphi'(u_t)(x,y). \end{gather*} Throughout this proof we write $\hat \upeta$ for the perspective function corresponding to $\upeta$ (see \color{purple} \eqref{eq:76} in \color{black} Lemma~\ref{l:lsc-general}) \[ \color{purple} \hat{\upeta}(a,b) \color{black} := \begin{cases} a\log \dfrac ab - a + b & \text{if $a,b>0$,}\\ b &\text{if }a = 0,\\ +\infty &\text{if $a>0$, $b=0$.} \end{cases} \] We now rewrite $\inf_{{\boldsymbol j}'\,\in A_j} I(\rho,{\boldsymbol j}'\,)$ as \begin{align*} \inf_{{\boldsymbol j}' \,\in A_j} &\int_0^T \iint_E \upeta\biggl( \frac{\mathrm{d} {\boldsymbol j}'_t}{\mathrm{d} {\boldsymbol\vartheta}_{\rho_t}^-}\biggr)\, \mathrm{d} {\boldsymbol\vartheta}_{\rho_t}^- \mathrm{d} t = \inf_{{\boldsymbol j}' \,\in A_j} \int_0^T \iint_E u_t\, \upeta\biggl( \frac1 {u_t} \frac{\mathrm{d} {\boldsymbol j}'_t}{\mathrm{d} {\boldsymbol\vartheta}}\biggr) \,\mathrm{d} {\boldsymbol\vartheta}\, \mathrm{d} t\\ &=\inf_{{\boldsymbol j}' = \zeta \color{black} {\boldsymbol\vartheta} \,\in A_j} \int_0^T \iint_E \hat \upeta\bigl( \zeta_t(x,y), \color{black} u_t(x)\bigr) {\boldsymbol\vartheta}(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t \\ &= \frac12 \normalcolor \inf_{{\boldsymbol j}' = \zeta \color{black} \boldsymbol \teta \,\in A_j} \int_0^T \iint_E \Bigl\{\hat \upeta\bigl( \zeta_t(x,y), \color{black} u_t(x)\bigr) + \hat \upeta\bigl( \zeta_t(y,x), \color{black} u_t(y)\bigr) \Bigr\}\,{\boldsymbol\vartheta}(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t . \end{align*} Since $\zeta(x,y) - \zeta(y,x) = \mathrm{d} ({\boldsymbol j}' - \symmap {\boldsymbol j}'\,)/\mathrm{d}\boldsymbol \teta$ \color{black} is fixed by the constraint ${\boldsymbol j}'\in A_j$, we follow the expression inside the second integral and set \[ \psi:\mathbb{R}\times[0,{+\infty})^2 \to [0,{+\infty}], \qquad \psi(s\,; c,d) := \inf_{a,b\geq0} \Bigl\{ \bigl[\hat \upeta(a,c) + \hat \upeta(b,d)\bigr] : a-b = 2s\Bigr\}, \] for which a calculation gives the explicit formula (for $c,d>0$) \[ \psi(s\,; c,d) = \frac{\sqrt{cd}}2\; \biggl\{\Psi\biggl( \frac { 2s}{\sqrt{cd}}\biggr) + \Psi^*\biggl( - \log \frac dc\biggr) \biggr\} + s \; \log \frac dc, \] in terms of the function $\Psi^*(\xi) = 4\bigl(\cosh(\xi/2) - 1\bigr)$ and its Legendre dual $\Psi$. This minimization corresponds to minimizing over all fluxes for which the `net flux' ${\boldsymbol j} - \symmap {\boldsymbol j} = 2 \tj $ \color{black} is the same; see e.g.~\cite{Renger18,KaiserJackZimmer18} for discussions. Let $w:=\mathrm{d}{\boldsymbol j}/\mathrm{d}\boldsymbol \teta$, $\cw(x,y) := (w(x,y) - w(y,x)) = \frac{\mathrm{d}(2\tj)}{\mathrm{d} \boldsymbol \teta}(x,y)$ \color{black} and $\upalpha_t := \upalpha_t(x,y) = \sqrt{u_t(x)u_t(y)}$.
We find \begin{align} \inf_{{\boldsymbol j}'\, \in A_j} &\int_0^T \mathscr F_\upeta\bigl( {\boldsymbol j}'_t | {\boldsymbol\vartheta}_{\rho_t}^-\bigr) \,\mathrm{d} t \notag \\ &= \frac12\normalcolor \int_0^T \iint_E \psi\bigl( \tfrac12 \cw_t(x,y) \color{black} \,;\, u_t(x),u_t(y)\bigr)\, \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \mathrm{d} t \notag\\ &=\frac 12 \int_0^T \iint_E \biggl\{ \frac{\upalpha_t}2 \Psi\biggl(\frac{ \cw_t}{\upalpha_t}\biggr) + \frac{\upalpha_t}2 \Psi^*\biggl(- \dnabla \upphi'(u_t)\biggr) + \frac12 \cw_t\, \color{black} \dnabla \upphi'(u_t) \biggr\}\,\mathrm{d}{\boldsymbol\vartheta}\mathrm{d} t \notag\\ &= \frac12 \int_0^T \iint_E \frac12 \biggl\{ \Psi\biggl(\frac{{2\mathrm{d}\tj_t} \color{black}}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}\biggr) + \Psi^*\biggl(- \dnabla \upphi'(u_t)\biggr)\biggr\}\,\mathrm{d}\boldsymbol\upnu_{\rho_t} \, \mathrm{d} t + \frac12 \mathscr E(\rho_T) - \frac12 \mathscr E(\rho_0). \label{eq:formal-motivation-L} \end{align} In the last identity we used the fact that since $\odiv \tj_t = -\partial_t \rho_t$, \color{black} formally we have \[ \int_0^T \iint_E \frac12 \cw_t\, \dnabla \upphi'(u_t) \,\mathrm{d}{\boldsymbol\vartheta}\mathrm{d} t = \int_0^T \iint_E \dnabla \upphi'(u_t) \, \mathrm{d}\tj_t \,\mathrm{d} t \color{black} = \int_0^T \langle \upphi'(u_t) ,\partial_t \rho_t \rangle\,\mathrm{d} t = \mathscr E(\rho_T)- \mathscr E(\rho_0). \] The expression on the right-hand side of~\eqref{eq:formal-motivation-L} is one half times the functional $\mathscr L$ defined in~\eqref{eq:def:mathscr-L} (see also~\eqref{ineq:deriv-GF}). This proves that \[ \inf_{{\boldsymbol j}'\,\in A_j} I(\rho,{\boldsymbol j}'\,) = \frac12 \mathscr L\bigl(\rho, \tj \color{black}\,\bigr). \] From convexity of $\Psi$ and symmetry of $\boldsymbol\upnu_\rho$ we deduce that $\mathscr L(\rho, \tj) \color{black} \leq \mathscr L(\rho,{\boldsymbol j} )$ for any ${\boldsymbol j} $; see Remark~\ref{rem:skew-symmetric}. The identity $\mathscr L\bigl(\rho,\tj\,\bigr) = \inf_{{\boldsymbol j}'\,\in A_j} \mathscr L (\rho,{\boldsymbol j}'\,)$ \color{black} then follows immediately; this proves~\eqref{eq:infI=infL}. To prove the second part of the Lemma, we write \begin{align*} \inf_j \Bigl\{ I(\rho,{\boldsymbol j} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\} &= \inf_j \Bigl\{ \Bigl[\,\inf _{{\boldsymbol j}'\in A_j} I(\rho,{\boldsymbol j}'\,) \Bigr]: \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\},\\ &= \inf_j \Bigl\{ \Bigl[\,\inf _{{\boldsymbol j}'\in A_j} \tfrac12 \mathscr L(\rho,{\boldsymbol j}'\,) \Bigr]: \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\},\\ &= \inf_j \Bigl\{ \tfrac12\mathscr L(\rho, \tj \color{black} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\},\\ &= \inf_j \Bigl\{ \tfrac12\mathscr L(\rho, \tj \color{black}) : \partial_t \rho + \odiv \tj \color{black} = 0\Bigr\}. \end{align*} This concludes the proof. \end{proof} \section{Curves in \texorpdfstring{${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$}{M+(V)}} A major challenge in any rigorous treatment of an equation such as~\eqref{eq:GGF-intro-intro} is finding a way to deal with the time derivative. The Ambrosio-Gigli-Savar\'e framework for metric-space gradient systems, for instance, is organized around absolutely continuous curves. 
These are a natural choice because on the one hand this class admits a `metric velocity' that generalizes the time derivative, while on the other hand solutions are automatically absolutely continuous by the superlinear growth of the dissipation potential. For the systems of this paper, a similar role is played by curves such that the `action' $\int \mathscr R\,\mathrm{d} t$ is finite; we show below that the superlinearity of $\mathscr R(\rho,{\boldsymbol j} )$ in ${\boldsymbol j} $ leads to similarly beneficial properties. In order to exploit this aspect, however, a number of intermediate steps need to be taken: \begin{enumerate}[label=(\alph*)] \item \label{intr:curves:1} We define the class $\CE0T$ of solutions $(\rho,{\boldsymbol j} )$ of the continuity equation~\eqref{eq:ct-eq-intro} (Definition~\ref{def-CE}). \item For such solutions, $t\mapsto \rho_t$ is continuous in the total variation distance (Corollary~\ref{c:narrow-ct}). \item We give a rigorous definition of the functional $\mathscr R$ (Definition~\ref{def:R-rigorous}), and describe its behaviour on absolutely continuous and singular parts of $(\rho,{\boldsymbol j})$ (Lemma~\ref{l:alt-char-R} and Theorem~\ref{thm:confinement-singular-part}). \item If the action functional $\int \mathscr R$ is finite along a solution $(\rho,{\boldsymbol j})$ of the continuity equation in $[0,T]$, then the property that $\rho_t$ is absolutely continuous with respect to~$\pi$ at some time $t\in [0,T]$ propagates to the whole interval $[0,T]$ (Corollary~\ref{cor:propagation-AC}). \item We prove a chain rule for the derivative of convex entropies along curves of finite $\mathscr R$-action (Theorem~\ref{th:chain-rule-bound}) and derive an estimate involving $\mathscr R$ and a Fisher-information-like term (Corollary~\ref{th:chain-rule-bound2}). \item\label{intr:curves:5} If the action $\int\mathscr R$ is uniformly bounded along a sequence $(\rho^n,{\boldsymbol j}^n)\in\CE0T$, then the sequence is compact in an appropriate sense (Proposition~\ref{prop:compactness}). \end{enumerate} Once properties~\ref{intr:curves:1}--\ref{intr:curves:5} have been established, the next step is to consider finite-action curves that also connect two given values $\mu,\nu$, leading to the definition of the Dynamical-Variational Transport (DVT) cost \begin{equation} \label{def-psi-rig-intro-section} \DVT \tau\mu\nu : = \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t \, : \, (\rho,{\boldsymbol j} ) \in \CE 0\tau, \ \rho_0 = \mu, \ \rho_\tau = \nu \right\}\,. \end{equation} This definition is in the spirit of the celebrated Benamou-Brenier formula for the Wasserstein distance \cite{Benamou-Brenier}, generalized to a broader family of transport distances \cite{DolbeaultNazaretSavare09} and to jump processes \cite{Maas11,Erbar14}. However, a major difference from those constructions is that $\mathscr{W}$ also depends on the time variable $\tau$ and that $\DVT\tau\cdot\cdot$ is not a (power of a) distance, since $\Psi$ is not, in general, positively homogeneous of any order. Indeed, \color{black} when $\mathscr R$ is $p$-homogeneous in ${\boldsymbol j} $, for $p\in (1,{+\infty})$, we have (see also the discussion at the beginning of Sec.\ \ref{ss:MM}) \begin{equation} \label{eq:DVT=JKO} \DVT \tau\mu\nu = \frac1{\tau^{p-1}} \DVT 1\mu\nu= \frac1{ p \color{black} \tau^{p-1}}d_{\mathscr R}^p(\mu,\nu), \end{equation} where $d_\mathscr R$ is an extended distance and \normalcolor is a central object in the usual Minimizing-Movement construction.
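To make the $\tau$-dependence in~\eqref{eq:DVT=JKO} transparent, here is a brief formal sketch for the $p$-homogeneous case, using only the time-rescaling of solutions of the continuity equation (cf.\ Lemma~\ref{l:concatenation&rescaling} below). Given $(\rho,{\boldsymbol j})\in\CE 01$ with $\rho_0=\mu$ and $\rho_1=\nu$, the rescaled pair
\[
\hat\rho_t:=\rho_{t/\tau},\qquad \hat{\boldsymbol j}_t:=\tau^{-1}{\boldsymbol j}_{t/\tau},\qquad t\in[0,\tau],
\]
belongs to $\CE 0\tau$ and still connects $\mu$ to $\nu$, and the $p$-homogeneity of $\mathscr R(\rho,\cdot)$ gives
\[
\int_0^\tau \mathscr R(\hat\rho_t,\hat{\boldsymbol j}_t)\,\mathrm{d} t
=\tau^{-p}\int_0^\tau \mathscr R(\rho_{t/\tau},{\boldsymbol j}_{t/\tau})\,\mathrm{d} t
=\tau^{1-p}\int_0^1 \mathscr R(\rho_r,{\boldsymbol j}_r)\,\mathrm{d} r.
\]
Taking the infimum over such curves, and reversing the rescaling, yields $\DVT\tau\mu\nu=\tau^{1-p}\DVT 1\mu\nu$, i.e.\ the first identity in~\eqref{eq:DVT=JKO}.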
In Section~\ref{s:MM}, the DVT cost~$\mathscr{W}$ will replace the rescaled $p$-power of the distance and play a similar role for the Minimizing-Movement approach. \color{black} For the rigorous construction of $\mathscr{W}$, \begin{enumerate}[label=(\alph*),resume] \item we show that minimizers of~\eqref{def-psi-rig-intro-section} exist (Corollary~\ref{c:exist-minimizers}); \item we establish properties of $\mathscr{W}$ that generalize those of the metric-space version~\eqref{eq:DVT=JKO} (Theorem~\ref{thm:props-cost}). \end{enumerate} Finally, \begin{enumerate}[label=(\alph*),resume] \item we close the loop by showing that from a given functional $\mathscr{W}$ integrals of the form $\int_a^b\mathscr R$ can be reconstructed (Proposition~\ref{t:R=R}). \end{enumerate} Throughout this section we adopt \textbf{Assumptions~\ref{ass:V-and-kappa} and \ref{ass:Psi}}. \subsection{The continuity equation} \label{sec:ct-eq} We now introduce the formulation of the continuity equation we will work with. Hereafter, for a given function $\mu :I \to {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(V)$, or $\mu : I \to {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E)$, with $I=[a,b]\subset\mathbb{R}$, we shall often write $\mu_t$ in place of $\mu(t)$ for a given $t\in I$ and denote the time-dependent function $\mu $ by $(\mu_t)_{t\in I}$. We will write $\lambda$ for the Lebesgue measure on $I$. The following definition mimics those given in \cite[Sec.~8.1]{AmbrosioGigliSavare08} and \cite[Def.~4.2]{DNS09}. \begin{definition}[Solutions $(\rho,{\boldsymbol j})$ of the continuity equation] \label{def-CE} Let $I=[a,b]$ be a closed interval of $\mathbb{R}$. We denote by $\CEI I$ the set of pairs $(\rho,{\boldsymbol j})$ given by \begin{itemize} \item a family of time-dependent measures $\rho=(\rho_t)_{t\in I} \subset {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$, and \item a measurable family $({\boldsymbol j}_t)_{t\in I} \subset {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E)$ with $\int_0^T |{\boldsymbol j}_t|(E)\,\mathrm{d} t <{+\infty}$, satisfying the continuity equation \begin{equation} \label{eq:ct-eq-def} \dot\rho + \odiv {\boldsymbol j}=0 \quad\text{ in } I\times V, \end{equation} in the following sense: \begin{equation} \label{2ndfundthm} \int_V \varphi\,\mathrm{d} \rho_{t_2}-\int_V \varphi\,\mathrm{d}\rho_{t_1}= \iint_{J\times E} \overline\nabla \varphi \,\mathrm{d} {\boldsymbol j}_\lambda \quad\text{for all $\varphi\in \mathrm{B}_{\mathrm b}(V)$, $J=[t_1,t_2]\subset I$}. \end{equation} where ${\boldsymbol j}_\lambda(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y):=\lambda(\mathrm{d} t){\boldsymbol j}_t(\mathrm{d} x,\mathrm{d} y)$. \end{itemize} Given $\rho_0,\, \rho_1 \in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$, we will use the notation \[ \CEIP I{\rho_0}{\rho_1} : = \bigl\{(\rho,{\boldsymbol j} ) \in \CEI I\, : \ \rho(a)=\rho_0, \ \rho(b) = \rho_1\bigr\}\,. \] \end{definition} \begin{remark} \label{rem:expand} The requirement~\eqref{2ndfundthm} shows in particular that $t\mapsto \rho_t$ is continuous with respect to the total variation metric. Choosing $\varphi\equiv1$ in~\eqref{2ndfundthm}, one immediately finds that \begin{equation} \label{eq:91} \text{the total mass }\rho_t(V)\text{ is constant in $I$}. 
\end{equation} By the disintegration theorem, it is equivalent to assign the measurable family $({\boldsymbol j}_t)_{t\in I}$ in ${\mathcal M}(E)$ or the measure ${\boldsymbol j}_\lambda$ in ${\mathcal M}(I\times E)$. \end{remark} We can in fact prove a more refined property. The proof of the Corollary below is postponed to Appendix \ref{appendix:proofs}. \begin{cor} \label{c:narrow-ct} If $(\rho,{\boldsymbol j} )\in\CE0T$, then there exist a \emph{common} dominating measure $\gamma\in {\mathcal M}^+(V)$ (i.e., $\rho_t \ll \gamma$ for all $t\in [a,b]$), \color{black} and an absolutely continuous map $\tilde u:[a,b]\to L^1(V,\gamma)$ such that $\rho_t=\tilde u_t\gamma\ll \gamma$ for every $t\in [a,b]$. \end{cor} The interpretation of the continuity equation in Definition \ref{def-CE}---in duality with all bounded measurable functions---is quite strong, and in particular much stronger than the more common continuity in duality with \emph{continuous} and bounded functions. However, this continuity \color{purple} equation \color{black} can be recovered starting from a much weaker formulation. The following result illustrates this; it is a translation of \cite[Lemma 8.1.2]{AmbrosioGigliSavare08} (cf.\ also \cite[Lemma 4.1]{DNS09}) to the present setting. The proof adapts the argument for \cite[Lemma 8.1.2]{AmbrosioGigliSavare08} and is given in Appendix~\ref{appendix:proofs}. \begin{lemma}[Continuous representative] \label{l:cont-repr} Let $(\rho_t)_{t\in I} \subset {\mathcal M}^+(V)$ and $({\boldsymbol j}_t)_{t\in I}$ be measurable families that are integrable with respect to~$\lambda$ and let $\tau$ be any separable and metrizable topology inducing $\mathfrak B$. If \begin{equation} -\int_a^b \eta'(t) \left( \int_V \zeta(x) \rho_t (\mathrm{d} x ) \right) \mathrm{d} t = \int_a^b \eta(t)\Big(\iint_E \overline\nabla\zeta(x,y)\, {\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y)\Big)\,\mathrm{d} t \,,\label{eq:90} \end{equation} holds for every $\eta \in \mathrm{C}_\mathrm{c}^\infty((a,b))$ and $\zeta \in \mathrm{C}_{\mathrm{b}} (V,\tau)$, then there exists a unique curve $I \ni t \mapsto \tilde{\rho}_t \in {\mathcal M}^+ (V)$ such that $\tilde{\rho}_t = \rho_t$ for $\lambda$-a.e. $t\in I$. The curve $\tilde\rho$ is continuous in the total-variation norm with estimate \begin{equation} \label{est:ct-eq-TV} \|\tilde \rho_{t_2}-\tilde \rho_{t_1}\|_{TV} \leq 2 \int_{t_1}^{t_2} |{\boldsymbol j}_t|(E)\, \mathrm{d} t \qquad \text{ for all } t_1 \leq t_2, \end{equation} and satisfies \begin{equation} \label{maybe-useful} \int_V \varphi(t_2,\cdot) \,\mathrm{d}\tilde\rho_{t_2} - \int_V \varphi(t_1,\cdot) \,\mathrm{d}\tilde\rho_{t_1} = \int_{t_1}^{t_2} \int_V \partial_t \varphi \,\mathrm{d}\tilde\rho_t\,\mathrm{d} t + \int_{J\times E} \overline\nabla \varphi \,\mathrm{d} {\boldsymbol j}_\lambda \end{equation} for all $\varphi \in \mathrm{C}^1(I;\mathrm{B}_{\mathrm b}(V))$ and $J=[t_1,t_2]\subset I$.
\end{lemma} \begin{remark} In \eqref{2ndfundthm} we can always replace ${\boldsymbol j} $ with the positive measure \color{purple} ${\boldsymbol j} ^+:=({\boldsymbol j} -\symmap {\boldsymbol j} )_+ = (2\tj)_+$, since $\odiv {\boldsymbol j} = \odiv {\boldsymbol j}^+$ \color{black} (see Lemma~\ref{le:A1}); therefore we can assume without loss of generality that ${\boldsymbol j} $ is a positive measure. \end{remark} \normalcolor As another immediate consequence of \eqref{2ndfundthm}, the concatenation of two solutions of the continuity equation is again a solution; the result below also contains a statement about time rescaling of the solutions, whose proof follows from trivially adapting that of \cite[Lemma 8.1.3]{AmbrosioGigliSavare08} and is thus omitted. \begin{lemma}[Concatenation and time rescaling] \label{l:concatenation&rescaling} \begin{enumerate} \item Let $(\rho^i,{\boldsymbol j}^i) \in \CE 0{T_i}$, $i=1,2$, with $\rho_{T_1}^1 = \rho_0^2$. Define $(\rho_t,{\boldsymbol j}_t)_{t\in [0,T_{1}+T_2]}$ by \[ \rho_t: = \begin{cases} \rho_t^1 & \text{ if } t \in [0,T_1], \\ \rho_{t-T_1}^2 & \text{ if } t \in [T_1,T_1+T_2], \end{cases} \qquad \qquad {\boldsymbol j}_t: = \begin{cases} {\boldsymbol j}_t^1 & \text{ if } t \in [0,T_1], \\ {\boldsymbol j}_{t-T_1}^2 & \text{ if } t \in [T_1,T_1+T_2]\,. \end{cases} \] Then, $(\rho,{\boldsymbol j} ) \in \CE 0{T_1+T_2}$. \item Let $\mathsf{t} : [0,\hat{T}] \to [0,T]$ be strictly increasing and absolutely continuous, with inverse $\mathsf{s}: [0,T]\to [0,\hat{T}]$. Then, $(\rho, {\boldsymbol j} ) \in \CE 0T$ if and only if $\hat \rho: = \rho \circ \mathsf{t}$ and $\hat {\boldsymbol j} : = \mathsf{t}' ({\boldsymbol j} {\circ} \mathsf{t})$ fulfill $(\hat \rho, \hat {\boldsymbol j} ) \in \CE 0{\hat T}$. \end{enumerate} \end{lemma} \subsection{Definition of the dissipation potential \texorpdfstring{$\mathscr R$}R} \label{ss:def-R} In this section we give a rigorous definition of the dissipation potential $\mathscr R$, following the formal descriptions above. In the special case when $\rho$ and ${\boldsymbol j}$ are absolutely continuous, i.e. \begin{equation} \rho=u\pi\ll\pi \qquad\text{and}\qquad 2{\boldsymbol j} = w\boldsymbol \teta\ll\boldsymbol \teta, \end{equation} we set \begin{equation} \label{concentration-set} E': = \{ (x,y) \in E\, : \upalpha(u(x),u(y))>0 \}, \end{equation} and in this case we can define the functional $\mathscr R$ by the direct formula \begin{equation} \label{eq:21} \mathscr R(\rho,{\boldsymbol j} )= \begin{cases} \displaystyle \frac12\int_{E'} \Psi\Bigl(\frac{w(x,y)}{\upalpha(u(x),u(y))}\Bigr)\upalpha(u(x),u(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) &\text{if }|{\boldsymbol j} |(E\setminus E') =0,\\ {+\infty}&\text{if }|{\boldsymbol j} |(E\setminus E')>0. \end{cases} \end{equation} Recalling the definition of the perspective function $\hat\Psi$ \eqref{eq:76}, we can also write \eqref{eq:21} in the equivalent and more compact form \begin{equation} \label{eq:92} \mathscr R(\rho,{\boldsymbol j} )= \frac12\iint_E \hat\Psi\big(w(x,y), \upalpha(u(x),u(y)) \big)\, \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y),\quad 2{\boldsymbol j}=w\boldsymbol \teta\,. 
\end{equation} so that it is natural to introduce the function $\Upsilon : [0,{+\infty})\times[0,{+\infty})\times\mathbb{R}\to[0,{+\infty}] $, \begin{equation} \label{Upsilon} \Upsilon (u,v,w) := \hat\Psi(w,\upalpha(u,v)), \end{equation} observing that \begin{equation} \label{eq:22} \mathscr R(\rho,{\boldsymbol j} )= \frac12\iint_E \Upsilon(u(x),u(y),w(x,y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\quad\text{for } 2{\boldsymbol j}=w\boldsymbol \teta. \end{equation} \normalcolor \begin{lemma}\label{lem:Upsilon-properties} The function $\Upsilon:[0,{+\infty})\times[0,{+\infty})\times\mathbb{R}\to[0,{+\infty}]$ defined above is convex and lower semicontinuous, with recession functional \begin{equation} \label{eq:23} \Upsilon^\infty(u,v,w)= \hat\Psi(w, \upalpha^\infty(u,v))= \begin{cases} \displaystyle \Psi\left( \frac{w}{\upalpha^\infty(u,v)}\right) \upalpha^\infty(u,v) & \text{ if $\upalpha^\infty(u,v)>0$} \\ 0 &\text{ if $w=0$} \\ {+\infty} & \text{ if $w\ne 0$ and $\upalpha^\infty(u,v)=0$.} \end{cases} \end{equation} For any $u,v\in [0,\infty)$ with $\upalpha^\infty(u,v)>0$, the map $w\mapsto \Upsilon(u,v,w)$ is strictly convex. If $\upalpha$ is positively 1-homogeneous then $\Upsilon$ is positively 1-homogeneous as well. \end{lemma} \begin{proof} Note that $\Upsilon$ may be equivalently represented in the form \begin{equation} \label{eq:dual-formulation-Upsilon} \Upsilon(u,v,w) = \sup_{\xi\in\mathbb{R}} \bigl\{\xi w - \upalpha(u,v)\Psi^*(\xi)\bigr\} =: \sup_{\xi\in\mathbb{R}} f_\xi(u,v,w)\,. \end{equation} The convexity of $f_\xi$ for each $\xi\in\mathbb{R}$ readily follows from its linearity in $w$ and the convexity of $-\upalpha$ in $(u,v)$. Therefore, $\Upsilon$ is convex and lower semicontinuous as the pointwise supremum of a family of convex continuous functions. The characterization~\eqref{eq:23} of $\Upsilon^\infty$ follows from observing that $\Upsilon(0,0,0)=\hat\Psi(0,0)=0$ and using the $1$-homogeneity of $\hat\Psi$: \begin{align*} \lim_{t\to{+\infty}} t^{-1}\Upsilon(tu,tv,tw) &= \lim_{t\to{+\infty}} t^{-1} \hat\Psi\Big(tw,\upalpha( tu,tv)\Big) =\lim_{t\to{+\infty}} \hat\Psi\Big(w,t^{-1}\upalpha( tu,tv)\Big) \\&= \hat\Psi\Big(w,\upalpha^\infty(u,v)\Big)\,, \end{align*} where the last equality follows from the continuity of $r\mapsto \hat\Psi(w,r)$ for all $w\in \mathbb{R}$. The strict convexity of $w\mapsto \Upsilon(u,v,w)$ for any $u,v\in [0,\infty)$ with $\upalpha^\infty(u,v)>0$ follows directly from the strict convexity of $\Psi$ (cf. Lemma~\ref{l:props:Psi}). \end{proof} The choice~\eqref{eq:22} provides a rigorous definition of $\mathscr R$ for couples of measures $(\rho,{\boldsymbol j})$ that are absolutely continuous with respect to $\pi$ and ${\boldsymbol\vartheta}$. In order to extend $\mathscr R$ to pairs $(\rho,{\boldsymbol j})$ that are not absolutely continuous, it is useful to interpret the measure \begin{equation} \boldsymbol\upnu_\rho(\mathrm{d} x,\mathrm{d} y):=\upalpha(u(x),u(y))\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \label{eq:26} \end{equation} in the integral of \eqref{eq:21} in terms of a suitable concave transformation as in \eqref{eq:6} of two couplings generated by $\rho$. 
We therefore introduce the measures \begin{equation} \label{def:teta} \begin{aligned} {\boldsymbol\vartheta}_{\rho}^-(\mathrm{d} x\,\mathrm{d} y) := \rho(\mathrm{d} x)\kappa(x,\mathrm{d} y),\qquad {\boldsymbol\vartheta}_{\rho}^+(\mathrm{d} x\,\mathrm{d} y) := \rho(\mathrm{d} y)\kappa(y,\mathrm{d} x)= s_{\#}{\boldsymbol\vartheta}_\rho^-(\mathrm{d} x\,\mathrm{d} y), \end{aligned} \end{equation} observing that \begin{equation} \label{eq:24} \rho=u\pi\ll\pi\quad\Longrightarrow\quad {\boldsymbol\vartheta}^\pm_\rho\ll\boldsymbol \teta,\qquad \frac{\mathrm{d} {\boldsymbol\vartheta}_\rho^-}{\mathrm{d} \boldsymbol \teta}(x,y) = u(x), \quad \frac{\mathrm{d} {\boldsymbol\vartheta}_\rho^+}{\mathrm{d} \boldsymbol \teta}(x,y) = u(y). \end{equation} We thus obtain that \eqref{eq:26}, \eqref{eq:21} and \eqref{eq:22} can be equivalently written as \begin{equation} \label{eq:27} \boldsymbol\upnu_\rho=\upalpha[{\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho|\boldsymbol \teta],\quad \mathscr R(\rho,{\boldsymbol j} )=\frac12\mathscr F_\Psi(2{\boldsymbol j} |\boldsymbol\upnu_\rho)\,, \end{equation} where $\upalpha[{\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho|\boldsymbol \teta]$ stands for $\upalpha[({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho)|\boldsymbol \teta]$, and the functional $ \mathscr F_\psi(\cdot | \cdot)$ is from \eqref{def:F-F}, and also \begin{equation} \label{eq:28} \mathscr R(\rho,{\boldsymbol j} )=\frac12\mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j} |\boldsymbol \teta)\,, \end{equation} again writing for shorter notation $\mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j} |\boldsymbol \teta)$ in place of $\mathscr F_\Upsilon(({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j}) |\boldsymbol \teta)$. Therefore we can use the same expressions \eqref{eq:27} and \eqref{eq:28} to extend the functional $\mathscr R$ to measures $\rho$ and ${\boldsymbol j} $ that need not be absolutely continuous with respect to ~$\pi$ and $\boldsymbol \teta$; the next lemma shows that they provide equivalent characterizations. We introduce the functions $u^\pm:E\to\mathbb{R}$, adopting the notation \begin{multline} \label{eq:93} u^-:=u\circ {\mathsf x}\quad \text{and}\quad u^+:=u\circ {\mathsf y},\\ \text{or equivalently} \quad u^-(x,y):=u(x),\quad u^+(x,y):=u(y). \end{multline} (Recall that ${\mathsf x}$ and ${\mathsf y}$ denote the coordinate maps from $E$ to $V$). \begin{lemma} \label{l:4.8} For every $\rho\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ and ${\boldsymbol j} \in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E)$ we have \begin{equation} \label{eq:29} \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho, 2{\boldsymbol j} \color{black} |\boldsymbol \teta) =\mathscr F_\Psi( 2{\boldsymbol j} \color{black} |\boldsymbol\upnu_\rho). 
\end{equation} If $\rho=\rho^a+\rho^\perp$ and ${\boldsymbol j}={\boldsymbol j}^a+{\boldsymbol j}^\perp$ are the Lebesgue decompositions of $\rho$ and ${\boldsymbol j}$ with respect to ~$\pi$ and $\boldsymbol \teta$, respectively, we have \begin{equation} \label{eq:94} \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho, 2{\boldsymbol j} \color{black} |\boldsymbol \teta)= \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_{\rho^a},{\boldsymbol\vartheta}^+_{\rho^a}, 2{\boldsymbol j}^a \color{black} |\boldsymbol \teta)+ \mathscr F_{\Upsilon^\infty}({\boldsymbol\vartheta}^-_{\rho^\perp},{\boldsymbol\vartheta}^+_{\rho^\perp}, 2{\boldsymbol j}^\perp). \color{black} \end{equation} \end{lemma} \begin{proof} Let us consider the Lebesgue decomposition $\rho=\rho^a+\rho^\perp$, $\rho^a=u\pi$, and a corresponding partition of $V$ in two disjoint Borel sets $R,P$ such that $\rho^a=\rho\mres R$, $\rho^\perp=\rho\mres P$ and $\pi(P)=0$, which yields \begin{equation} \label{eq:30} {\boldsymbol\vartheta}^\pm_\rho={\boldsymbol\vartheta}^\pm_{\rho^a}+{\boldsymbol\vartheta}^\pm_{\rho^\perp},\quad {\boldsymbol\vartheta}^\pm_{\rho^a}\ll\boldsymbol \teta,\quad {\boldsymbol\vartheta}^-_{\rho^\perp}:={\boldsymbol\vartheta}^-_\rho\mres{P\times V}, \quad {\boldsymbol\vartheta}^+_{\rho^\perp}:={\boldsymbol\vartheta}^+_\rho\mres{V\times P}. \end{equation} Since $\boldsymbol \teta(P\times V)=\boldsymbol \teta(V\times P) \le \|\kappa_V\|_\infty \color{black} \pi(P)=0$, ${\boldsymbol\vartheta}^\pm_{\rho^\perp}$ are singular with respect to ~$\boldsymbol \teta$. Let us also consider the Lebesgue decomposition ${\boldsymbol j}={\boldsymbol j}^a+{\boldsymbol j}^\perp$ of ${\boldsymbol j}$ with respect to ~$\boldsymbol \teta$. We can select a measure ${\boldsymbol \varsigma}\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(E)$ such that ${\boldsymbol\vartheta}^\pm_{\rho^\perp}=z^\pm{\boldsymbol \varsigma}\ll{\boldsymbol \varsigma}$, ${\boldsymbol j}^\perp\ll{\boldsymbol \varsigma}$ and ${\boldsymbol \varsigma}\perp\boldsymbol \teta$, obtaining \begin{equation} \label{eq:34} \begin{aligned} \boldsymbol\upnu_\rho=\upalpha[{\boldsymbol\vartheta}_\rho^-,{\boldsymbol\vartheta}_\rho^+|\boldsymbol \teta]= \boldsymbol\upnu_\rho^1+\boldsymbol\upnu_\rho^2, \quad \boldsymbol\upnu_\rho^1:=\upalpha(u^-,u^+)\boldsymbol \teta,\quad \boldsymbol\upnu_\rho^2:=\upalpha^\infty(z^-,z^+){\boldsymbol \varsigma}. 
\end{aligned} \end{equation} Since ${\boldsymbol j} \ll \boldsymbol \teta+{\boldsymbol \varsigma}$, we can decompose \begin{equation} \label{decomp-DF} 2{\boldsymbol j} =w\boldsymbol \teta+w'{\boldsymbol \varsigma}, \color{black} \end{equation} and by the additivity property \eqref{eq:81} we obtain \begin{equation} \label{heartsuit} \begin{aligned} \mathscr F_\Psi &( 2{\boldsymbol j} \color{black} |\boldsymbol\upnu_\rho) = \mathscr F_{\hat \Psi}( 2{\boldsymbol j}, \color{black}\boldsymbol\upnu_\rho)= \mathscr F_{\hat\Psi}(w{\boldsymbol\vartheta},\boldsymbol\upnu_\rho^1)+ \mathscr F_{\hat\Psi}(w'{\boldsymbol \varsigma},\boldsymbol\upnu_\rho^2) \\&\stackrel{(*)}= \iint_E \Upsilon(u(x),u(y),w(x,y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)+ \iint_E \Upsilon^\infty(z^-(x,y),z^+(x,y),w'(x,y))\,{\boldsymbol \varsigma}(\mathrm{d} x,\mathrm{d} y) \\&= \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_{\rho^a},{\boldsymbol\vartheta}^+_{\rho^a}, 2{\boldsymbol j}^a |\boldsymbol \teta)+ \color{black} \mathscr F_{\Upsilon^\infty}({\boldsymbol\vartheta}^-_{\rho^\perp},{\boldsymbol\vartheta}^+_{\rho^\perp}, 2{\boldsymbol j}^\perp) \color{black} =\mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho, 2{\boldsymbol j} |\boldsymbol \teta). \color{black} \end{aligned} \end{equation} Indeed, identity (*) follows from the fact that, since $\hat{\Psi}$ is $1$-homogeneous, \[ \mathscr F_{\hat\Psi}(w{\boldsymbol\vartheta},\boldsymbol\upnu_\rho^1) = \iint_{\color{ddcyan} E} \color{ddcyan} \hat{\Psi} \left( \frac{\mathrm{d}(w{\boldsymbol\vartheta},\boldsymbol\upnu_\rho^1)}{\mathrm{d} \gamma}\right) \mathrm{d} \gamma \] for every $\gamma \in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(E)$ such that $w{\boldsymbol\vartheta} \ll \gamma$ and $\boldsymbol\upnu_\rho^1 \ll \gamma$, cf.\ \eqref{eq:78}. Then, it suffices to observe that $w{\boldsymbol\vartheta} \ll {\boldsymbol\vartheta}$ and $\boldsymbol\upnu_\rho^1 \ll {\boldsymbol\vartheta}$ with $\frac{\mathrm{d} \boldsymbol\upnu_\rho^1}{\mathrm{d} {\boldsymbol\vartheta}} = \upalpha(u^-,u^+)$. The same argument applies to $ \mathscr F_{\hat\Psi}(w'{\boldsymbol \varsigma},\boldsymbol\upnu_\rho^2)$, cf.\ also Lemma~\ref{l:lsc-general}(3). \color{black} \end{proof} \begin{definition} \label{def:R-rigorous} The \textit{dissipation potential} $\mathscr R: {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)\times{\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E) \to [0,{+\infty}]$ is defined by \begin{equation} \label{def:action} \mathscr R(\rho,{\boldsymbol j} ) := \frac12 \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j} |\boldsymbol \teta) = \frac12 \mathscr F_\Psi(2{\boldsymbol j} |\boldsymbol\upnu_\rho). \end{equation} where ${\boldsymbol\vartheta}_{\rho}^\pm$ are defined by \eqref{def:teta}. If $\upalpha$ is $1$-homogeneous, then $\mathscr R(\rho,{\boldsymbol j})$ is independent of $\boldsymbol \teta$. \end{definition} \begin{lemma} \label{l:alt-char-R} Let $\rho=\rho^a+\rho^\perp\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ and ${\boldsymbol j} ={\boldsymbol j}^a+{\boldsymbol j}^\perp\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}(E)$, with $\rho^a=u\pi$, $2{\boldsymbol j}^a=w\boldsymbol \teta$, and $\rho^\perp$, $j^\perp$ as in Lemma \ref{l:4.8}, \color{black} satisfy $\mathscr R(\rho,{\boldsymbol j} )<{+\infty}$, and let $P\in \calB(V)$ be a $\pi$-negligible set such that $\rho^\perp=\rho\mres P$. 
\begin{enumerate}[label=(\arabic*)] \item We have $|{\boldsymbol j} |(P\times (V\setminus P))= |{\boldsymbol j} |((V\setminus P)\times P)=0$, ${\boldsymbol j}^\perp={\boldsymbol j} \mres(P\times P)$, and \begin{equation} \label{eq:37} \mathscr R(\rho,{\boldsymbol j} )= \mathscr R(\rho^a,{\boldsymbol j}^a)+ \frac12 \mathscr F_{\Upsilon^\infty}({\boldsymbol\vartheta}^-_{\rho^\perp},{\boldsymbol\vartheta}^+_{\rho^\perp},2{\boldsymbol j}^\perp). \end{equation} In particular, if $\upalpha$ is $1$-homogeneous we have the decomposition \begin{equation} \label{eq:175} \mathscr R(\rho,{\boldsymbol j} )= \mathscr R(\rho^a,{\boldsymbol j}^a)+ \mathscr R(\rho^\perp,{\boldsymbol j}^\perp). \end{equation} \item \label{l:alt-char-R:i2} If $\rho\ll \pi$ or $\upalpha$ is sub-linear, i.e.~$\upalpha^\infty\equiv0$, or $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, then ${\boldsymbol j} \ll\boldsymbol \teta$ and ${\boldsymbol j}^\perp\equiv0$. In any of these three cases, $\mathscr R(\rho,{\boldsymbol j} ) = \mathscr R(\rho^a,{\boldsymbol j})$, and setting $E'$ as in \eqref{concentration-set} we have $w=0$ $\boldsymbol \teta$-a.e.\ on $E\setminus E'$, and \eqref{eq:21} holds. \item Furthermore, $\mathscr R$ is convex and lower semicontinuous with respect to setwise convergence in $(\rho,{\boldsymbol j})$. If $\kappa$ satisfies the weak Feller property, then $\mathscr R$ is also lower semicontinuous with respect to weak convergence in duality with continuous bounded functions. \color{black} \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} Equation~\eqref{eq:37} is an immediate consequence of \eqref{eq:94}. To prove the properties of ${\boldsymbol j}$, set $R = V\setminus P$ for convenience. By using the decompositions $2{\boldsymbol j} =w\boldsymbol \teta+w'{\boldsymbol \varsigma}$ and ${\boldsymbol\vartheta}_{\rho}^\pm = {\boldsymbol\vartheta}_{\rho^a}^\pm + {\boldsymbol\vartheta}_{\rho^\perp}^ \pm = {\boldsymbol\vartheta}_{\rho^a}^\pm + z^\pm {\boldsymbol \varsigma}$ introduced in the proof of the previous Lemma, the definition~\eqref{eq:30} implies that ${\boldsymbol\vartheta}^+_{\rho^\perp}(P\times R)=0$, so that $z^+=0$ ${\boldsymbol \varsigma}$-a.e.~in $P\times R$; analogously $z^-=0$ ${\boldsymbol \varsigma}$-a.e.~in $R\times P$. By \eqref{alpha-0} we find that $\upalpha^\infty(z^-,z^+)=0$ ${\boldsymbol \varsigma}$-a.e.~in $(P\times R)\cup (R\times P)$, and therefore $w'=0$ as well, since $\Upsilon^\infty(z^-,z^+,w')<{+\infty}$ ${\boldsymbol \varsigma}$-a.e.\ (see \eqref{heartsuit}). \color{black} We eventually deduce that ${\boldsymbol j}^\perp={\boldsymbol j} \mres(P\times P)$. \textit{(2)} When $\rho\ll\pi$ we can choose $P=\emptyset$ so that ${\boldsymbol j}^\perp={\boldsymbol j}\mres(P\times P)=0$. When $\upalpha$ is sub-linear then $\boldsymbol\upnu_\rho\ll \boldsymbol \teta$, so that ${\boldsymbol j} \ll\boldsymbol \teta$ since $\Psi$ is superlinear. If $\kappa(x,\cdot)\ll \pi$ for every $x\in V$, then ${\mathsf y}_\sharp {\boldsymbol\vartheta}^-_{\rho^\perp}\ll \pi$ and ${\mathsf x}_\sharp {\boldsymbol\vartheta}^+_{\rho^\perp}\ll \pi$, so that ${\boldsymbol\vartheta}^\pm_{\rho^\perp}(P\times P)=0$, since $P$ is $\pi$-negligible. We deduce that ${\boldsymbol j}^\perp(P\times P)=0$ as well. \noindent \textit{(3)} The convexity of $\mathscr R$ follows from the convexity of the functional $\mathscr F_\Upsilon$. The lower semicontinuity follows by combining Lemma \ref{le:kernel-convergence} with Lemma \ref{l:lsc-general}.
\normalcolor \end{proof} \begin{cor} \label{cor:decomposition} Let $\pi_1,\pi_2\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ be mutually singular measures satisfying the detailed balance condition with respect to ~$\kappa$, and let $\boldsymbol \teta_i=\boldsymbol \kappa_{\pi_i}$ be the corresponding symmetric measures in ${\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(E)$ (see Section~\ref{subsub:kernels}). For every pair $(\rho,{\boldsymbol j})$ with $\rho=\rho_1+\rho_2$, ${\boldsymbol j}={\boldsymbol j}_1+{\boldsymbol j}_2$ for $\rho_i\ll\pi_i$ and ${\boldsymbol j}_i\ll\boldsymbol \teta_i$, we have \begin{equation} \label{eq:176} \mathscr R(\rho,{\boldsymbol j})=\mathscr R_1(\rho_1,{\boldsymbol j}_1)+\mathscr R_2(\rho_2,{\boldsymbol j}_2), \end{equation} where $\mathscr R_i$ is the dissipation functional induced by $\boldsymbol \teta_i$. When $\upalpha$ is $1$-homogeneous, $\mathscr R_i=\mathscr R$. \end{cor} \subsection{Curves with finite \texorpdfstring{$\mathscr R$}R-action} In this section, we study the properties of curves with finite $\mathscr R$-action, i.e., elements of \begin{equation} \label{def:Aab} \CER ab: = \biggl\{ (\rho,{\boldsymbol j} ) \in \CE ab:\ \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t <{+\infty} \biggr\}. \end{equation} The finiteness of the $\mathscr R$-action leads to the following remarkable property: A curve $(\rho,{\boldsymbol j})$ with finite $\mathscr R$-action can be separated into two mutually singular curves $(\rho^a,{\boldsymbol j}^a),\ (\rho^\perp,{\boldsymbol j}^\perp)\in \CER ab$ that evolve independently, and contribute independently to~$\mathscr R$. Consequently, finite $\mathscr R$-action preserves $\pi$-absolute continuity of $\rho$: if $\rho_t\ll\pi$ at any $t$, then $\rho_t\ll\pi$ at all~$t$. These properties and others are proved in Theorem~\ref{thm:confinement-singular-part} and Corollary~\ref{cor:propagation-AC} below. \begin{remark}\label{rem:skew-symmetric} If $(\rho,{\boldsymbol j} )\in \CER ab$ then the `skew-symmetrization' $ \tj=({\boldsymbol j} -\symmap {\boldsymbol j} )/2$ \color{black} of ${\boldsymbol j} $ gives rise to a pair $(\rho,\tj)\in \CER ab$ \color{black} as well, and it has lower $\mathscr R$-action: \[ \int_a^b \mathscr R(\rho_t, \tj_t)\, \mathrm{d} t \color{black} \leq \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t. \] This follows from the convexity of $w\mapsto\Upsilon(u_1,u_2,w)$, the symmetry of $(u_1,u_2)\mapsto\Upsilon(u_1,u_2,w)$, and the invariance of the continuity equation~\eqref{eq:ct-eq-def} under the `skew-symmetrization' ${\boldsymbol j} \mapsto \tj$ \color{purple} (cf.\ also the calculations in the proof of Corollary \ref{th:chain-rule-bound2}). \color{black} As a result, we can often assume without loss of generality that a flux ${\boldsymbol j} $ is skew-symmetric, i.e.\ that $\symmap {\boldsymbol j} = -{\boldsymbol j} $. \end{remark} \begin{theorem} \label{thm:confinement-singular-part} Let $(\rho,{\boldsymbol j} )\in \CER ab$ and let us consider the Lebesgue decompositions $\rho_t=\rho_t^a+\rho_t^\perp$ and ${\boldsymbol j}_t ={\boldsymbol j}_t^a+{\boldsymbol j}_t^\perp$ of $\rho_t$ with respect to ~$\pi$ and of ${\boldsymbol j}_t $ with respect to ~$\boldsymbol \teta$. \begin{enumerate} \item We have $(\rho^a,{\boldsymbol j}^a)\in \CER ab$ with \begin{equation} \label{eq:55} \int_a^b \mathscr R(\rho^a_t,{\boldsymbol j}^a_t)\, \mathrm{d} t \le \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t . 
\end{equation} In particular $t\mapsto \rho_t^a(V)$ and $t\mapsto\rho_t^\perp(V)$ are constant. \item If $\upalpha$ is $1$-homogeneous then also $(\rho^\perp,{\boldsymbol j}^\perp)\in \CER ab$ and \begin{equation} \label{eq:55bis} \int_a^b \mathscr R(\rho^a_t,{\boldsymbol j}^a_t)\, \mathrm{d} t + \int_a^b \mathscr R(\rho^\perp_t,{\boldsymbol j}^\perp_t)\, \mathrm{d} t= \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t . \end{equation} \item If $\upalpha$ is sub-linear or $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, then $\rho_t^\perp$ is constant in $[a,b]$ and ${\boldsymbol j}^\perp\equiv0$. \end{enumerate} \end{theorem} \begin{proof} \textit{(1)} Let $\gamma\in {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ be a dominating measure for the curve $\rho$ according to Corollary~\ref{c:narrow-ct} and let us denote by $\gamma=\gamma^a+\gamma^\perp$ the Lebesgue decomposition of $\gamma$ with respect to ~$\pi$; we also denote by $P\in\calB(V)$ a $\pi$-negligible Borel set such that $\gamma^\perp=\gamma\mres P$. Setting $R:=V\setminus P$, since $\rho_t\ll \gamma$ we thus obtain $\rho^a_t=\rho_t\mres R$, $\rho^\perp_t=\rho_t\mres P$. By Lemma \ref{l:alt-char-R} for $\lambda$-a.e.~$t\in (a,b)$ we obtain ${\boldsymbol j}^\perp_t={\boldsymbol j} \mres(P\times P)$ and ${\boldsymbol j}^a_t={\boldsymbol j} \mres(R\times R)$ with $|{\boldsymbol j}_t|(R\times P)=|{\boldsymbol j}_t|(P\times R)=0$. For every function $\varphi\in\mathrm{B}_{\mathrm b}$ we have $\dnabla(\varphi\chi_R)\equiv0$ on $P\times P$ so that we get \begin{align*} \int_V \varphi\,\mathrm{d} \rho_{t_2}^a- \int_V \varphi\,\mathrm{d} \rho_{t_1}^a &= \int_R \varphi\,\mathrm{d} \rho_{t_2}- \int_R \varphi\,\mathrm{d} \rho_{t_1}= \int_{t_1}^{t_2} \iint_{E} \dnabla(\varphi \chi_R)\,\mathrm{d} ({\boldsymbol j}^a_t+{\boldsymbol j}^\perp_t)\,\mathrm{d} t \\&=\int_{t_1}^{t_2} \iint_{R\times R} \dnabla(\varphi \chi_R)\,\mathrm{d} {\boldsymbol j}^a_t\,\mathrm{d} t =\int_{t_1}^{t_2} \iint_{E} \dnabla\varphi\,\mathrm{d} {\boldsymbol j}^a_t\,\mathrm{d} t, \end{align*} showing that $(\rho^a,{\boldsymbol j}^a)$ belongs to $\CE ab$. Estimate \eqref{eq:55} follows by \eqref{eq:37}. \color{red} From Lemma~\ref{l:cont-repr} \color{black} we deduce that $\rho_t^a(V)$ and $\rho_t^\perp(V)$ are constant. \textit{(2)} This follows by the linearity of the continuity equation and \eqref{eq:175}. \textit{(3)} If $\upalpha$ is sub-linear or $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, then Lemma~\ref{l:alt-char-R} shows that ${\boldsymbol j}^\perp\equiv 0$. Since by linearity $(\rho^\perp,{\boldsymbol j}^\perp)\in \CE ab$, we deduce that $\rho^\perp_t$ is constant. \end{proof} \begin{cor}\label{cor:propagation-AC} \color{red} Let $(\rho,{\boldsymbol j} )\in \CER ab$. \color{black} If there exists $t_0\in [a,b]$ such that $\rho_{t_0}\ll\pi$, then we have $\rho_t\ll\pi$ for every $t\in [a,b]$, ${\boldsymbol j}^\perp\equiv0$, and $\odiv {\boldsymbol j}_t\ll \pi$ for $\lambda$-a.e.~$t\in (a,b)$. In particular, there exists an absolutely continuous and a.e.~differentiable map $u:[a,b]\to L^1(V,\pi)$ and a map $w\in L^1(E,\lambda\otimes\boldsymbol \teta)$ such that \begin{equation} \label{eq:42} 2{\boldsymbol j}_\lambda=w \lambda\otimes\boldsymbol \teta,\quad \partial_t u_t(x)=\frac12\int_V \big(w_t(y,x)-w_t(x,y)\big)\,\kappa(x,\mathrm{d} y) \quad\text{for a.e.~}t\in (a,b). 
\end{equation} Moreover there exists a measurable map $\xi:(a,b)\times E\to \mathbb{R}$ such that $w=\xi\upalpha(u^-,u^+)$ $\lambda\otimes\boldsymbol \teta$-a.e.~and \begin{equation} \label{eq:45} \mathscr R(\rho_t,{\boldsymbol j}_t)= \frac12\iint_E \Psi(\xi_t(x,y))\upalpha(u_t(x),u_t(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \quad\text{for a.e.~$t\in (a,b)$.} \end{equation} If $w$ is skew-symmetric, then $\xi$ is skew-symmetric as well and \eqref{eq:42} reads as \begin{equation} \label{eq:177} \partial_t u_t(x)=\int_V w_t(y,x)\,\kappa(x,\mathrm{d} y)= \int_V \xi_t(y,x)\upalpha(u_t(x),u_t(y))\,\kappa(x,\mathrm{d} y) \quad\text{a.e.~in }(a,b). \end{equation} \end{cor} \begin{remark} \label{rem:general-fact} Relations \eqref{eq:42} and \eqref{eq:177} hold both in the sense of a.e.~differentiability of maps with values in $L^1(V,\pi)$ and pointwise a.e.~with respect to ~$x\in V$: more precisely, there exists a set $U\subset V$ of full $\pi$-measure such that for every $x\in U$ the map $t\mapsto u_t(x)$ is absolutely continuous and equations \eqref{eq:42} and \eqref{eq:177} hold for every $x\in U$, a.e.~with respect to ~$t\in (0,T)$. \end{remark} \begin{proof} The first part of the statement is an immediate consequence of Theorem~\ref{thm:confinement-singular-part}, which yields $\rho^\perp_t(V)= 0$ for every $t\in [a,b]$. We can thus write $2{\boldsymbol j} =w(\lambda\otimes \boldsymbol \teta)$ for some measurable map $w:(a,b)\times E\to \mathbb{R}$. Moreover $\odiv {\boldsymbol j} \ll\lambda\otimes \pi$, since ${\mathsf s}_\sharp{\boldsymbol j}\ll{\mathsf s}_\sharp (\lambda\otimes\boldsymbol \teta)= \lambda\otimes\boldsymbol \teta$, and therefore \begin{equation} \label{eq:46} 2 {\boldsymbol j}^\flat={\boldsymbol j}-{\mathsf s}_\sharp{\boldsymbol j}\ll \lambda\otimes\boldsymbol \teta\quad\Longrightarrow\quad \odiv {\boldsymbol j}= {\mathsf x}_\sharp (2{\boldsymbol j}^\flat)\ll \color{black} {\mathsf x}_\sharp(\lambda\otimes\boldsymbol \teta)\color{red} \ll \color{black}\lambda\otimes\pi. \end{equation} Setting $z_t=\mathrm{d}(\odiv {\boldsymbol j}_t)/\mathrm{d}\pi$ we get for a.e.~$t\in (a,b)$ \begin{align*} \partial_t u_t&=-z_t,\\ -2\int_V \varphi \,z_t\,\mathrm{d}\pi&= \iint_E (\varphi(y)-\varphi(x))w_t(x,y)\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) = \iint_E \varphi(x) (w_t(y,x)-w_t(x,y))\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \\&= \int_V \varphi(x) \Big(\int_V (w_t(y,x)-w_t(x,y)) \kappa(x,\mathrm{d} y)\Big)\pi(\mathrm{d} x), \end{align*} The existence of $\xi$ and formula~\eqref{eq:45} follow from Lemma \ref{l:alt-char-R}\ref{l:alt-char-R:i2}. \end{proof} \subsection{Chain rule for convex entropies} \label{subsec:chain-rule} Let us now consider a continuous convex function $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$ that is differentiable in $(0,+\infty)$. The main choice for $\upbeta$ will be the function~$\upphi$ that appears in the definition of the driving functional~$\mathscr E$ (see Assumption~\ref{ass:S}), and the example of the Boltzmann-Shannon entropy function~\eqref{logarithmic-entropy} illustrates why we only assume differentiability away from zero. 
By setting $\upbeta'(0)=\lim_{r\downarrow0} \upbeta'(r)\in [-\infty,{+\infty})$, we define the function ${\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta:\mathbb{R}_+\times \mathbb{R}_+\to[-\infty,+\infty]$ by \begin{equation} \label{eq:102} {\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v):= \begin{cases} \upbeta'(v)-\upbeta'(u)&\text{if }u,v\in \mathbb{R}_+\times \mathbb{R}_+\setminus \{(0,0)\},\\ 0&\text{if }u=v=0. \end{cases} \end{equation} Note that ${\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta$ is continuous (with extended real values) in $\mathbb{R}_+\times \mathbb{R}_+\setminus\{(0,0)\}$ and is finite and continuous whenever $\upbeta'(0)>-\infty$. When $\upbeta'(0)=-\infty$ we have ${\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(0,v)=-{\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,0)={+\infty}$ for every $u,v>0$. In the following we will adopt the convention \begin{equation} \label{eq:75} |\pm\infty|={+\infty},\quad a\cdot ({+\infty}):= \begin{cases} {+\infty}&\text{if }a>0,\\ 0&\text{if }a=0,\\ -\infty&\text{if }a<0 \end{cases} \quad a\cdot(-\infty)=-a\cdot ({+\infty}), \end{equation} for every $a\in [-\infty,+\infty]$ and, using this convention, we define the extended valued function $\rmB_\upbeta:\mathbb{R}_+\times\mathbb{R}_+\times \mathbb{R}\to [-\infty,+\infty]$ by \begin{equation} \label{eq:105} \rmB_\upbeta(u,v,w):={\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v)w. \end{equation} We want to study the differentiability properties of the functional $\mathscr F_\upbeta(\cdot|\pi)$ along solutions $(\rho,{\boldsymbol j})\in \CEI I$ of the continuity equation. Note that if $\upbeta$ is superlinear and $\mathscr F_\upbeta$ is finite at a time $t_0\in I$, then Corollary \ref{cor:propagation-AC} shows that $\rho_t\ll\pi$ for every $t\in I$. If $\upbeta$ has linear growth then \begin{equation} \label{eq:100} \mathscr F_\upbeta(\rho_t|\pi)= \int_V \upbeta(u_t)\,\mathrm{d}\pi+\upbeta^\infty(1)\rho^\perp(V),\quad \rho_t=u_t\pi+\rho_t^\perp, \end{equation} where we have used that $t \mapsto \rho_t^\perp(V)$ is constant. Thus, we are reduced to studying $\mathscr F_\upbeta$ along $(\rho^a,{\boldsymbol j}^a)$, which is still a solution of the continuity equation. The absolute continuity property of $\rho_t$ with respect to ~$\pi$ is therefore quite a natural assumption in the next result. \begin{theorem}[Chain rule I] \label{th:chain-rule-bound} Let $(\rho,{\boldsymbol j} )\in \CER ab$ with $\rho_t=u_t\pi\ll \pi$ and let $2{\boldsymbol j}^\flat={\boldsymbol j}-{\mathsf s}_\sharp {\boldsymbol j}=w^\flat\lambda\otimes\boldsymbol \teta$ \color{black} as in Corollary \ref{cor:propagation-AC} satisfy \begin{equation} \label{ass:th:CR} \int_V \upbeta(u_a)\,\mathrm{d}\pi<{+\infty},\quad \int_a^b\iint_E\Big(\rmB_\upbeta(u_t(x),u_t(y),w^\flat_t(x,y))\Big)_+\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t<{+\infty} \end{equation} Then the map $t\mapsto \int_V \upbeta(u_t)\,\mathrm{d}\pi$ is absolutely continuous in $[a,b]$, the map $\rmB_\upbeta(u^-,u^+,w^\flat)$ is $\lambda\otimes\boldsymbol \teta$-integrable and \begin{equation} \label{eq:CR} \frac{\mathrm{d}}{\mathrm{d} t}\int_V \upbeta(u_t)\,\mathrm{d}\pi = \frac12\iint_{E} \rmB_\upbeta(u_t(x),u_t(y),w^\flat_t(x,y)) \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \quad\text{for a.e.~}t\in (a,b). 
\end{equation} \end{theorem} \begin{remark} At first sight condition~\eqref{ass:th:CR} on the positive part of $\rmB_\upbeta$ is remarkable: we only require the positive part of $\rmB_\upbeta$ to be integrable, but in the assertion we obtain integrability of the negative part as well. This integrability arises from the combination of the upper bound on $\int_V \upbeta(u_a)\,\mathrm{d} \pi$ in~\eqref{ass:th:CR} with the lower bound $\upbeta\geq0$. \end{remark} \begin{proof} {\em Step 1:\ Chain rule for an approximation.} Define for $k\in \mathbb{N}$ an approximation $\upbeta_k$ of $\upbeta$ as follows:\ Let $\upbeta_k'(\sigma):=\max\{-k,\min\{\upbeta'(\sigma),k\}\}$ be the truncation of $\upbeta'$ to the interval $[-k,k]$. Due to the assumptions on $\upbeta$, we may assume that $\upbeta$ achieves a minimum at the point $s_0\in[0,{+\infty})$. Now set $\upbeta_k(s) := \upbeta(s_0) + \int_{s_0}^s \upbeta_k'(\sigma)\,\mathrm{d} \sigma$. Note that $\upbeta_k$ is differentiable and globally Lipschitz, and converges monotonically to $\upbeta(s)$ for all $s\geq0$ as $k\to\infty$. For each $k\in\mathbb{N}$ and $t\in [a,b]$ we define \[ S_{k}(t): = \int_{V} \upbeta_k(u_t)\, \mathrm{d} \pi,\quad S(t): = \int_{V} \upbeta(u_t)\, \mathrm{d}\pi. \] By convexity and Lipschitz continuity of $\upbeta_k$, we have that \begin{align*} \upbeta_k(u_t(x))-\upbeta_k(u_s(x)) \le \upbeta_k'(u_t(x))( u_t(x)-u_s(x)) \le k| u_t(x)-u_s(x)|\,. \end{align*} Hence, we deduce by Corollary \ref{cor:propagation-AC} that for every $a\le s<t\le b$ \begin{align*} S_{k}(t) - S_{k}(s) &= \int_{V} \bigl[\upbeta_k(u_t(x))-\upbeta_k(u_s(x))\bigr]\pi(\mathrm{d} x) \\ &\le k\|u_t-u_s\|_{L^1(V;\pi)} \le k\int_s^t \|\partial_r u_r\|_{L^1(V;\pi)}\,\mathrm{d} r. \end{align*} We conclude that the function $t\mapsto S_k(t)$ is absolutely continuous. Let us pick a point $t\in (a,b)$ of differentiability for $t\mapsto S_k(t)$: it is easy to check that \begin{align*} \frac{\mathrm{d} }{\mathrm{d} t}S_k(t) &= \int_{V} \upbeta'_k(u_t)\,\partial_t u_t \,\mathrm{d}\pi = \frac12\iint_{E} \dnabla \upbeta'_k(u_t)w^\flat_t \, \mathrm{d}\boldsymbol \teta\,, \end{align*} which by integrating over time yields \begin{equation}\label{ineq:est-S-k} S_k(t) - S_k(s) = \frac12\int_s^t\iint_{E} \dnabla \upbeta'_k(u_r)w^\flat_r\, \mathrm{d} \boldsymbol \teta\,\mathrm{d} r \qquad \text{for all } a \leq s \leq t \leq b. \end{equation} \paragraph{\em Step 2:\ The limit $k\to\infty$} Since $0\leq \upbeta_k''\leq \upbeta''$ we have \begin{equation} \label{eq:103} 0\le {\mathrm A}_{\upbeta_k}(u,v)=\upbeta_k'(v)-\upbeta_k'(u)\le \upbeta'(v)-\upbeta'(u)={\mathrm A}_\upbeta(u,v)\quad\text{whenever }0\le u\le v \end{equation} and \begin{equation} \label{eq:104} |\upbeta_k'(v)-\upbeta_k'(u)|\le |{\mathrm A}_\upbeta(u,v)|\quad \text{for every }u,v\in \mathbb{R}_+. \end{equation} We can thus estimate the right-hand side in \eqref{ineq:est-S-k} \begin{align} (B_k)_+=\left( \dnabla \upbeta'_k(u)\, w^\flat\right)_+ & \le \left({\mathrm A}_\upbeta(u^-,u^+) w^\flat\right)_+=B_+ \label{est:CR:w-ona-phi-k} \end{align} where we have used the short-hand notation \begin{equation} \label{eq:106} B_k(r,x,y)=\rmB_{\upbeta_k}(u_r(x),u_r(y),w^\flat_r(x,y)),\quad B(r,x,y):=\rmB_\upbeta(u_r(x),u_r(y),w^\flat_r(x,y)).
\end{equation} Assumption~\eqref{ass:th:CR} implies that the right-hand side in~\eqref{est:CR:w-ona-phi-k} is an element of $L^1([a,b]\times E;\lambda\otimes\boldsymbol \teta)$, so that in particular $B_+\in \mathbb{R}$ for $(\lambda\otimes\boldsymbol \teta)$-a.e.~$(t,x,y)$. Moreover, \eqref{ineq:est-S-k} yields \begin{align} \int_a^b \iint_E (B_k)_-\,\mathrm{d} \boldsymbol \teta_\lambda &=\notag \int_a^b \iint_E (B_k)_+\,\mathrm{d} \boldsymbol \teta_\lambda+ S_k(a)-S_k(b) \\ &\le \int_a^b \iint_E (B)_+\,\mathrm{d} \boldsymbol \teta_\lambda+ S(a)<{+\infty}.\label{eq:193} \end{align} Note that the sequence $k\mapsto (B_k)_-$ is eventually $0$ or is monotonically increasing to $B_-$. Beppo Levi's Monotone Convergence Theorem and the uniform estimate \eqref{eq:193} then yield that $B_-\in L^1((a,b)\times E,\lambda\otimes\boldsymbol \teta)$, thus showing that $\rmB_\upbeta(u^-,u^+,w^\flat)$ is $(\lambda\otimes\boldsymbol \teta)$-integrable as well. We can thus pass to the limit in \eqref{ineq:est-S-k} as $k\to{+\infty}$ and we have \begin{equation} \lim_{k\to{+\infty}} \dnabla \upbeta'_k(u)\, w^\flat =B \quad \text{$\lambda\otimes \boldsymbol \teta$-a.e.~in $(a,b)\times E$.} \label{eq:57} \end{equation} The identity~\eqref{eq:57} is obvious if $\upbeta'(0)$ is finite, and if $\upbeta'(0)=-\infty$ then it follows from the upper bound \eqref{est:CR:w-ona-phi-k} and the fact that the right-hand side of \eqref{est:CR:w-ona-phi-k} is finite almost everywhere. \normalcolor The Dominated Convergence Theorem then implies that \[ \int_s^t\iint_{E} \dnabla \upbeta'_k(u_r)\, w^\flat_r \,\mathrm{d} \boldsymbol \teta\,\mathrm{d} r \quad\longrightarrow\quad \int_s^t\iint_{E}B\, \mathrm{d} \boldsymbol \teta\,\mathrm{d} r \qquad\text{as}\quad k\to\infty\,. \] By the monotone convergence theorem $S(t) = \lim_{k\to {+\infty}} S_k(t)\in [0,{+\infty}]$ for all $t\in [a,b]$, and the limit is finite for $t=a$. For all $t\in [a,b]$, therefore, \[ S(t) = S(a)+ \frac12\int_a^t \iint_E B \, \mathrm{d} \boldsymbol \teta \,\mathrm{d} r, \] which shows that $S$ is absolutely continuous and \eqref{eq:CR} holds. \end{proof} \par We now introduce three functions associated with the (general) continuous convex function $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$, differentiable in $(0,+\infty)$, that we have considered so far, and whose main example will be the entropy density $\upphi$ from \eqref{cond-phi}.
\normalcolor Recalling the definition \eqref{eq:102}, the convention \eqref{eq:75}, and setting $\Psi^*(\pm\infty):={+\infty}$, let us now introduce the functions ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^+_\upbeta, {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upbeta,{\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta:\mathbb{R}_+^2\to[0,{+\infty}]$ \begin{subequations} \label{subeq:D} \begin{align} \label{eq:181} {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upbeta(u,v)&:= \Psi^*({\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v))\upalpha(u,v)\\ &\phantom:= \begin{cases} \Psi^*({\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v))\upalpha(u,v)&\text{if }\upalpha(u,v)>0,\\ 0&\text{otherwise,} \end{cases}\notag\\[2\jot] \label{eq:52} {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^+_\upbeta(u,v)&:= \begin{cases} \Psi^*({\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v))\upalpha(u,v)&\text{if }\upalpha(u,v)>0,\\ 0&\text{if }u=v=0,\\ {+\infty}&\text{otherwise, i.e.~if }\upalpha(u,v)=0,\ u\neq v, \end{cases}\\[2\jot] \label{eq:182} {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta(\cdot,\cdot)&:= \text{the lower semicontinuous envelope of ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta^+$ in $\mathbb{R}_+^2$}. \end{align} \end{subequations} The function ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi$ corresponding to the choice $\upbeta = \upphi$ shall feature in the (rigorous) definition of the \emph{Fisher information} functional $\mathscr{D}$, cf.\ \eqref{eq:def:D} ahead. Nonetheless, it is significant to introduce the functions $ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upphi$ and $ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^+_\upphi$ as well, cf.\ Remarks \ref{rmk:why-interesting-1} and \ref{rmk:Mark} ahead. \color{black} \begin{example}[The functions ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^\pm_\upphi$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi$ in the quadratic and in the $\cosh$ case] \label{ex:Dpm} In the two examples of the linear equation~\eqref{eq:fokker-planck}, with Boltzmann entropy function~$\upphi$, and with quadratic and cosh-type potentials $\Psi^*$ (see~\eqref{choice:cosh} and~\eqref{choice:quadratic}), the functions ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^\pm_\upphi$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi$ take the following forms: \begin{enumerate} \item If $\Psi^*(s)=s^2/2$ and, accordingly, $\upalpha(u,v)=(u-v)/(\log(u)-\log(v))$ for all $u,v >0$ (with $\upalpha(u,v)=0$ otherwise), then \begin{align*} {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upphi(u,v) &= \begin{cases} \frac{1}{2}(\log(u)-\log(v))(u-v) & \text{if } u,\, v>0, \\ 0 & \text{if $u=0$ or $v=0$}, \end{cases}\\ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi(u,v) = {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^+_\upphi(u,v)&= \begin{cases} \frac{1}{2}(\log(u)-\log(v))(u-v) & \text{if } u,\, v>0, \\ 0 & \text{if } u=v=0, \\ {+\infty} & \text{if } u=0 \text{ and } v \neq 0, \text{ or vice versa}. 
\end{cases} \end{align*} {For this example ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi^+$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi$ are convex, and all three functions are lower semicontinuous.} \item For the case $\Psi^*(s)=4\bigl(\cosh(s/2)-1\bigr)$ and, accordingly, $\upalpha(u,v)=\sqrt{u v}$ for all $u,v \geq 0$, then \begin{align*} {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upphi(u,v) &= \begin{cases} 2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2 & \text{if } u,\, v>0, \\ 0 & \text{if $u=0$ or $v=0$}, \end{cases}\\ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi(u,v) &= 2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2\qquad {\text{for all }u,v\geq 0,}\\ {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^+_\upphi(u,v) &= \begin{cases} 2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2 & \text{if $u, v>0$ or $u=v=0$}, \\ {+\infty} & \text{if } u=0 \text{ and } v \neq 0, \text{ or vice versa}. \end{cases} \end{align*} For this example, ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi^+$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi$ again are convex, but only ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upphi$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi$ are lower semicontinuous. \end{enumerate} \end{example} We collect a number of general properties of ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta^\pm$. \begin{lemma} \label{le:trivial-but-useful} \begin{enumerate}[ref=(\arabic*)] \item ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta^-\leq {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta\leq {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta^+$; \item ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta^-$ and ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upbeta$ are lower semicontinuous; \item \label{le:trivial-but-useful:ineq} For every $u,v\in \mathbb{R}_+$ and $w\in \mathbb{R}$ we have \begin{equation} \label{eq:107} \bigl|\rmB_\upbeta(u,v,w)\bigr| \le \Upsilon(u,v,w)+{\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upbeta(u,v). \end{equation} \item Moreover, when the right-hand side of \eqref{eq:107} is finite, then the equality \begin{equation} \label{eq:107a} -\rmB_\upbeta(u,v,w) = \Upsilon(u,v,w)+{\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upbeta(u,v) \end{equation} is equivalent to the condition \begin{equation} \label{eq:109} \upalpha(u,v)=w=0\quad\text{or}\quad\biggl[ \upalpha(u,v)>0,\ {\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v)\in \mathbb{R},\ -w=(\Psi^*)'\big({\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta(u,v)\big)\upalpha(u,v)\biggr]. \end{equation} \end{enumerate} \end{lemma} \begin{proof} It is not difficult to check that ${\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}^-_\upbeta$ is lower semicontinuous: such a property is trivial where $\upalpha$ vanishes, and in all the other cases it is sufficient to use the positivity and the continuity of $\Psi^*$ in $[-\infty,+\infty]$, the continuity of ${\mathrm A}} \def\rmB{{\mathrm B}} \def\rmC{{\mathrm C}_\upbeta$ in $\mathbb{R}_+^2\setminus\{(0,0)\}$, and the continuity and the positivity of $\upalpha$. 
It is also obvious that ${\mathrm D}^-_\upbeta\le {\mathrm D}^+_\upbeta$; since ${\mathrm D}^-_\upbeta$ is lower semicontinuous and ${\mathrm D}_\upbeta$ is the lower semicontinuous envelope of ${\mathrm D}^+_\upbeta$, it follows that ${\mathrm D}^-_\upbeta\le {\mathrm D}_\upbeta\le {\mathrm D}^+_\upbeta$.

For the inequality~\eqref{eq:107}, let us distinguish the various cases:
\begin{itemize}
\item If $w=0$ or $u=v=0$, then ${\mathrm B}_\upbeta(u,v,w) =0$, so that \eqref{eq:107} is trivially satisfied. We can thus assume $w\neq0$ and $u+v>0$.
\item If $\upalpha(u,v)=0$, then $\Upsilon(u,v,w) ={+\infty}$, so that \eqref{eq:107} is trivially satisfied as well. We can thus assume $\upalpha(u,v)>0$.
\item If ${\mathrm A}_\upbeta(u,v)\in \{\pm\infty\}$, then ${\mathrm D}_\upbeta^-(u,v)={+\infty}$ and the right-hand side of \eqref{eq:107} is infinite.
\item It remains to consider the case when ${\mathrm A}_\upbeta(u,v)\in \mathbb{R}$, $\upalpha(u,v)>0$ and $w\neq0$. In this situation
\begin{align}
\bigl|{\mathrm B}_\upbeta(u,v,w)\bigr|&=\bigl|{\mathrm A}_\upbeta(u,v)\,w\bigr|= \bigg|{\mathrm A}_\upbeta(u,v)\frac{w}{\upalpha(u,v)}\bigg| \upalpha(u,v) \notag \\
&\le \Psi\Big(\frac w{\upalpha(u,v)}\Big) \upalpha(u,v)+ \Psi^*\Big({\mathrm A}_\upbeta(u,v)\Big) \upalpha(u,v) \notag\\
&=\Upsilon(u,v,w)+{\mathrm D}_\upbeta^-(u,v),
\label{eq:108}
\end{align}
where the inequality follows from the Young--Fenchel inequality and the symmetry of $\Psi$. This proves~\eqref{eq:107}.
\end{itemize}
It is now easy to study the case of equality \eqref{eq:107a}, when the right-hand side of \eqref{eq:107} and \eqref{eq:107a} is finite. Finiteness implies that either $\upalpha(u,v)>0$ and ${\mathrm A}_\upbeta(u,v)\in \mathbb{R}$, or $\upalpha(u,v)=0$ and $w=0$. In the former case, calculations similar to \eqref{eq:108} show that $-w=(\Psi^*)'\big({\mathrm A}_\upbeta(u,v)\big)\upalpha(u,v)$. In the latter case, $\upalpha(u,v) = w =0$ yields ${\mathrm B}_\upbeta(u,v,w)=0$, $\Upsilon (u,v,w) = \hat{\Psi}(w,\upalpha(u,v)) = \hat{\Psi}(0,0) = 0$, and ${\mathrm D}^-_\upbeta(u,v)=0$ by \eqref{eq:181}.
\end{proof}

\par
As a consequence of Lemma \ref{le:trivial-but-useful}, we deduce a chain-rule inequality involving the smallest of the functions above, ${\mathrm D}_\upbeta^-$, and thus, a fortiori, the function ${\mathrm D}_\upbeta$ which, for $\upbeta=\upphi$, will enter the definition of the Fisher information $\mathscr{D}$.

\begin{cor}[Chain rule II]
\label{th:chain-rule-bound2}
Let $(\rho,{\boldsymbol j} )\in \CER ab$ with $\rho_t=u_t\pi\ll \pi$ and $2{\boldsymbol j}_\lambda=w (\lambda\otimes\boldsymbol \teta)$ satisfy
\begin{equation}
\label{ass:th:CR2}
\int_V \upbeta(u_a)\,\mathrm{d}\pi<{+\infty}, \quad \int_a^b \iint_E {\mathrm D}^-_\upbeta(u_t(x),u_t(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t<{+\infty}.
\end{equation}
Then the map $t\mapsto \int_V \upbeta(u_t)\,\mathrm{d}\pi$ is absolutely continuous in $[a,b]$ and
\begin{equation}
\label{eq:CR2}
\left|\frac{\mathrm{d}}{\mathrm{d} t}\int_V \upbeta(u_t)\,\mathrm{d}\pi\right| \le \mathscr R(\rho_t,{\boldsymbol j}_t)+\frac12\iint_E {\mathrm D}^-_\upbeta(u_t(x),u_t(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\quad\text{for a.e.~}t\in (a,b).
\end{equation}
If moreover
\[
-\frac{\mathrm{d}}{\mathrm{d} t}\int_V \upbeta(u_t)\,\mathrm{d}\pi =\mathscr R(\rho_t,{\boldsymbol j}_t)+\frac12\iint_E {\mathrm D}^-_\upbeta(u_t(x),u_t(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)
\quad\text{for a.e.~}t\in (a,b),
\]
then $2{\boldsymbol j}={\boldsymbol j}^\flat$ and
\begin{equation}
\label{eq:110}
-w_t(x,y)=(\Psi^*)'\big({\mathrm A}_\upbeta(u_t(x),u_t(y))\big)\upalpha(u_t(x),u_t(y)) \quad\text{for a.e.~$t\in(a,b)$ and $\boldsymbol \teta$-a.e.~$(x,y)\in E$}.
\end{equation}
In particular, for a.e.~$t\in(a,b)$, $w_t=0$ $\boldsymbol \teta$-a.e.~in $\big\{(x,y)\in E: \upalpha(u_t(x),u_t(y))=0\big\}$.
\end{cor}

\begin{proof}
We recall that for $\lambda$-a.e.~$t\in (a,b)$
\begin{displaymath}
\mathscr R(\rho_t,{\boldsymbol j}_t)= \frac12\iint_E \Upsilon(u_t(x),u_t(y),w_t(x,y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y).
\end{displaymath}
We can then apply Lemma \ref{le:trivial-but-useful} and Theorem \ref{th:chain-rule-bound}, observing that
\begin{equation}
\label{eq:178}
\iint_E \Upsilon(u_t(x),u_t(y),w^\flat_t(x,y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\le \iint_E \Upsilon(u_t(x),u_t(y), w_t(x,y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y),
\end{equation}
since
\begin{align*}
\Upsilon(u_t(x),u_t(y),w^\flat_t(x,y))&= \Upsilon\Bigl(u_t(x),u_t(y),\tfrac12 \bigl(w_t(x,y)-w_t(y,x)\bigr)\Bigr) \\
&\le \frac12\Upsilon(u_t(x),u_t(y),w_t(x,y)) +\frac12\Upsilon(u_t(x),u_t(y),w_t(y,x))
\end{align*}
by the convexity of $\Upsilon(u,v,\cdot)$ and its symmetry in the last variable (inherited from $\Psi$), and the integral of the last term coincides with the right-hand side of \eqref{eq:178} thanks to the symmetry of $\boldsymbol \teta$.
\end{proof}

\subsection{Compactness properties of curves with uniformly bounded $\mathscr R$-action}
\label{subsec:compactness}
The next result shows an important compactness property for collections of curves in $\CER ab$ with bounded action. Recalling the discussion and the notation of Section~\ref{subsub:kernels}, we will systematically associate with a given $(\rho,{\boldsymbol j})\in \CEIR I$, $I=[a,b]$, a pair of measures $\rho_\lambda\in {\mathcal M}^+(I\times V)$, ${\boldsymbol j}_\lambda\in {\mathcal M}(I\times E)$, obtained by integrating with respect to~the Lebesgue measure $\lambda$ on $I$:
\begin{equation}
\label{eq:95}
\rho_\lambda(\mathrm{d} t,\mathrm{d} x)= \lambda(\mathrm{d} t)\rho_t(\mathrm{d} x),\quad {\boldsymbol j}_\lambda(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y)=\lambda(\mathrm{d} t){\boldsymbol j}_t(\mathrm{d} x,\mathrm{d} y).
\end{equation}
Similarly, we define
\begin{equation}
\label{eq:97}
\begin{aligned}
{\boldsymbol\vartheta}_{\rho,\lambda}^\pm(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y):={}& ({\boldsymbol\vartheta}_{\rho}^\pm)_\lambda(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y)= \lambda(\mathrm{d} t){\boldsymbol\vartheta}_{\rho_t}^\pm(\mathrm{d} x,\mathrm{d} y) \\
={}& \lambda(\mathrm{d} t)\rho_t(\mathrm{d} x)\kappa(x,\mathrm{d} y) ={\boldsymbol\vartheta}_{\rho_\lambda}^\pm(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y).
\end{aligned}
\end{equation}
It is not difficult to check that
\begin{equation}
\label{eq:96}
\int_I \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t= \frac12\mathscr F_\Upsilon({\boldsymbol\vartheta}_{\rho,\lambda}^-,{\boldsymbol\vartheta}_{\rho,\lambda}^+,2{\boldsymbol j}_\lambda|\lambda\otimes\boldsymbol \teta).
\end{equation}

\begin{prop}[Bounded $\int\mathscr R$ implies compactness and lower semicontinuity]
\label{prop:compactness}
Let $(\rho^n,{\boldsymbol j}^n)_n \subset \CER ab$ be a sequence such that the initial states $\rho^n_a$ are $\pi$-absolutely continuous and relatively compact with respect to setwise convergence. Assume that
\begin{gather}
\label{eq:lem:compactness:assumptions}
M:=\sup_{n\in\mathbb{N}}\int_a^b \mathscr R(\rho_t^n, {\boldsymbol j}_t^n) \,\mathrm{d} t<{+\infty}.
\end{gather}
Then there exist a subsequence (not relabelled) and a pair $(\rho,{\boldsymbol j} )\in \CER ab$ such that, for the measures ${\boldsymbol j}_\lambda^n \in {\mathcal M}([a,b]\times E)$ defined as in \eqref{eq:95}, there holds
\begin{subequations}
\label{cvs-rho-n-j-n}
\begin{align}
& \label{converg-rho-n} \rho_t^n\to \rho_t\quad\text{setwise in ${\mathcal M}^+(V)$ for all $t\in[a,b]$}\,,\\
& \label{converg-j-n} {\boldsymbol j}_\lambda^n\rightharpoonup {\boldsymbol j}_\lambda \quad\text{setwise in ${\mathcal M}([a,b]\times E)$}\,,
\end{align}
\end{subequations}
where ${\boldsymbol j}_\lambda$ is induced (in the sense of \eqref{eq:95}) by a $\lambda$-integrable family $({\boldsymbol j}_t)_{t\in [a,b]}\subset {\mathcal M}(E)$. In addition, for any sequence $(\rho^n,{\boldsymbol j}^n)$ converging to $(\rho,{\boldsymbol j} )$ in the sense of~\eqref{cvs-rho-n-j-n}, we have
\begin{equation}
\label{ineq:lsc-R}
\int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t \leq \liminf_{n\to\infty} \int_a^b \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\, \mathrm{d} t .
\end{equation}
\end{prop}

\begin{proof}
Let us first remark that the mass conservation property of the continuity equation yields
\begin{equation}
\label{eq:62}
\rho_t^n(V)=\rho_a^n(V)\le M_1\quad\text{for every }t\in [a,b],\ n\in \mathbb{N}
\end{equation}
for a suitable finite constant $M_1$ independent of $n$. We deduce that for every $t\in [a,b]$ the measures $\boldsymbol \teta_{\rho_t^n}^\pm$ have total mass bounded by $M_1 \|\kappa_V\|_\infty$, so that estimate \eqref{eq:31} for $y=(c,c)\in D(\upalpha_*)$ yields
\begin{equation}
\label{eq:63}
\boldsymbol\upnu_{\rho^n_t}(E)= \upalpha[\boldsymbol \teta^+_{\rho_t^n},\boldsymbol \teta^-_{\rho_t^n}|\boldsymbol \teta](E) \le M_2 \quad \text{for every }t\in [a,b],\ n\in \mathbb{N},
\end{equation}
where $M_2:=2 c\,M_1 \|\kappa_V\|_\infty -\upalpha_*(c,c)\boldsymbol \teta(E)$. Jensen's inequality \eqref{eq:79} and the monotonicity property \eqref{eq:80} yield
\begin{equation}
\label{eq:64}
\mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\ge \frac12 \hat\Psi\Bigl(2{\boldsymbol j}_t^n(E),\boldsymbol\upnu_{\rho^n_t}(E)\Bigr)\ge \frac12 \hat\Psi\Bigl(2{\boldsymbol j}_t^n(E),M_2\Bigr)= \frac12 \Psi\Bigl(\frac{2{\boldsymbol j}_t^n(E)} {M_2}\Bigr) M_2,
\end{equation}
with $\hat\Psi$ the perspective function associated with $\Psi$, cf.~\eqref{eq:76}. Since $\Psi$ has superlinear growth, we deduce that the sequence of functions $t\mapsto |{\boldsymbol j}_t^n|(E)$ is equi-integrable.
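In more detail (a standard superlinearity argument, spelled out here only for convenience): since $\Psi$ is even, the same chain of inequalities as in \eqref{eq:64} also gives
\[
\mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\ \ge\ \frac12\,\Psi\Bigl(\frac{2|{\boldsymbol j}^n_t|(E)}{M_2}\Bigr) M_2\ =:\ G\bigl(|{\boldsymbol j}^n_t|(E)\bigr),
\]
with $G$ convex and superlinear, so that \eqref{eq:lem:compactness:assumptions} yields $\sup_{n\in\mathbb{N}}\int_a^b G\bigl(|{\boldsymbol j}^n_t|(E)\bigr)\,\mathrm{d} t\le M$; the equi-integrability then follows from the de la Vall\'ee Poussin criterion.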
Since the sequence $(\rho_a^n)_n$, with $\rho_a^n = u_a^n \pi \ll \pi$, is relatively compact with respect to setwise convergence, by Theorems \ref{thm:equivalence-weak-compactness}(6) and \ref{thm:L1-weak-compactness}(3) there exist a convex superlinear function $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$ and a constant $M_3<{+\infty}$ such that
\begin{equation}
\label{eq:58}
\mathscr F_\upbeta(\rho^n_a|\pi)= \int_V \upbeta(u_a^n)\,\mathrm{d}\pi\le M_3\quad \text{for every }n\in \mathbb{N}.
\end{equation}
Possibly adding $M_1$ to $M_3$, it is not restrictive to assume that $\upbeta'(r)\ge1$. We can then apply Lemma \ref{le:slowly-increasing-entropy} and find a smooth convex superlinear function $\upomega:\mathbb{R}_+\to\mathbb{R}_+$ such that \eqref{eq:41} holds. In particular
\begin{align}
\label{eq:59}
\int_V \upomega(u_a^n)\,\mathrm{d}\pi&\le M_1,\\ \notag
\int_a^b \iint_E {\mathrm D}^-_\upomega(u_r^n(x),u_r^n(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} r&\le \int_a^b \iint_E (u_r^n(x)+u^n_r(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} r \\
&\leq 2(b-a)M_1\|\kappa_V\|_\infty.\label{eq:60}
\end{align}
By Corollary~\ref{th:chain-rule-bound2} we obtain
\begin{equation}
\label{eq:61}
\int_V \upomega(u_t^n)\,\mathrm{d}\pi\le M+(b-a)M_1 \|\kappa_V\|_\infty + M_1\quad \text{for every }t\in [a,b].
\end{equation}
By \eqref{est:ct-eq-TV} we deduce that
\begin{equation}
\label{eq:65}
\|u^n_t-u^n_s\|_{L^1(V,\pi)}\le \zeta(s,t)\quad \text{where}\quad \zeta(s,t): = 2\sup_{n\in \mathbb{N}} \int_{s}^{t} |{\boldsymbol j}_r^n|(E)\,\mathrm{d} r \,.
\end{equation}
Since $t\mapsto |{\boldsymbol j}^n_t|(E)$ is equi-integrable we have
\[
\lim_{(s,t)\to (r,r)} \zeta(s,t) =0 \qquad \text{for all } r \in [a,b].
\]
We conclude that the sequence of maps $(u_t^n)_{t\in [a,b]}$ satisfies the conditions of the compactness result \cite[Prop.\ 3.3.1]{AmbrosioGigliSavare08}, which yields the existence of a (not relabelled) subsequence and of an $L^1(V,\pi)$-continuous (thus also weakly continuous) map $[a,b]\ni t \mapsto u_t\in L^1(V,\pi)$ such that $u^n_t\rightharpoonup u_t$ weakly in $L^1(V,\pi)$ for every $t\in [a,b]$. By \eqref{eq:72} we also deduce that \eqref{converg-rho-n} holds, i.e.
\[
\rho_t^n\to \rho_t=u_t\pi\quad\text{setwise in ${\mathcal M}^+(V)$ for all $t\in[a,b]$}.
\]
It is also clear that for every $t\in [a,b]$ we have $\boldsymbol \teta_{\rho_t^n}^\pm\to\boldsymbol \teta_{\rho_t}^\pm$ setwise. The Dominated Convergence Theorem and~\eqref{eq:70}, \eqref{eq:85} imply that the corresponding measures $\boldsymbol \teta_{\rho^n,\lambda}^\pm$ converge setwise to $\boldsymbol \teta_{\rho,\lambda}^\pm$, and are therefore equi-absolutely continuous with respect to~$\boldsymbol \teta_\lambda=\lambda\otimes\boldsymbol \teta$ (recall \eqref{eq:73bis}). Let us now show that also the sequence $({\boldsymbol j}^n_\lambda)_{n}$ is equi-absolutely continuous with respect to~$\boldsymbol \teta_\lambda$, so that \eqref{converg-j-n} holds up to extracting a further subsequence.
Selecting a constant $c>0$ sufficiently large so that $\upalpha(u_1,u_2)\le c(1+u_1+u_2)$, the trivial estimate $\boldsymbol\upnu_\rho\le c(\boldsymbol \teta+{\boldsymbol\vartheta}_\rho^-+{\boldsymbol\vartheta}_\rho^+)$ and the monotonicity property \eqref{eq:80} yield
\begin{equation}
\label{eq:66}
M\ge \int_a^b\mathscr R(\rho^n_t,{\boldsymbol j}^n_t )\,\mathrm{d} t= \frac12\mathscr F_\Psi(2{\boldsymbol j}^n_\lambda |\boldsymbol\upnu_{\rho^n_\lambda})\ge \mathscr F_\Psi({\boldsymbol j}^n_\lambda |{\boldsymbol \varsigma}^n ),\qquad {\boldsymbol \varsigma}^n :=c(\boldsymbol \teta_\lambda+\boldsymbol \teta_{\rho^n,\lambda}^++\boldsymbol \teta_{\rho^n,\lambda}^-).
\end{equation}
For every $B\in \mathfrak A\otimes \mathfrak B$, $\mathfrak A$ being the Borel $\sigma$-algebra of $[a,b]$, with $\boldsymbol \teta_\lambda(B)>0$, Jensen's inequality \eqref{eq:79} yields
\begin{equation}
\label{eq:111}
\Psi\biggl(\frac{{\boldsymbol j}_\lambda^n(B)}{{\boldsymbol \varsigma}^n(B)}\biggr) {\boldsymbol \varsigma}^n(B)\le \mathscr F_\Psi({\boldsymbol j}_\lambda^n\mres B|{\boldsymbol \varsigma}^n\mres B)\le M.
\end{equation}
Denoting by $U:\mathbb{R}_+\to\mathbb{R}_+$ the inverse function of $\Psi$, we thus find
\begin{equation}
\label{eq:112}
{\boldsymbol j}_\lambda^n(B)\le {\boldsymbol \varsigma}^n(B)\,U\biggl(\frac{M}{{\boldsymbol \varsigma}^n(B)}\biggr).
\end{equation}
Since $\Psi$ is superlinear, $U$ is sublinear, so that
\begin{equation}
\label{eq:118}
\lim_{\delta\downarrow0}\delta U(M/\delta)=0.
\end{equation}
For every $\eps>0$ there exists $\delta_0>0$ such that $\delta U(M/\delta)\le \eps$ for every $\delta\in (0,\delta_0)$. Since the measures ${\boldsymbol \varsigma}^n$ are equi-absolutely continuous with respect to~$\boldsymbol \teta_\lambda$, we can also find $\delta_1>0$ such that $\boldsymbol \teta_\lambda (B)<\delta_1$ yields ${\boldsymbol \varsigma}^n(B)\le \delta_0$. By \eqref{eq:112} we conclude that ${\boldsymbol j}^n_\lambda(B)\le \eps$ whenever $\boldsymbol \teta_\lambda(B)<\delta_1$, for every $n\in\mathbb{N}$; this proves the claimed equi-absolute continuity.

It is then easy to pass to the limit in the integral formulation \eqref{2ndfundthm} of the continuity equation. Finally, concerning \eqref{ineq:lsc-R}, it is sufficient to use the equivalent representation given by \eqref{eq:96}.
\end{proof}

\subsection{Definition and properties of the cost}\label{sec:cost}
We now define the Dynamical-Variational Transport cost $\mathscr{W} : (0,{+\infty}) \times {\mathcal M}^+(V)\times {\mathcal M}^+(V) \to [0,{+\infty}]$ by
\begin{equation}
\label{def-psi-rig}
\DVT \tau{\rho_0}{\rho_1} : = \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t) \,\mathrm{d} t \, : \, (\rho,{\boldsymbol j} ) \in \CEP 0\tau{\rho_0}{\rho_1} \right\}\,.
\end{equation}
In studying the properties of $\mathscr{W}$, we will also often use the notation
\begin{equation}
\label{adm-curves}
\ADM 0\tau{\rho_0}{\rho_1}: = \biggl\{ (\rho,{\boldsymbol j} )\in\CER 0{\tau}\, : \ \rho(0)=\rho_0, \ \rho(\tau) = \rho_1 \biggr\}\,,
\end{equation}
with $\CER 0\tau$ the class from \eqref{def:Aab}. For given $\tau>0$ and $\rho_0,\, \rho_1 \in {\mathcal M}^+(V)$, if the set $\ADM 0\tau{\rho_0}{\rho_1}$ is non-empty, then it contains an exact minimizer for $\DVT {\tau}{\rho_0}{\rho_1}$. This is stated in Corollary~\ref{c:exist-minimizers} below, which is a direct consequence of Proposition~\ref{prop:compactness}.
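For orientation, we record how the cost looks in the quadratic case of Example \ref{ex:Dpm}(1); this is only an illustration and is not needed in the sequel. There $\Psi(r)=\Psi^*(r)=r^2/2$, so that $\mathscr R(\rho,\cdot)$ is $2$-homogeneous in the flux variable. Given $(\rho,{\boldsymbol j})\in\CEP 0\tau{\rho_0}{\rho_1}$, the reparametrized pair $\tilde\rho_s:=\rho_{s\tau}$, $\tilde{\boldsymbol j}_s:=\tau{\boldsymbol j}_{s\tau}$, $s\in[0,1]$, belongs to $\CEP 01{\rho_0}{\rho_1}$, and a change of variables gives
\[
\int_0^1\mathscr R(\tilde\rho_s,\tilde{\boldsymbol j}_s)\,\mathrm{d} s=\tau\int_0^\tau\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t,
\qquad\text{whence}\qquad
\mathscr{W}(\tau,\rho_0,\rho_1)=\frac1\tau\,\mathscr{W}(1,\rho_0,\rho_1).
\]
Writing, purely as a shorthand, $d(\rho_0,\rho_1)^2:=2\,\mathscr{W}(1,\rho_0,\rho_1)$, the cost thus takes the familiar form $d(\rho_0,\rho_1)^2/(2\tau)$ entering the classical Minimizing Movement scheme \eqref{MM-intro}; see Remark \ref{rem:scaling-invariance} below for the scaling properties of $\mathscr{W}$ in the general case.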
\begin{cor}[Existence of minimizers]
\label{c:exist-minimizers}
If $\rho_0,\rho_1\in {\mathcal M}^+(V)$ and $\ADM 0\tau{\rho_0}{\rho_1} $ is not empty, then the infimum in~\eqref{def-psi-rig} is achieved.
\end{cor}

\begin{remark}[Scaling invariance]
\label{rem:scaling-invariance}
Let us consider the perspective function $\hat \Psi(r,s)$ associated with $\Psi$ as in \eqref{eq:76}, $\hat \Psi(r,s)=s\Psi(r/s)$ if $s>0$. We denote by $\mathscr R_s(\rho,{\boldsymbol j})$ the dissipation functional induced by $\hat{\Psi}(\cdot, s)$, and by $\mathscr W_s$ the corresponding Dynamical-Variational Transport cost. For every $\tau,\sigma>0$ and $\rho_0,\rho_1\in {\mathcal M}^+(V)$ a rescaling argument yields
\begin{equation}
\label{eq:188}
\mathscr{W}(\tau,\rho_0, \rho_1) =\mathscr W_{\tau/\sigma}(\sigma,\rho_0,\rho_1) = \inf\left\{ \int_0^{\sigma} \mathscr R_{\tau/\sigma}(\rho_t,{\boldsymbol j}_t) \,\mathrm{d} t \, : \, (\rho,{\boldsymbol j} ) \in \CEP 0{\sigma}{\rho_0}{\rho_1} \right\}\,.
\end{equation}
In particular, choosing $\sigma=1$ we find
\begin{equation}
\label{eq:188bis}
\mathscr{W}(\tau,\rho_0, \rho_1) =\mathscr W_{\tau}(1,\rho_0,\rho_1).
\end{equation}
Since $\hat\Psi(\cdot,\tau)$ is convex, lower semicontinuous, and decreasing with respect to~$\tau$, we find that $\tau\mapsto \mathscr{W}(\tau,\rho_0, \rho_1) $ is decreasing and convex as well.
\end{remark}

\par
Currently, proving that \emph{any} pair of measures can be connected by a curve with finite action $\int \mathscr R$ under general conditions on $V$, $\Psi$ and $\upalpha$ is an open problem: in other words, in the general case we cannot exclude that $\ADM 0\tau{\rho_0}{\rho_1} = \emptyset$, which would make $\DVT {\tau}{\rho_0}{\rho_1} = {+\infty}$. Nonetheless, in a more specific situation, Proposition \ref{prop:sufficient-for-connectivity} below provides sufficient conditions for this connectivity property between two measures $\rho_0, \, \rho_1 \in {\mathcal M}^+(V) $ with the same mass and such that $\rho_i \ll \pi$ for $i\in \{0,1\}$. As a preliminary, we give the following definition.

\begin{definition}
Let $q\in (1,{+\infty})$. We say that the measures $(\pi,\boldsymbol \teta) $ satisfy a \emph{$q$-Poincar\'e inequality} if there exists a constant $C_P>0$ such that for every $\xi \in L^q(V;\pi)$ with $\int_{V}\xi(x)\, \pi(\mathrm{d} x) =0$ there holds
\begin{equation}
\label{q-Poinc}
\int_{V} |\xi(x)|^q\, \pi(\mathrm{d} x) \leq C_P \int_{E} |\dnabla \xi(x,y)|^q\, \boldsymbol \teta(\mathrm{d} x, \mathrm{d} y) .
\end{equation}
\end{definition}

We are now in a position to state the connectivity result, where we specialize the discussion to dissipation densities with $p$-growth for some $p \in (1,{+\infty})$.

\begin{prop}
\label{prop:sufficient-for-connectivity}
Suppose that
\begin{equation}
\label{psi-p-growth}
\exists\, p \in (1,{+\infty}), \, \overline{C}_p>0 \ \ \forall\,r \in \mathbb{R} \, : \qquad \Psi(r) \leq \overline{C}_p(1{+}|r|^p),
\end{equation}
and that the measures $(\pi,\boldsymbol \teta) $ satisfy a $q$-Poincar\'e inequality for $q=\tfrac p{p-1}$. Let $\rho_0, \rho_1 \in {\mathcal M}^+(V) $ with the same mass be given by $\rho_i = u_i \pi$, with positive $u_i \in L^1(V; \pi) \cap L^\infty (V; \pi) $, for $i \in \{0,1\}$. Then, for every $\tau>0$ the set $\ADM 0\tau{\rho_0}{\rho_1} $ is non-empty and thus $\DVT \tau{\rho_0}{\rho_1}<\infty$.
\end{prop}

We postpone the proof of Proposition \ref{prop:sufficient-for-connectivity} to Appendix \ref{s:app-2}, where some preliminary results, also motivating the role of the $q$-Poincar\'e inequality, will be provided.

\subsection{Abstract-level properties of \texorpdfstring{$\mathscr{W}$}W}
\label{ss:4.5}
The main result of this section collects a series of properties of the cost $\mathscr{W}$ that will play a key role in the study of the \emph{Minimizing Movement} scheme \eqref{MM-intro}. Indeed, as already hinted in the Introduction, the analysis that we will carry out in Section \ref{s:MM} ahead might well be extended to a scheme set up in a general topological space, endowed with a cost functional enjoying the properties \eqref{assW} below. We will now check them for the cost $\mathscr{W}$ associated with the generalized gradient structure $(\mathscr E,\mathscr R,\mathscr R^*)$ fulfilling \textbf{Assumptions~\ref{ass:V-and-kappa} and \ref{ass:Psi}}. In this section \emph{all convergences will be with respect to the setwise topology}.

\begin{theorem}
\label{thm:props-cost}
The cost $\mathscr{W}$ enjoys the following properties:
\begin{subequations}\label{assW}
\begin{enumerate}[label={(\arabic*)}]
\item \label{tpc:1} For all $\tau>0,\, \rho_0,\, \rho_1 \in {\mathcal M}^+(V)$,
\begin{equation}\label{e:psi2}
\mathscr{W}(\tau,\rho_0,\rho_1)= 0 \ \Leftrightarrow \ \rho_0=\rho_1.
\end{equation}
\item \label{tpc:2} For all $\rho_1,\, \rho_2,\,\rho_3\in{\mathcal M}^+(V)$ and $\tau_1, \tau_2 \in (0,{+\infty})$ with $\tau=\tau_1 +\tau_2$,
\begin{equation}\label{e:psi3}
\mathscr{W}(\tau,\rho_1,\rho_3) \leq \mathscr{W}(\tau_1,\rho_1,\rho_2) + \mathscr{W}(\tau_2,\rho_2,\rho_3).
\end{equation}
\item \label{tpc:3} For $\tau_n\to \tau>0, \ \rho_0^n \to \rho_0, \ \rho_1^n \to \rho_1$ in ${\mathcal M}^+(V)$,
\begin{equation}\label{lower-semicont}
\liminf_{n \to {+\infty}} \mathscr{W}(\tau_n,\rho_0^n, \rho_1^n) \geq \mathscr{W}(\tau,\rho_0,\rho_1).
\end{equation}
\item \label{tpc:4} For all $\tau_n \downarrow 0$ and for all $(\rho_n)_n$, $ \rho \in {\mathcal M}^+(V)$,
\begin{equation}\label{e:psi4}
\sup_{n\in\mathbb{N}} \mathscr{W}(\tau_n, \rho_n,\rho) <{+\infty} \quad \Rightarrow \quad \rho_n \to \rho.
\end{equation}
\item \label{tpc:5} For all $\tau_n \downarrow 0$ and all $(\rho_n)_n$, $ (\nu_n)_n \subset {\mathcal M}^+(V)$ with $\rho_n \to \rho, \ \nu_n \to\nu$,
\begin{equation}\label{e:psi6}
\limsup_{n\to\infty} \mathscr{W}(\tau_n, \rho_n,\nu_n) <{+\infty} \quad \Rightarrow \quad \rho = \nu.
\end{equation}
\end{enumerate}
\end{subequations}
\end{theorem}

\begin{proof}
\textit{\ref{tpc:1}} Since $\Psi(s)$ is strictly positive for $s\neq0$, it is immediate to check that $\mathscr R(\rho,{\boldsymbol j})=0\ \Rightarrow\ {\boldsymbol j}=0$. Hence, for an optimal pair $(\rho,{\boldsymbol j} ) $ satisfying $ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t =0 $ we deduce that ${\boldsymbol j}_t=0$ for a.e.~$t\in (0,\tau)$, and the continuity equation then implies $\rho_0=\rho_1$. The converse implication follows by taking the constant curve $\rho_t\equiv\rho_0$ with ${\boldsymbol j}\equiv0$.

\textit{\ref{tpc:2}} This can easily be checked by using the existence of minimizers for $\mathscr{W}(\tau,\rho_0, \rho_1)$, cf.\ Corollary \ref{c:exist-minimizers}, combined with the concatenation of curves (cf.\ Lemma \ref{l:concatenation&rescaling}).

\textit{\ref{tpc:3}} Assume without loss of generality that $\liminf_{n\to+\infty} \mathscr{W}(\tau_n,\rho_0^n, \rho_1^n) < \infty$.
By \eqref{eq:188bis} we have, for every $n\in \mathbb{N}$ and setting $\overline \tau := \sup_n \tau_n$,
\[
\mathscr{W}(\tau_n,\rho_0^n, \rho_1^n) = \mathscr{W}_{\tau_n} (1,\rho_0^n, \rho_1^n) \stackrel{(*)}= \int_0^{1} \mathscr R_{\tau_n}(\rho_t^n,{\boldsymbol j}_t^n) \,\mathrm{d} t \,\geq\, \int_0^1 \mathscr R_{\overline\tau}(\rho_t^n,{\boldsymbol j}_t^n)\,\mathrm{d} t,
\]
where the identity $(*)$ holds for an optimal pair $(\rho^n,{\boldsymbol j}^n) \in \CEP{0}{1}{\rho_0^n}{\rho_1^n}$, and the last inequality follows from the monotonicity of $s\mapsto\hat\Psi(\cdot,s)$, since $\tau_n\le\overline\tau$. Applying Proposition~\ref{prop:compactness} (with $\mathscr R_{\overline\tau}$ in place of $\mathscr R$), we obtain the existence of $(\rho, {\boldsymbol j} ) \in \CEP 01{\rho_0}{\rho_1}$ such that, up to a subsequence,
\begin{equation}
\label{tilde-rho-j-n}
\begin{aligned}
&{\rho}_s^n \to {\rho}_s \text{ setwise in } {\mathcal M}^+(V) \quad \text{for all } s \in [0,1]\,,\\
&{{\boldsymbol j}}^n \to {{\boldsymbol j}} \text{ setwise in } {\mathcal M}([0,1]{\times}E)\,.
\end{aligned}
\end{equation}
Arguing as in Proposition~\ref{prop:compactness} and using the joint lower semicontinuity of $\hat \Psi$, we find that
\[
\liminf_{n\to\infty} \int_0^{1} \mathscr R_{\tau_n}\left( {\rho}_s^n , {{\boldsymbol j}}_s^n \right) \mathrm{d} s \geq \int_0^{1} \mathscr R_\tau\left( {\rho}_s , {{\boldsymbol j}}_s \right) \mathrm{d} s \ge \mathscr W_\tau(1,\rho_0,\rho_1)= \mathscr{W}(\tau,\rho_0,\rho_1).
\]

\textit{\ref{tpc:4}} If we denote by $\mathscr R_0$ the dissipation associated with $\hat\Psi(\cdot,0)$, given by $\hat\Psi(w,0) = {+\infty}$ for $w\not=0$ and $\hat\Psi(0,0)=0$, we find
\begin{equation}
\label{eq:189}
\mathscr R_0(\rho,{\boldsymbol j})<{+\infty}\quad\Rightarrow\quad {\boldsymbol j}=0.
\end{equation}
By the same argument as for part~\ref{tpc:3}, every subsequence of $(\rho_n)_n$ has a further subsequence converging in the setwise topology; the lower semicontinuity result in the proof of part~\ref{tpc:3} shows that any limit point must coincide with $\rho$.

\textit{\ref{tpc:5}} The argument combines \eqref{eq:189} and part~\ref{tpc:3}.
\end{proof}

\subsection{The action functional \texorpdfstring{$\mathbb W$}W and its properties}
The construction of $\mathscr R$ and $\mathscr{W}$ above proceeded in the order $\mathscr R \rightsquigarrow \mathscr{W}$: we first constructed $\mathscr R$, and then $\mathscr{W}$ was defined in terms of~$\mathscr R$. It is a natural question whether one can invert this construction: given $\mathscr{W}$, can one reconstruct~$\mathscr R$, or at least integrals of the form $\int_a^b \mathscr R\,\mathrm{d} t$? The answer is positive, as we show in this section.

Given a functional $\mathscr{W}$ satisfying the properties~\eqref{assW}, we define the `$\mathscr{W}$-action' of a curve $\rho:[a,b]\to{\mathcal M}^+(V)$ as
\begin{equation}
\label{def-tot-var}
\VarW \rho ab: = \sup \left \{ \sum_{j=1}^M \DVT{t^j - t^{j-1}}{\rho(t^{j-1})}{\rho(t^j)} \, : \ (t^j)_{j=0}^M \in \mathfrak{P}_f([a,b]) \right\} ,
\end{equation}
for all $[a,b]\subset [0,T]$, where $\mathfrak{P}_f([a,b])$ denotes the set of all finite partitions of the interval $[a,b]$. If $\mathscr{W}$ is defined by~\eqref{def-psi-rig}, then each term in the sum above is defined as an optimal version of $\int_{t^{j-1}}^{t^j} \mathscr R(\rho_t,\cdot)\,\mathrm{d} t$, and we might expect that $\VarW \rho ab$ is an optimal version of $\int_a^b \mathscr R(\rho_t,\cdot)\,\mathrm{d} t$.
This is indeed the case, as is illustrated by the following analogue of~\cite[Th.~5.17]{DolbeaultNazaretSavare09}:

\begin{prop}
\label{t:R=R}
Let $\mathscr{W}$ be given by~\eqref{def-psi-rig}, and let $\rho:[0,T]\to {\mathcal M}^+(V)$. Then $\VarW \rho 0T<{+\infty}$ if and only if there exists a measurable map ${\boldsymbol j} :[0,T]\to {\mathcal M}(E)$ such that $(\rho,{\boldsymbol j} )\in\CE 0T$ with $\int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t<{+\infty}$. In that case,
\begin{equation}
\label{calR-leq-VarW}
\VarW \rho0T\leq \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t,
\end{equation}
and there exists a unique ${\boldsymbol j}_{\rm opt}$ for which equality is achieved. The optimal ${\boldsymbol j}_{\rm opt}$ is skew-symmetric, i.e.\ ${\boldsymbol j}_{\rm opt}= \tj_{\rm opt}$ (cf.~Remark~\ref{rem:skew-symmetric}).
\end{prop}

Prior to proving Proposition \ref{t:R=R}, we establish the following approximation result.

\begin{lemma}
\label{l:convergence-of-interpolations}
Let $\rho:[0,T]\to {\mathcal M}^+(V)$ satisfy $\VarW {\rho}0T<{+\infty}$. For a sequence of partitions $P_n =(t_n^j)_{j=0}^{M_n}\in \mathfrak{P}_f([0,T])$ with fineness $\tau_n = \max_{j=1,\ldots, M_n} (t_n^j{-}t_n^{j-1})$ converging to zero, let $\rho^n :[0,T]\to {\mathcal M}^+(V)$ satisfy
\[
\rho^n(t_n^j) = \rho(t_n^j)\quad\text{for all } j =1,\ldots, M_n \qquad\text{and}\qquad \sup\nolimits_{n\in\mathbb{N}} \VarW {\rho^n}0T < {+\infty}.
\]
Then $\rho^n(t) \to \rho(t)$ setwise for all $t\in[0,T]$ as $n\to\infty$.
\end{lemma}

\begin{proof}
First of all, observe that, by the symmetry of $\Psi$, the time-reversed curve $\check \rho(t):= \rho(T-t)$ also satisfies $\VarW {\check{\rho}}0T<{+\infty}$. Let $\piecewiseConstant {\mathsf{t}}{n}$ and $\underpiecewiseConstant {\mathsf{t}}{n}$ be the piecewise constant interpolants associated with the partitions $P_n$, cf.\ \eqref{nodes-interpolants}. Fix $t\in [0,T]$; we estimate
\begin{align*}
\mathscr{W}\bigl(2(\piecewiseConstant\mathsf{t} n-t),\rho^n(t),\rho(t)\bigr) &\stackrel{(1)}{\leq} \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\rho^n(t),\rho^n(\piecewiseConstant {\mathsf{t}}{n}(t))\bigr) + \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\rho(\piecewiseConstant {\mathsf{t}}{n}(t)),\rho(t)\bigr) \\
&= \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\rho^n(t),\rho^n(\piecewiseConstant {\mathsf{t}}{n}(t))\bigr) + \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\check \rho(T-\piecewiseConstant {\mathsf{t}}{n}(t)),\check \rho(T-t)\bigr)\\
&\leq \VarW {\rho^n}t{\piecewiseConstant {\mathsf{t}}{n}(t)} + \VarW{\check \rho}{T-\piecewiseConstant {\mathsf{t}}{n}(t)}{T-t}\\
&\leq \sup_{n\in\mathbb{N}} \VarW {\rho^n} 0T + \VarW{\check \rho}0T =: C<{+\infty},
\end{align*}
where (1) follows from property \eqref{e:psi3} of $\mathscr{W}$. Consequently, since $2(\piecewiseConstant{\mathsf{t}}{n}(t)-t)\to0$ as $n\to\infty$, property~\eqref{e:psi4} yields that $\rho^n(t) \to \rho(t)$ setwise in ${\mathcal M}^+(V)$ for all $t\in[0,T]$.
\end{proof}

We are now in a position to prove Proposition~\ref{t:R=R}:

\begin{proof}[Proof of Proposition~\ref{t:R=R}]
One implication is straightforward: if a pair $(\rho,{\boldsymbol j} )$ exists, then
\[
\mathscr{W}(t-s,\rho_s,\rho_t) \stackrel{\eqref{def-psi-rig}}\leq \int_s^t \mathscr R(\rho_r,{\boldsymbol j}_r)\,\mathrm{d} r \qquad \text{for all }0\leq s<t\leq T,
\]
and therefore $\VarW \rho0T<{+\infty}$ and \eqref{calR-leq-VarW} holds.

To prove the other implication, assume that $\VarW \rho0T<{+\infty}$. Choose a sequence of partitions $P_n =(t_n^j)_{j=0}^{M_n}\in \mathfrak{P}_f([0,T])$ that becomes dense in the limit $n\to\infty$. For each $n\in\mathbb{N}$, construct a pair $(\rho^n,{\boldsymbol j}^n)\in \CE 0T$ as follows: on each time interval $[t_n^{j-1},t_n^j]$, let $(\rho^n,{\boldsymbol j}^n)$ be given by Corollary~\ref{c:exist-minimizers} as the minimizer under the constraints $\rho^n(t_n^{j-1}) = \rho(t_n^{j-1})$ and $\rho^n(t_n^j) = \rho(t_n^j)$, namely
\begin{equation}
\label{minimizer-cost}
\DVT{t_n^{j}{-}t_n^{j-1}}{\rho(t_n^{j-1})}{ \rho(t_n^{j})} = \int_{t_n^{j-1}}^{t_n^{j}}\mathscr R(\rho_r^n,{\boldsymbol j}_r^n) \,\mathrm{d} r\,.
\end{equation}
By concatenating the minimizers on each of the intervals, a pair $(\rho^n,{\boldsymbol j}^n)\in \CE0T$ is obtained, thanks to Lemma \ref{l:concatenation&rescaling}. By construction we have the property
\begin{align}
\label{eq:VarW-rhon-calR}
&\VarW{\rho^n}0T = \int_0^T \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\,\mathrm{d} t.
\end{align}
Also by optimality we have
\[
\VarW{\rho^n}{t_n^{j-1}}{t_n^j} = \mathscr{W}\bigl(t_n^j-t_n^{j-1},\rho(t_n^{j-1}),\rho(t_n^j)\bigr) \leq \VarW\rho{t_n^{j-1}}{t_n^j},
\]
which implies, by summing over $j$, that
\begin{equation}
\label{ineq:VarW-rhon-rho}
\VarW{\rho^n}0T\leq \VarW\rho0T.
\end{equation}
By Lemma~\ref{l:convergence-of-interpolations} we then find that $\rho^n(t)\to \rho(t)$ setwise as $n\to\infty$ for each $t\in[0,T]$. Applying Proposition~\ref{prop:compactness}, we find that ${\boldsymbol j}^n(\mathrm{d} t\,\mathrm{d} x\,\mathrm{d} y):= {\boldsymbol j}_t^n(\mathrm{d} x\,\mathrm{d} y)\,\mathrm{d} t$ converges setwise along a subsequence to a limit ${\boldsymbol j}$. The limit $ {\boldsymbol j}$ can be disintegrated as ${\boldsymbol j}(\mathrm{d} t\,\mathrm{d} x\,\mathrm{d} y) = \lambda(\mathrm{d} t) \, {\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y) $ for a measurable family $({\boldsymbol j}_t)_{t\in[0,T]}$, and the pair $(\rho,{\boldsymbol j})$ is an element of $\CE0T$. In addition we have the lower-semicontinuity property
\begin{equation}
\label{ineq:lsc:j-tilde-j}
\liminf_{n\to\infty} \int_0^T \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\,\mathrm{d} t \geq \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t.
\end{equation}
We then have the series of inequalities
\begin{align*}
\VarW\rho 0T &\stackrel{\eqref{ineq:VarW-rhon-rho}} \geq \limsup_{n\to\infty} \VarW{\rho^n}0T \stackrel{\eqref{eq:VarW-rhon-calR}} = \limsup_{n\to\infty} \int_0^T \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\,\mathrm{d} t\\
& \stackrel{\eqref{ineq:lsc:j-tilde-j}}{\geq} \int_0^T \mathscr R(\rho_t, {\boldsymbol j}_t)\,\mathrm{d} t \stackrel{\eqref{calR-leq-VarW}}{\geq} \VarW \rho0T,
\end{align*}
which implies that $\int_0^T \mathscr R(\rho_t, {\boldsymbol j}_t)\,\mathrm{d} t = \VarW \rho0T$.
Finally, the uniqueness of ${\boldsymbol j}$ is a consequence of the strict convexity of $\Upsilon(u_1,u_2,\cdot)$, cf.\ Lemma~\ref{lem:Upsilon-properties}. Similarly, the skew-symmetry of ${\boldsymbol j}$ follows from the strict convexity of $\Upsilon(u_1,u_2,\cdot)$, the symmetry of $\Upsilon(\cdot,\cdot,w)$, and the invariance of the continuity equation~\eqref{eq:ct-eq-def} under the `skew-symmetrization' ${\boldsymbol j} \mapsto \tj$, cf.\ Remark \ref{rem:skew-symmetric}.
\end{proof}

\section{The Fisher information \texorpdfstring{$\mathscr{D} $}{D} and the definition of solutions}
\label{s:Fisherinformation}
With the definitions and the properties that we established in the previous section we have given a rigorous meaning to the first term in the functional $\mathscr L$ in~\eqref{eq:def:mathscr-L}. In this section we continue with the second term in the integral, often called the \emph{Fisher information}, after the canonical version in diffusion problems~\cite{Otto01}. Section \ref{ss:Fisher} is devoted to
\begin{enumerate}[label=(\alph*)]
\item a rigorous definition of the Fisher-information functional $\mathscr{D}(\rho)$ (Definition~\ref{def:Fisher-information}).
\end{enumerate}
In several practical settings, such as the proof of existence that we give in Section~\ref{s:MM}, it is important to have lower semicontinuity of $\mathscr{D}$: this is proved in Proposition \ref{PROP:lsc}.
\par
We are then in a position to give
\begin{enumerate}[resume,label=(\alph*)]
\item a rigorous definition of solutions to the $(\mathscr E, \mathscr R, \mathscr R^*)$ system (Definition~\ref{def:R-Rstar-balance}).
\end{enumerate}
In Section~\ref{ss:def-sol-intro} we explained that the Energy-Dissipation balance approach to defining solutions is based on the fact that $\mathscr L(\rho,{\boldsymbol j} ) \geq 0$ for all $(\rho,{\boldsymbol j} )$, by the validity of a suitable chain-rule inequality.
\begin{enumerate}[resume,label=(\alph*)]
\item A rigorous proof of this chain-rule inequality, involving $\mathscr R$ and $\mathscr{D}$, is given in Corollary~\ref{cor:CH3}, which is based on Theorem~\ref{th:chain-rule-bound}.
\end{enumerate}
This establishes the inequality $\mathscr L(\rho,{\boldsymbol j} ) \geq 0$. Hence, we can rigorously deduce that the opposite inequality $\mathscr L(\rho,{\boldsymbol j} ) \leq 0$ characterizes the property that $(\rho,{\boldsymbol j} )$ is a solution to the $(\mathscr E, \mathscr R, \mathscr R^*)$ system. Theorem \ref{thm:characterization} provides an additional characterization of this solution concept.
\par
Finally, in Sections~\ref{subsec:main-properties} and \ref{ss:5.4},
\begin{enumerate}[resume,label=(\alph*)]
\item we prove existence, uniqueness and stability of solutions under suitable convexity/lower-semicontinuity conditions on $\mathscr{D}$ (Theorems \ref{thm:existence-stability} and \ref{thm:uniqueness}), and we discuss their asymptotic behaviour and the role of invariant measures.
\end{enumerate}
Throughout this section we adopt \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}}.
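Before giving the precise definitions, we record, purely as an illustrative sanity check, the elementary identity behind the cosh-case formulas of Example \ref{ex:Dpm} (and of Example \ref{ex:D} below): for $u,v>0$,
\[
4\Bigl(\cosh\Bigl(\tfrac12\log\tfrac uv\Bigr)-1\Bigr)\sqrt{uv}
=4\,\frac{u+v}{2\sqrt{uv}}\,\sqrt{uv}-4\sqrt{uv}
=2\bigl(\sqrt u-\sqrt v\bigr)^2 .
\]
Here $\log u-\log v$ plays the role of the argument ${\mathrm A}_\upphi(u,v)$ of $\Psi^*$ (up to a sign, which is immaterial since $\Psi^*$ is even), and $\sqrt{uv}=\upalpha(u,v)$; the Fisher information introduced below is, up to the factor $\tfrac12$ in \eqref{eq:def:D}, the integral of this quantity against $\boldsymbol \teta$ whenever the density is strictly positive.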
\subsection{The Fisher information \texorpdfstring{$\mathscr{D} $}D}
\label{ss:Fisher}
Formally, the Fisher information is the second term in~\eqref{eq:def:mathscr-L}, namely
\[
\mathscr{D}(\rho) = \mathscr R^*\Bigl(\rho,-\dnabla \upphi'(u)\Bigr) = \frac12 \iint_E \Psi^*\bigl( -(\upphi'(u(y))-\upphi'(u(x)))\bigr)\, \boldsymbol\upnu_\rho(\mathrm{d} x \, \mathrm{d} y),\qquad \rho = u\pi\, .
\]
In order to give a precise meaning to this formulation when $\upphi$ is not differentiable at $0$ (as, for instance, in the case of the Boltzmann entropy function~\eqref{logarithmic-entropy}), we use the function ${\mathrm D}_\upphi$ defined in \eqref{eq:182}.

\begin{definition}[The Fisher-information functional $\mathscr{D} $]
\label{def:Fisher-information}
The Fisher information $\mathscr{D}: \mathrm{D}(\mathscr E)\to [0,{+\infty}]$ is defined as
\begin{equation}
\label{eq:def:D}
\mathscr{D} (\rho) := \frac12\iint_E {\mathrm D}_\upphi\bigl(u(x),u(y)\bigr)\, \boldsymbol \teta(\mathrm{d} x\,\mathrm{d} y) \qquad \text{for } \rho = u\pi\,.
\end{equation}
\end{definition}

\begin{example}[The Fisher information in the quadratic and in the $\cosh$ case]
\label{ex:D}
For illustration we recall the two expressions for ${\mathrm D}_\upphi$ from Example~\ref{ex:Dpm} for the linear equation~\eqref{eq:fokker-planck} with quadratic and cosh-type potentials $\Psi^*$:
\begin{enumerate}
\item If $\Psi^*(s)=s^2/2$, then
\[
{\mathrm D}_\upphi(u,v) = \begin{cases} \frac{1}{2}(\log(u)-\log(v))(u-v) & \text{if } u,\, v>0, \\ 0 & \text{if } u=v=0, \\ {+\infty} & \text{if } u=0 \text{ and } v \neq 0, \text{ or vice versa}. \end{cases}
\]
\item If $\Psi^*(s)=4\bigl(\cosh(s/2)-1\bigr)$, then
\[
{\mathrm D}_\upphi(u,v) = 2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2\qquad \forall\, (u,v) \in [0,{+\infty}) \times [0,{+\infty}).
\]
\end{enumerate}
These two examples of ${\mathrm D}_\upphi$ are convex.
\end{example}

Let us discuss the lower-semicontinuity properties of $\mathscr{D}$. In accordance with the Minimizing-Movement approach carried out in Section \ref{ss:MM}, we will only be interested in lower semicontinuity of $\mathscr{D}$ along sequences with bounded energy $\mathscr E$. Since sublevels of the energy~$\mathscr E$ are relatively compact with respect to setwise convergence (by part~\ref{cond:setwise-compactness-superlinear} of Theorem~\ref{thm:L1-weak-compactness}), there is no difference between narrow and setwise lower semicontinuity of $\mathscr{D}$.

\begin{prop}[\textbf{Lower semicontinuity of $\mathscr{D} $}]
\label{PROP:lsc}
\upshape Assume either that $\pi$ is purely atomic or that the function ${\mathrm D}_\upphi$ is convex on $\mathbb{R}_+^2$. Then $\mathscr{D}$ is (sequentially) lower semicontinuous with respect to setwise convergence, i.e.\ for all $(\rho^n)_n,\, \rho \in \mathrm{D}(\mathscr E) $
\begin{equation}
\label{lscD}
\rho^n \to \rho \text{ setwise in } {\mathcal M}^+(V)\quad \Longrightarrow \quad \mathscr{D}(\rho) \leq \liminf_{n\to\infty} \mathscr{D}(\rho^n)\,.
\end{equation}
\end{prop}

\begin{proof}
When $\pi$ is purely atomic, setwise convergence implies pointwise convergence $\pi$-a.e.~of the densities, so that \eqref{lscD} follows by Fatou's Lemma.

A standard argument, still based on Fatou's Lemma, shows that the functional
\begin{equation}
\label{eq:98}
u\mapsto \iint_E {\mathrm D}_\upphi(u(x),u(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)
\end{equation}
is lower semicontinuous with respect to~the strong topology of $L^1(V,\pi)$: it is sufficient to check that $u_n\to u$ in $L^1(V,\pi)$ implies $(u_n^-,u_n^+)\to (u^-,u^+)$ in $L^1(E,\boldsymbol \teta)$. If ${\mathrm D}_\upphi$ is convex on $\mathbb{R}_+^2$, then the functional \eqref{eq:98} is also lower semicontinuous with respect to the weak topology of $L^1(V,\pi)$. On the other hand, since $\rho_n$ and $\rho$ are absolutely continuous with respect to~$\pi$, $\rho_n\to\rho$ setwise if and only if $\mathrm{d}\rho_n/\mathrm{d}\pi\rightharpoonup \mathrm{d}\rho/\mathrm{d}\pi$ weakly in $L^1(V,\pi)$ (see Theorem~\ref{thm:equivalence-weak-compactness}).
\end{proof}

\subsection{The definition of solutions: \texorpdfstring{$\mathscr R/\mathscr R^*$}{R/R*} Energy-Dissipation balance}
\label{ss:5.2}
We are now in a position to formalize the concept of solution.

\begin{definition}[$(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance]
\label{def:R-Rstar-balance}
We say that a curve $\rho: [0,T] \to {\mathcal M}^+(V)$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system if it satisfies the {\em $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance}:
\begin{enumerate}
\item $\mathscr E(\rho_0)<{+\infty}$;
\item there exists a measurable family $({\boldsymbol j}_t)_{t\in [0,T]} \subset {\mathcal M}(E)$ such that $(\rho,{\boldsymbol j})\in \CE0T$ and
\begin{equation}
\label{R-Rstar-balance}
\int_s^t \left( \mathscr R(\rho_r, {\boldsymbol j}_r) + \mathscr{D}(\rho_r) \right) \mathrm{d} r+ \mathscr E(\rho_t) = \mathscr E(\rho_s) \qquad \text{for all } 0 \leq s \leq t \leq T.
\end{equation}
\end{enumerate}
\end{definition}

\begin{remark}\label{rem:properties}
\begin{enumerate}
\item Since $(\rho,{\boldsymbol j})\in \CE 0T$, the curve $\rho$ is absolutely continuous with respect to the total variation distance.
\item The Energy-Dissipation balance \eqref{R-Rstar-balance}, written for $s=0$ and $t=T$, implies that $(\rho,{\boldsymbol j})\in \CER 0T$ as well. Moreover, $t\mapsto \mathscr E(\rho_t)$ takes finite values and is absolutely continuous on the interval $[0,T]$.
\item The chain-rule estimate \eqref{eq:CR2} implies the following important corollary:
\begin{cor}[Chain-rule estimate III]
\label{cor:CH3}
For any curve $(\rho,{\boldsymbol j})\in \CE 0T$,
\begin{equation}
\label{eq:CR3}
\mathscr L_T(\rho,{\boldsymbol j}):= \int_0^T \left( \mathscr R(\rho_r, {\boldsymbol j}_r) + \mathscr{D}(\rho_r) \right) \mathrm{d} r+ \mathscr E(\rho_T) -\mathscr E(\rho_0)\geq 0 .
\end{equation}
\end{cor}
\noindent It follows that the Energy-Dissipation balance \eqref{R-Rstar-balance} is equivalent to the Energy-Dissipation inequality
\begin{equation}
\label{EDineq}
\mathscr L_T(\rho,{\boldsymbol j})\leq 0.
\end{equation}
\end{enumerate}
\end{remark}

Let us now give an equivalent characterization of solutions to the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system. Recalling the definition~\eqref{eq:184} of the map ${\mathrm F}$ in the interior of $\mathbb{R}_+^2$ and the definition~\eqref{eq:102} of ${\mathrm A}_\upphi$, we first note that ${\mathrm F}$ can be extended to a function ${\mathrm F}_0$ defined on $\mathbb{R}_+^2$, with values in the extended real line $[-\infty,+\infty]$, by
\begin{equation}
\label{eq:101}
{\mathrm F}_0(u,v):= \begin{cases} \bigl(\Psi^*\bigr)'\bigl({\mathrm A}_\upphi(u,v)\bigr)\upalpha(u,v)&\text{if }\upalpha(u,v)>0,\\ 0&\text{if }\upalpha(u,v)=0, \end{cases}
\end{equation}
where we set $(\Psi^*)'(\pm\infty):=\pm\infty$. The function ${\mathrm F}_0$ is skew-symmetric.

\begin{theorem}
\label{thm:characterization}
A curve $(\rho_t)_{t\in [0,T]}$ in ${\mathcal M}^+(V)$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system if and only if
\begin{enumerate}[label=(\arabic*)]
\item\label{thm:characterization-first} $\rho_t=u_t\pi\ll\pi$ for every $t\in [0,T]$, and $t\mapsto u_t$ is an absolutely continuous, a.e.~differentiable map with values in $L^1(V,\pi)$;
\item $\mathscr E(\rho_0)<{+\infty}$;
\item \label{thm:characterization-finite-F0} we have
\begin{equation}
\int_0^T \iint_E |{\mathrm F}_0(u_t(x),u_t(y))|\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t<{+\infty}\label{eq:190}
\end{equation}
and
\begin{equation}
\label{eq:191}
{\mathrm D}_\upphi(u_t(x),u_t(y))={\mathrm D}^-_\upphi(u_t(x),u_t(y))\quad \text{for $\lambda\otimes\boldsymbol \teta$-a.e.~$(t,x,y)\in [0,T]\times E$}.
\end{equation}
In particular, the complement $U'$ of the set
\begin{equation}
U:=\{(t,x,y)\in [0,T]\times E: {\mathrm F}_0(u_t(x),u_t(y))\in \mathbb{R}\}\label{eq:192}
\end{equation}
is $(\lambda \otimes \boldsymbol \teta)$-negligible and ${\mathrm F}_0$ takes finite values $(\lambda\otimes\boldsymbol \teta)$-a.e.~in $[0,T]\times E$;
\item\label{thm:characterization-last} setting
\begin{equation}
\label{eq:183}
2{\boldsymbol j}_t(\mathrm{d} x,\mathrm{d} y)=-{\mathrm F}_0(u_t(x),u_t(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y),
\end{equation}
we have $(\rho,{\boldsymbol j})\in \CE 0T$. In particular,
\begin{equation}
\label{eq:179}
\dot u_t(x)=\int_V {\mathrm F}_0(u_t(x),u_t(y))\,\kappa(x,\mathrm{d} y) \quad\text{for $(\lambda \otimes \pi)$-a.e.~}(t,x)\in [0,T]\times V.
\end{equation}
\end{enumerate}
\end{theorem}

\begin{proof}
Let $\rho_t=u_t\pi$ be a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system with the corresponding flux ${\boldsymbol j}_t$. By Corollary \ref{cor:propagation-AC} we can find a skew-symmetric measurable map $\xi:(0,T)\times E\to \mathbb{R}$ such that ${\boldsymbol j}_\lambda=\xi\,\upalpha(u^-,u^+)\,\lambda\otimes\boldsymbol \teta$ and \eqref{eq:42}, \eqref{eq:45} hold. Taking into account that ${\mathrm D}^-_\upphi\le {\mathrm D}_\upphi$ and applying the equality case of Corollary \ref{th:chain-rule-bound2}, we complete the proof of one implication.
Suppose now that $\rho_t$ satisfies all the above conditions \ref{thm:characterization-first}--\ref{thm:characterization-last}; we want to apply formula \eqref{eq:CR} of Theorem \ref{th:chain-rule-bound} with $\upbeta=\upphi$. To this end we write the shorthand $u^-,u^+$ for $u_t(x),u_t(y)$ and set $w=-{\mathrm F}_0(u^-,u^+)$. We verify the equality conditions~\eqref{eq:109} of Lemma~\ref{le:trivial-but-useful}:
\begin{itemize}
\item At $(t,x,y)$ where $\upalpha(u^-,u^+) = 0$, we have by definition $w = -{\mathrm F}_0(u^-,u^+)=0$;
\item At $(\lambda\otimes{\boldsymbol\vartheta})$-a.e.\ $(t,x,y)$ where $\upalpha(u^-,u^+) >0$, ${\mathrm F}_0(u^-,u^+)$ is finite by condition~\ref{thm:characterization-finite-F0}, and by~\eqref{eq:101} it follows that $(\Psi^*)'\bigl({\mathrm A}_\upphi(u^-,u^+)\bigr)$ is finite and therefore ${\mathrm A}_\upphi(u^-,u^+)$ is finite. The final condition $-w=(\Psi^*)'\big({\mathrm A}_\upphi(u^-,u^+)\big)\upalpha(u^-,u^+)$ then follows by the definition of $w$.
\end{itemize}
By Lemma~\ref{le:trivial-but-useful} we therefore have, at $(\lambda\otimes{\boldsymbol\vartheta})$-a.e.\ $(t,x,y)$,
\begin{align*}
-{\mathrm B}_\upphi(u^-,u^+,w) = \Upsilon(u^-,u^+,-w)+{\mathrm D}_\upphi^-(u^-,u^+) \stackrel{\eqref{eq:191}}= \Upsilon(u^-,u^+,-w)+{\mathrm D}_\upphi(u^-,u^+).
\end{align*}
In particular ${\mathrm B}_\upphi$ is nonpositive, and the integrability condition \eqref{ass:th:CR} is trivially satisfied. Integrating~\eqref{eq:CR} in time, we find~\eqref{R-Rstar-balance}.
\end{proof}

\begin{remark}
\label{rmk:why-interesting-1}
\upshape By Theorem \ref{thm:characterization}(3), along a solution $\rho_t = u_t \pi$ of the $(\mathscr E, \mathscr R, \mathscr R^*)$ system the functions $ {\mathrm D}_\upphi$ and ${\mathrm D}^-_\upphi$ coincide. Recall that, in general, we only have $ {\mathrm D}_\upphi^- \leq {\mathrm D}_\upphi$, and the inequality can be strict, as in the examples of the linear equation \eqref{eq:fokker-planck} with the Boltzmann entropy and the quadratic and $\cosh$ dissipation potentials discussed in Example \ref{ex:Dpm}. There, $ {\mathrm D}_\upphi$ and ${\mathrm D}^-_\upphi$ differ on the boundary of $\mathbb{R}_+^2$. Therefore, \eqref{eq:191} encodes the information that the pair $(u_t(x),u_t(y))$ stays in the interior of $\mathbb{R}_+^2$ $(\lambda{\otimes}\boldsymbol \teta)$-a.e.\ in $[0,T]\times E$.
\end{remark}

\subsection{Existence and uniqueness of solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system}
\label{subsec:main-properties}
Let us now collect a few basic structural properties of solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance. Recall that we always adopt \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}}. Following an argument by Gigli~\cite{Gigli10}, we first use the convexity of $\mathscr{D}$ to deduce uniqueness.

\begin{theorem}[Uniqueness]
\label{thm:uniqueness}
Suppose that $\mathscr{D}$ is convex and the energy density $\upphi$ is strictly convex.
Suppose that $\rho^1,\, \rho^2$ satisfy the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance \eqref{R-Rstar-balance} and are identical at time zero. Then $\rho_t^1 = \rho_t^2$ for every $t\in [0,T]$.
\end{theorem}

\begin{proof}
Let ${\boldsymbol j}^i\in {\mathcal M}((0,T)\times E)$ be fluxes such that $\mathscr L_t(\rho^i,{\boldsymbol j}^i)=0$ for every $t\in[0,T]$, and let us set
\begin{displaymath}
\rho_t:=\frac 12(\rho_t^1+\rho_t^2),\quad {\boldsymbol j}:=\frac12({\boldsymbol j}^1+{\boldsymbol j}^2).
\end{displaymath}
By the linearity of the continuity equation we have that $(\rho,{\boldsymbol j})\in \CE 0T$ with $\rho_0=\rho^1_0=\rho^2_0$, so that by convexity
\begin{align*}
\mathscr E(\rho_t) &\ge\mathscr E(\rho_0)- \int_0^t \left( \mathscr R(\rho_r, {\boldsymbol j}_r) + \mathscr{D}(\rho_r) \right) \mathrm{d} r \\
&\ge \mathscr E(\rho_0)- \frac12\int_0^t \left( \mathscr R(\rho^1_r, {\boldsymbol j}^1_r) + \mathscr{D}(\rho^1_r) \right) \mathrm{d} r - \frac12\int_0^t \left( \mathscr R(\rho^2_r, {\boldsymbol j}^2_r) + \mathscr{D}(\rho^2_r) \right) \mathrm{d} r \\
& =\frac 12\mathscr E(\rho^1_t)+\frac12\mathscr E(\rho^2_t).
\end{align*}
Since $\mathscr E$ is strictly convex, we deduce $\rho^1_t=\rho^2_t$.
\end{proof}

\begin{theorem}[Existence and stability]
\label{thm:existence-stability}
Let us suppose that the Fisher-information functional $\mathscr{D}$ is lower semicontinuous with respect to~setwise convergence (e.g.~if $\pi$ is purely atomic, or ${\mathrm D}_\upphi$ is convex, see Proposition \ref{PROP:lsc}).
\begin{enumerate}[label=(\arabic*)]
\item \label{thm:existence-stability:p1} For every $\rho_0\in {\mathcal M}^+(V)$ with $\mathscr E(\rho_0)<{+\infty}$ there exists a solution $\rho:[0,T]\to {\mathcal M}^+(V)$ of the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system starting from $\rho_0$.
\item \label{thm:existence-stability:p2} Every sequence $(\rho^n_t)_{t\in [0,T]}$, $n\in\mathbb{N}$, of solutions to the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system such that
\begin{equation}
\label{eq:195}
\sup_{n\in \mathbb{N}} \mathscr E(\rho^n_0)<{+\infty}
\end{equation}
has a subsequence converging setwise to a limit $(\rho_t)_{t\in [0,T]}$ for every $t\in [0,T]$.
\item \label{thm:existence-stability:p3} Let $(\rho^n_t)_{t\in [0,T]}$ be a sequence of solutions, with corresponding fluxes $({\boldsymbol j}^n_t)_{t\in[0,T]}$. Let $\rho^n_t$ converge setwise to $\rho_t$ for every $t\in [0,T]$, and assume that
\begin{equation}
\lim_{n\to\infty}\mathscr E(\rho^n_0)=\mathscr E(\rho_0).\label{eq:194}
\end{equation}
Then $\rho$ is a solution as well, with a suitable flux ${\boldsymbol j}$, and the following additional convergence properties hold:
\begin{subequations}
\label{eq:196}
\begin{align}
\label{eq:196a}
\lim_{n\to\infty}\int_0^T\mathscr R(\rho_t^n,{\boldsymbol j}_t^n)\,\mathrm{d} t&= \int_0^T\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t,\\
\label{eq:196b}
\lim_{n\to\infty}\int_0^T\mathscr{D}(\rho_t^n)\,\mathrm{d} t&= \int_0^T\mathscr{D}(\rho_t)\,\mathrm{d} t,\\
\label{eq:196c}
\lim_{n\to\infty}\mathscr E(\rho^n_t)&=\mathscr E(\rho_t)\quad \text{for every }t\in [0,T].
\end{align}
\end{subequations}
If moreover $\mathscr E$ is strictly convex, then $\rho^n$ converges to $\rho$ uniformly in $[0,T]$ with respect to the total variation distance.
\end{enumerate} \end{theorem} \begin{proof} Part \textit{\ref{thm:existence-stability:p2}} follows immediately from Proposition \ref{prop:compactness}. For part \textit{\ref{thm:existence-stability:p3}}, the three statements of~\eqref{eq:196} \emph{as inequalities} \color{purple} $\leq$ \color{black} follow from earlier results: for~\eqref{eq:196a} this follows again from Proposition~\ref{prop:compactness}, for~\eqref{eq:196b} from Proposition~\ref{PROP:lsc}, and for~\eqref{eq:196c} from Lemma~\ref{l:lsc-general}. Using these inequalities to pass to the limit in the equation $\mathscr L_T(\rho^n,{\boldsymbol j}^n)=0$ we obtain that $\mathscr L_T(\rho,{\boldsymbol j})\le 0$. On the other hand, since $\mathscr L_T(\rho,{\boldsymbol j})\geq0$ by the chain-rule estimate~\eqref{eq:CR3}, \color{purple} standard arguments yield \color{black} the equalities in~\eqref{eq:196}. When $\mathscr E$ is strictly convex, we obtain the convergence in $L^1(V,\pi)$ of the densities $u^n_t=\mathrm{d} \rho^n_t/\mathrm{d}\pi$ for every $t\in [0,T]$. We then use the equicontinuity estimate \eqref{eq:65} of Proposition \ref{prop:compactness} to conclude uniform convergence of the sequence $(\rho_n)_n$ with respect to the total variation distance. For part \textit{\ref{thm:existence-stability:p1}}, when the density $u_0$ of $\rho_0$ takes value in a compact interval $[a,b]$ with $0<a<b<\infty$, the existence of a solution follows by Theorem \ref{thm:sg-sol-is-var-sol} below. The general case follows by a standard approximation of $u_0$ by truncation and applying the stability properties of parts \textit{\ref{thm:existence-stability:p2}} and \textit{\ref{thm:existence-stability:p3}}. \end{proof} \subsection{Stationary states and attraction} \label{ss:5.4} Let us finally make a few comments on stationary measures and on the asymptotic behaviour of solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. The definition of invariant measures was already given in Section~\ref{subsub:kernels}, and we recall it for convenience. \begin{definition}[Invariant and stationary measures] Let $\rho=u\pi\in D(\mathscr E)$ be given. \begin{enumerate} \item We say that $\rho$ is \emph{invariant} if $\kernel\kappa\rho(\mathrm{d} x\mathrm{d} y )= \rho(\mathrm{d} x)\kappa(x,\mathrm{d} y)$ has equal marginals, i.e.\ ${\mathsf x}_\# \kernel\kappa\rho = {\mathsf y}_\# \kernel\kappa\rho$. \item We say that $\rho$ is \emph{stationary} if the constant curve $\rho_t\equiv \rho$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. \end{enumerate} \end{definition} Note that we always assume that $\pi$ is invariant (see Assumption~\ref{ass:V-and-kappa}). It is immediate to check that \begin{align} \label{eq:197} \rho\text{ is stationary}\quad \Longleftrightarrow\quad \mathscr{D}(\rho)=0 \quad &\Longleftrightarrow\quad {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}_\upphi(u(x),u(y))=0\quad\text{$\boldsymbol \teta$-a.e.} \end{align} If a measure $\rho$ is invariant, then $u=\mathrm{d} \rho/\mathrm{d}\pi$ satisfies \begin{equation} \label{eq:198} u(x)=u(y)\quad\text{for $\boldsymbol \teta$-a.e.~$(x,y)\in E$}, \end{equation} which implies~\eqref{eq:197}; therefore invariant measures are stationary. Depending on the system, the set of stationary measures might also contain non-invariant measures, as the next example shows. 
\begin{example} Consider the example of the cosh-type dissipation~\eqref{choice:cosh}, \[ \upalpha(u,v) := \sqrt{uv}, \quad\Psi^*(\xi) := 4\Bigl(\cosh\frac\xi2-1\Bigr), \] but combine this with a Boltzmann entropy with an additional multiplicative constant $0<\gamma\leq 1$: \[ \upphi(s) := \gamma(s\log s - s + 1). \] The case $\gamma=1$ corresponds to the example of~\eqref{choice:cosh}, and for general $0<\gamma\leq 1$ we find that \[ \rmF(u,v) = u^{\frac{1-\gamma}2}v^{\frac{1+\gamma}2} - u^{\frac{1+\gamma}2}v^{\frac{1-\gamma}2}, \] resulting in the evolution equation (see~\eqref{eq:180}) \[ \partial_t u(x) = \int_{y\in V} \Bigl[u(x)^{\frac{1-\gamma}2}u(y)^{\frac{1+\gamma}2} - u(x)^{\frac{1+\gamma}2}u(y)^{\frac{1-\gamma}2}\Bigr]\, \kappa(x,\mathrm{d} y). \] When $0<\gamma<1$, any function of the form $u(x) = \mathbbm{1}\{x\in A\}$ for a measurable $A\subset V$ is a stationary point of this equation, and equivalently any measure $\pi \mres A$ is a stationary solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. For $0<\gamma<1$ the set of stationary measures is therefore much larger than the set of invariant measures. \end{example} As in the case of linear evolutions, $(\mathscr E,\mathscr R,\mathscr R^*)$ systems behave well with respect to decomposition of $\pi$ into mutually singular invariant measures. \begin{theorem}[Decomposition] \label{thm:decomposition} Let us suppose that $\pi=\pi^1+\pi^2$ with $\pi^1,\pi^2\in {\mathcal M}^+(V)$ mutually singular and invariant. Let $\rho:[0,T]\to{\mathcal M}^+(V)$ be a curve with $\rho_t=u_t\pi\ll\pi$ and let $\rho^i_t:=u_t\pi^i$ be the decomposition of $\rho_t$ with respect to $\pi^1$ and $\pi^2$. Then $\rho$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system if and only if each curve $\rho^i_t$, $i=1,2$, is a solution of the $(\mathscr E^i,\mathscr R^i,(\mathscr R^i)^*)$ system, where $\mathscr E^i(\mu):=\mathscr F_\upphi(\mu|\pi^i)$ is the relative entropy with respect to the measure $\pi^i$, and $\mathscr R^i,(\mathscr R^i)^*$ are induced by $\pi^i$. \end{theorem} \begin{remark} It is worth noting that when $\upalpha$ is $1$-homogeneous then $\mathscr R^i=\mathscr R$ and $(\mathscr R^i)^*=\mathscr R^*$ do not depend on $\pi^i$, cf.\ Corollary \ref{cor:decomposition}. The decomposition is thus driven just by the splitting of the entropy $\mathscr E$. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:decomposition}] Note that the assumptions of invariance and mutual singularity of $\pi^1$ and $\pi^2$ imply that~$\boldsymbol \teta$ has a singular decomposition $\boldsymbol \teta = \boldsymbol \teta^1 + \boldsymbol \teta^2 := \kernel\kappa{\pi^1} + \kernel\kappa{\pi^2}$, where the $\kernel\kappa{\pi^i}$ are symmetric. It then follows that $\mathscr E(\rho_t)=\mathscr E^1(\rho^1_t)+\mathscr E^2(\rho^2_t)$ and $\mathscr{D}(\rho_t)=\mathscr{D}^1(\rho^1_t)+\mathscr{D}^2(\rho^2_t)$, where \begin{displaymath} \mathscr{D}^i(\rho^i)=\frac12\iint_E {\mathrm D}_\upphi(u(x),u(y))\,\boldsymbol \teta^i(\mathrm{d} x,\mathrm{d} y).
\end{displaymath} Finally, Corollary~\ref{cor:decomposition} shows that decomposing ${\boldsymbol j}$ as the sum ${\boldsymbol j}^1+{\boldsymbol j}^2$ where ${\boldsymbol j}^i\ll\boldsymbol \teta^i$, the pairs $(\rho^i,{\boldsymbol j}^i)$ belong to $\CE 0T$ and $\mathscr R(\rho_t,{\boldsymbol j}_t)= \mathscr R^1(\rho^1_t,{\boldsymbol j}^1_t)+ \mathscr R^2(\rho^2_t,{\boldsymbol j}^2_t)$. \end{proof} \begin{theorem}[Asymptotic behaviour] Let us suppose that the only stationary measures are multiples of $\pi$, and that $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence. Then every solution $\rho:[0,\infty)\to {\mathcal M}} \def\calN{{\mathcal N}} \def\calO{{\mathcal O}^+(V)$ of the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system converges setwise to $c\pi$, where $c:=\rho_0(V)/\pi(V)$. \end{theorem} \begin{proof} Let us fix a vanishing sequence $\tau_n\downarrow0$ such that $\sum_n\tau_n={+\infty}$. Let $\rho_\infty$ be any limit point with respect to ~setwise convergence of the curve $\rho_t$ along a diverging sequence of times $t_n\uparrow{+\infty}$. Such a point exists since the curve $\rho$ is contained in a sublevel set of $\mathscr E$. Up to extracting a further subsequence, it is not restrictive to assume that $t_{n+1}\ge t_n+\tau_n$. Since \begin{align*} \sum_{n\in \mathbb{N}}\int_{t_n}^{t_n+\tau_n}\Big(\mathscr R(\rho_t,{\boldsymbol j}_t)+\mathscr{D}(\rho_t)\Big)\,\mathrm{d} t \le \int_0^{{+\infty}}\Big(\mathscr R(\rho_t,{\boldsymbol j}_t)+\mathscr{D}(\rho_t)\Big)\,\mathrm{d} t\le \mathscr E(\rho_0)<\infty \end{align*} and the series of $\tau_n$ diverges, we find \begin{displaymath} \liminf_{n\to{+\infty}}\frac1{\tau_n}\int_{t_n}^{t_n+\tau_n}\mathscr{D}(\rho_t)\,\mathrm{d} t=0,\quad \lim_{n\to\infty}\int_{t_n}^{t_n+\tau_n}\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t=0. \end{displaymath} Up to extracting a further subsequence, we can suppose that the above $\liminf$ is a limit and we can select $t'_n\in [t_n,t_n+\tau_n]$ such that \begin{displaymath} \lim_{n\to\infty}\mathscr{D}(\rho_{t_n'})=0,\quad \lim_{n\to\infty}\int_{t_n}^{t_n'}\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t=0. \end{displaymath} Recalling the definition \eqref{def-psi-rig} of the Dynamical-Variational Transport cost and the monotonicity with respect to $\tau$, we also get $\lim_{n\to\infty} \mathscr W(\tau_n,\rho_{t_n},\rho_{t_n'})=0$, so that Theorem \ref{thm:props-cost}(5) and the relative compactness of the sequence $(\rho_{t_n'})_n$ yield $\rho_{t_n'}\to \rho_\infty$ setwise. The lower semicontinuity of $\mathscr{D}$ yields $\mathscr{D}(\rho_\infty)=0$ so that $\rho_\infty=c\pi$ thanks to the uniqueness assumption and to the conservation of the total mass. Since we have uniquely identified the limit point, we conclude that the \emph{whole} curve $\rho_t$ converges setwise to $\rho_\infty$ as $t\to{+\infty}$. \end{proof} \section{Dissipative evolutions in $L^1(V,\pi)$} \label{s:ex-sg} In this section we construct solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ formulation by studying their equivalent characterization as abstract evolution equations in $L^1(V,\pi)$. Throughout this section we adopt Assumption~\ref{ass:V-and-kappa}. 
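As a purely illustrative special case (not needed in what follows), suppose that $V=\{1,\dots,N\}$ is finite, $\pi=\sum_{i}\pi_i\delta_i$ with $\pi_i>0$, and $\kappa(i,\cdot)=\sum_{j}\kappa_{ij}\delta_j$. Then a curve $\rho_t=u_t\pi$ is described by the vector of densities $(u_1(t),\dots,u_N(t))$, and the abstract Cauchy problem studied in this section reduces to a finite system of ODEs of the form
\begin{displaymath}
  \dot u_i(t)=\sum_{j=1}^N {\mathrm G}\bigl(i,j;u_i(t),u_j(t)\bigr)\,\kappa_{ij},\qquad i=1,\dots,N,
\end{displaymath}
with ${\mathrm G}$ the integrand introduced in the next subsection; the results below can thus be read as an $L^1$ analogue of classical well-posedness theorems for ODE systems with a one-sided Lipschitz nonlinearity.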
\subsection{Integro-differential equations in $L^1$} Let $J\subset \mathbb{R}$ be a closed interval (not necessarily bounded) and let us first consider a map ${\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}:E\times J^2\to \mathbb{R}$ with the following properties: \begin{subequations} \label{subeq:G} \begin{enumerate} \item measurability with respect to ~$(x,y)\in E$: \begin{equation} \label{eq:113} \text{for every $u,v\in J$ the map } (x,y)\mapsto {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)\text{ is measurable}; \end{equation} \item continuity with respect to ~$u,v$ and linear growth: there exists a constant $M>0$ such that \begin{equation} \label{eq:114} \begin{gathered} \text{for every }(x,y)\in E\quad (u,v)\mapsto {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)\text{ is continuous and } \\ |{\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)|\le M(1+|u|+|v|) \quad \text{for every }u,v\in J, \end{gathered} \end{equation} \item skew-symmetry: \begin{equation} \label{eq:129} {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)=-{\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(y,x;v,u),\quad \text{for every } (x,y)\in E,\ u,v\in J, \end{equation} \item $\ell$-dissipativity: there exists a constant $\ell\ge0$ such that for every $(x,y)\in E$, $u,u',v\in J$: \begin{equation} \label{eq:130} u\le u'\quad\Rightarrow\quad {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u',v)- {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)\le \ell(u'-u). \end{equation} \end{enumerate} \end{subequations} \begin{remark} \label{rem:spoiler} Note that \eqref{eq:130} is surely satisfied if ${\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}$ is $\ell$-Lipschitz in $(u,v)$, uniformly with respect to ~$(x,y)$. The `one-sided Lipschitz condition'~\eqref{eq:130} however is weaker than the standard Lipschitz condition; this type of condition is common in the study of ordinary differential equations, since it is still strong enough to guarantee uniqueness and non-blowup of the solutions (see e.g.~\cite[Ch.~IV.12]{HairerWanner96}). Let us also remark that \eqref{eq:129} and \eqref{eq:130} imply the reverse monotonicity property of ${\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}$ with respect to ~$v$, \begin{equation} \label{eq:130bis} v\ge v'\quad\Rightarrow\quad {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v')- {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)\le \ell(v-v')\,, \end{equation} and the joint estimate \begin{equation} \label{eq:120} u\le u',\ v\ge v'\quad\Rightarrow\quad {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u',v')- {\mathrm G}} \def\rmH{{\mathrm H}} \def\rmI{{\mathrm I}(x,y;u,v)\le \ell\big[(u'-u)+(v-v')\big]. \end{equation} \end{remark} Let us set $L^1(V,\pi;J):=\{u\in L^1(V,\pi):u(x)\in J\ \text{for $\pi$-a.e.~$x\in V$}\}$. \begin{lemma} \label{le:tedious} Let $u:V\to J$ be a measurable $\pi$-integrable function. 
\begin{enumerate} \item We have \begin{equation} \label{eq:140} \int_V\big|{\mathrm G}(x,y;u(x),u(y))\big|\,\kappa(x,\mathrm{d} y)<{+\infty} \quad\text{for $\pi$-a.e.~$x\in V$}, \end{equation} and the formula \begin{equation} \label{eq:115} \boldsymbol G[u](x):= \int_V {\mathrm G}(x,y;u(x),u(y))\,\kappa(x,\mathrm{d} y) \end{equation} defines a function $\boldsymbol G[u]$ in $L^1(V,\pi)$ that only depends on the Lebesgue equivalence class of $u$ in $L^1(V,\pi)$. \item The map $\boldsymbol G:L^1(V,\pi;J)\to L^1(V,\pi)$ is continuous. \item The map $\boldsymbol G$ is $(2\ell\, \|\kappa_V\|_\infty)$-dissipative, in the sense that for all $h>0$, \begin{equation} \label{eq:141} \big\|(u_1- u_2)-h (\boldsymbol G[u_1]-\boldsymbol G[u_2])\big\|_{L^1(V,\pi)}\ge (1-2\ell \|\kappa_V\|_\infty \,h)\|u_1-u_2\|_{L^1(V,\pi)} \end{equation} for every $u_1,u_2\in L^1(V,\pi;J)$. \item If $a\in J$ satisfies \begin{equation} \label{eq:123a} 0={\mathrm G}(x,y;a,a)\le {\mathrm G}(x,y;a,v)\quad \text{for every }(x,y)\in E,\ v\ge a\,, \end{equation} then for every function $u\in L^1(V,\pi;J)$ we have \begin{equation} \label{eq:142} \begin{aligned} u\ge a\text{ $\pi$-a.e.}\quad &\Rightarrow\quad \lim_{h\downarrow0}\frac1h \int_V \Big(a-(u+h\boldsymbol G[u])\Big)_+\,\mathrm{d} \pi=0\,. \end{aligned} \end{equation} If $b\in J$ satisfies \begin{equation} \label{eq:123b} 0={\mathrm G}(x,y;b,b)\ge {\mathrm G}(x,y;b,v)\quad \text{for every }(x,y)\in E,\ v\le b, \end{equation} then for every function $u\in L^1(V,\pi;J)$ we have \begin{equation} u\le b\text{ $\pi$-a.e.}\quad \Rightarrow\quad \lim_{h\downarrow0}\frac 1h\int_V \Big(u+h\boldsymbol G[u]-b\Big)_+\,\mathrm{d} \pi=0\,.\label{eq:145} \end{equation} \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} Since ${\mathrm G}$ is a Carath\'eodory function, for every measurable $u$ the map $(x,y)\mapsto {\mathrm G}(x,y;u(x),u(y))$ is measurable. Since \begin{equation} \label{eq:143} \begin{aligned} \iint_E |{\mathrm G}(x,y;u(x),u(y))|\,\kappa(x,\mathrm{d} y)\pi(\mathrm{d} x)&= \iint_E |{\mathrm G}(x,y;u(x),u(y))|\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \\&\le M \|\kappa_V\|_\infty \bigg(1+2\int_V |u|\,\mathrm{d}\pi\bigg)\,, \end{aligned} \end{equation} the first claim follows by Fubini's Theorem \cite[II, 14]{Dellacherie-Meyer78}. \noindent \textit{(2)} Let $(u_n)_{n\in \mathbb{N}}$ be a sequence of functions strongly converging to $u$ in $L^1(V,\pi;J)$. Up to extracting a further subsequence, it is not restrictive to assume that $u_n$ also converges to $u$ pointwise $\pi$-a.e. We have \begin{equation} \label{eq:144} \big\|\boldsymbol G[u_n]-\boldsymbol G[u]\big\|_{L^1(V,\pi)}= \iint_E \Big|{\mathrm G}(x,y;u_n(x),u_n(y))- {\mathrm G}(x,y;u(x),u(y))\Big|\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\, .
\end{equation} Since the integrand in \eqref{eq:144}, which we denote by $g_n$, vanishes $\boldsymbol \teta$-a.e.\ in $E$ as $n\to\infty$, by the generalized Dominated Convergence Theorem (see for instance \cite[Thm.\ 4, page 21]{Evans-Gariepy}) it is sufficient to show that there exist positive functions $h_n$ pointwise converging to $h$ such that \begin{displaymath} g_n\le h_n\ \boldsymbol \teta\text{-a.e.~in $E$},\qquad \lim_{n\to\infty}\iint_E h_n\,\mathrm{d}\boldsymbol \teta=\iint_E h\,\mathrm{d}\boldsymbol \teta. \end{displaymath} We select $h_n(x,y):=M(2+|u_n(x)|+|u_n(y)|+|u(x)|+|u(y)|)$ and $ h(x,y):=2M(1+|u(x)|+|u(y)|)$. This proves the result. \noindent \textit{(3)} Let us set \begin{displaymath} \mathfrak s(r):= \begin{cases} 1&\text{if }r>0\,,\\ -1&\text{if }r\le 0\,, \end{cases} \end{displaymath} and observe that the left-hand side of \eqref{eq:141} may be estimated from below by \begin{align*} \big\|(u_1- u_2)-h (\boldsymbol G[u_1]-\boldsymbol G[u_2])\big\|_{L^1(V,\pi)} &\ge \|u_1-u_2\|_{L^1(V,\pi)} \\ &\hspace{2em}- h\int_V \mathfrak s(u_1-u_2)\big(\boldsymbol G[u_1]-\boldsymbol G[u_2]\big) \,\mathrm{d}\pi \end{align*} for all $h>0$. Therefore, estimate \eqref{eq:141} follows if we prove that \begin{equation} \label{eq:132} \delta:=\int_V \mathfrak s(u_1-u_2)\big(\boldsymbol G[u_1]-\boldsymbol G[u_2]\big) \,\mathrm{d}\pi \le 2\ell \|\kappa_V\|_\infty \, \|u_1-u_2\|_{L^1(V,\pi)}. \end{equation} Let us set \begin{displaymath} \Delta_{\mathrm G}(x,y):= {\mathrm G}(x,y;u_{1}(x),u_{1}(y))- {\mathrm G}(x,y;u_{2}(x),u_{2}(y)), \end{displaymath} and \begin{equation} \label{eq:133} \Delta_\mathfrak s(x,y):=\mathfrak s(u_{1}(x)-u_{2}(x))-\mathfrak s(u_{1}(y)-u_{2}(y)). \end{equation} Since $ \Delta_{\mathrm G}(x,y)=-\Delta_{\mathrm G}(y,x)$, using \eqref{eq:129} we have \begin{align*} \delta= \int_V \mathfrak s\big(u_{1}-u_{2}\big)\,\big(\boldsymbol G[u_{1}]-\boldsymbol G[u_{2}]\big)\,\mathrm{d}\pi &=\iint_E \mathfrak s(u_{1}(x)-u_{2}(x))\Delta_{\mathrm G}(x,y) \,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \\&= \frac12\iint_E \Delta_\mathfrak s(x,y) \Delta_{\mathrm G}(x,y) \,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) . \end{align*} Setting $\Delta(x):=u_{1}(x)-u_{2}(x)$ we observe that by \eqref{eq:120} \begin{align*} \Delta(x)>0,\ \Delta(y)>0\quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=0,\\ \Delta(x)\le 0,\ \Delta(y)\le 0\quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=0,\\ \Delta(x)\le0,\ \Delta(y)>0 \quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=-2,\ \Delta_{\mathrm G}(x,y)\ge-\ell\big(\Delta(y)-\Delta(x)\big)\\ \Delta(x)>0,\ \Delta(y)\le 0\quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=2,\ \Delta_{\mathrm G}(x,y)\le \ell\big(\Delta(x)-\Delta(y)\big). \end{align*} We deduce that \begin{displaymath} \delta\le \ell \iint_E \Big[|u_{1}(x)-u_{2}(x)|+ |u_{1}(y)-u_{2}(y)|\Big]\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\le 2\ell \|\kappa_V\|_\infty \,\|u_1-u_2\|_{L^1(V,\pi)}. \end{displaymath} \noindent \textit{(4)} We will only address the proof of property \eqref{eq:142}, as the argument for \eqref{eq:145} is completely analogous. Suppose that $u\ge a$ $\pi$-a.e.
Let us first observe that if $u(x)=a$, then from \eqref{eq:123a}, \begin{displaymath} \boldsymbol G[u](x)= \int_V {\mathrm G}(x,y;a,u(y))\,\kappa(x,\mathrm{d} y)\ge 0\,. \end{displaymath} We set $f_h(x):=h^{-1}(a-u(x))-\boldsymbol G[u](x)$, observing that $f_h(x)$ is monotonically decreasing to $-\infty$ if $u(x)>a$ and $f_h(x)=-\boldsymbol G[u](x)\le 0$ if $u(x)=a$, so that $\lim_{h\downarrow0}\big(f_h(x)\big)_+=0$. Since $\big(f_h\big)_+\le \big(\!-\!\boldsymbol G[u]\big)_+$ we can apply the Dominated Convergence Theorem to obtain \begin{displaymath} \lim_{h\downarrow0} \int_V \big(f_h(x)\big)_+\,\pi(\mathrm{d} x)=0\,, \end{displaymath} thereby concluding the proof. \end{proof} In what follows, we shall address the Cauchy problem \begin{subequations} \label{eq:119-Cauchy} \begin{align} \label{eq:119} \dot u_t&=\boldsymbol G[u_t]\quad\text{in $L^1(V,\pi)$ for every }t\ge0,\\ \label{eq:119-0} u\restr{t=0}&=u_0. \end{align} \end{subequations} \begin{lemma}[Comparison principles] \label{le:positivity} Let us suppose that the map ${\mathrm G}$ satisfies {\rm (\ref{subeq:G}a,b,c)} with $J=\mathbb{R}$. \begin{enumerate} \item If $\bar u\in \mathbb{R}$ satisfies \begin{equation} \label{eq:123abis} 0={\mathrm G}(x,y;\bar u,\bar u)\le {\mathrm G}(x,y;\bar u,v)\quad \text{for every }(x,y)\in E,\ v\ge \bar u, \end{equation} then for every initial datum $u_0\ge\bar u$ the solution $u$ of \eqref{eq:119-Cauchy} satisfies $u_t\ge \bar u$ $\pi$-a.e.~for every $t\ge0$. \item If $\bar u\in \mathbb{R}$ satisfies \begin{equation} \label{eq:123bbis} 0={\mathrm G}(x,y;\bar u,\bar u)\ge {\mathrm G}(x,y;\bar u,v)\quad \text{for every }(x,y)\in E,\ v\le \bar u, \end{equation} then for every initial datum $u_0\le\bar u$ the solution $u$ of \eqref{eq:119-Cauchy} satisfies $u_t\le \bar u$ $\pi$-a.e.~for every $t\ge0$. \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} Let us first consider the case $\bar u=0$. We define a new map $\overline {\mathrm G}$ by symmetry: \begin{equation} \label{eq:126} \overline {\mathrm G}(x,y;u,v):={\mathrm G}(x,y;u,|v|) \end{equation} which satisfies the same structural properties (\ref{subeq:G}a,b,c), and moreover \begin{equation} \label{eq:127} 0=\overline{\mathrm G}(x,y;0,0) \le \overline{\mathrm G}(x,y;0,v)\quad \text{for every } x,y\in V,\ v\in \mathbb{R}. \end{equation} We call $\overline{\boldsymbol G}$ the operator induced by $\overline{\mathrm G}$, and $\bar u$ the solution curve of the corresponding Cauchy problem starting from the same (nonnegative) initial datum $u_0$. If we prove that $\bar u_t\ge0$ for every $t\ge0$, then $\bar u_t$ is also the unique solution of the original Cauchy problem \eqref{eq:119-Cauchy} induced by ${\mathrm G}$, so that we obtain the positivity of $u_t$.
Note that \eqref{eq:127} and property \eqref{eq:130} yield \begin{equation} \label{eq:124} \overline{\mathrm G}(x,y;u,v)\ge \overline{\mathrm G}(x,y;u,v)-\overline{\mathrm G}(x,y;0,v)\ge \ell\,u\qquad\text{for $u\le 0$}\,. \end{equation} We set $\upbeta(r):=r_-=\max(0,-r)$ and $P_t:=\{x\in V:\bar u_t(x)<0\}$ for each $t\ge 0$. Due to the Lipschitz continuity of $\upbeta$, the map $t\mapsto b(t):=\int_V \upbeta(\bar u_t)\,\mathrm{d}\pi$ is absolutely continuous. Hence, the chain-rule formula applies, which, together with \eqref{eq:124}, gives \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} b(t) &= -\int_{P_t} \overline{\boldsymbol G}[\bar u_t](x)\,\pi(\mathrm{d} x) = -\iint_{P_t\times V} \overline{\mathrm G}(x,y;\bar u_t(x),\bar u_t(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \\ &\le \ell \iint_{P_t\times V} (-\bar u_t(x))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) = \ell \iint_E \upbeta(\bar u_t(x))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \le \ell \|\kappa_V\|_\infty b(t)\,. \end{align*} Since $b$ is nonnegative and $b(0)=0$, we conclude, by Gronwall's inequality, that $b(t)=0$ for every $t\ge0$ and therefore $\bar u_t\ge0$. In order to prove the statement for a general $\bar u\in \mathbb{R}$ it is sufficient to consider the new map $\widetilde {\mathrm G}(x,y;u,v):={\mathrm G}(x,y;u+\bar u,v+\bar u)$ and the curve $\widetilde u_t:=u_t-\bar u$, starting from the nonnegative initial datum $\widetilde u_0:=u_0-\bar u$. \noindent \textit{(2)} It suffices to apply the transformation $\widetilde{\mathrm G}(x,y;u,v):=-{\mathrm G}(x,y;-u,-v)$ and set $\widetilde u_t:=-u_t$. We then apply the previous claim, yielding the lower bound $-\bar u$ for $\widetilde u_t$, i.e.\ the upper bound $\bar u$ for $u_t$. \end{proof} We can now state our main result concerning the well-posedness of the Cauchy problem~\eqref{eq:119-Cauchy}. \begin{theorem} \label{thm:localization-G} Let $J\subset \mathbb{R}$ be a closed interval and let ${\mathrm G}:E\times J^2\to\mathbb{R}$ be a map satisfying conditions {\rm(\ref{subeq:G})}. Let us also suppose that, if $a=\inf J>-\infty$, then \eqref{eq:123a} holds, and that, if $b=\sup J<{+\infty}$, then \eqref{eq:123b} holds. \begin{enumerate} \item For every $u_0\in L^1(V,\pi;J)$ there exists a unique curve $u\in \rmC^1([0,\infty);L^1(V,\pi; J))$ solving the Cauchy problem \eqref{eq:119-Cauchy}. \item $\int_V u_t\,\mathrm{d}\pi=\int_V u_0\,\mathrm{d} \pi$ for every $t\ge0$. \item If $u,v$ are two solutions with initial data $u_0,v_0\in L^1(V,\pi;J)$ respectively, then \begin{equation} \label{eq:146} \|u_t-v_t\|_{L^1(V,\pi)}\le \rme^{2 \|\kappa_V\|_\infty \ell\, t}\|u_0-v_0\|_{L^1(V,\pi)}\quad \text{for every }t\ge0. \end{equation} \item If $\bar a\in J$ satisfies condition \eqref{eq:123a} and $u_0\ge \bar a$, then $u_t\ge \bar a$ for every $t\ge0$. Similarly, if $\bar b\in J$ satisfies condition \eqref{eq:123b} and $u_0\le \bar b$, then $u_t\le \bar b$ for every $t\ge0$.
\item If $\ell=0$, then the evolution is order preserving: if $u,v$ are two solutions with initial data $u_0,v_0$ then \begin{equation} \label{eq:128} u_0\le v_0\quad\Rightarrow\quad u_t\le v_t\quad\text{for every }t\ge0. \end{equation} \end{enumerate} \end{theorem} \color{black} \begin{proof} Claims \textit{(1), (3), (4)} follow by the abstract generation result of \cite[\S 6.6, Theorem 6.1]{Martin76} applied to the operator $\boldsymbol G$ defined in the closed convex subset $D:=L^1(V,\pi;J)$ of the Banach space $L^1(V,\pi)$. \color{red} For the theorem to apply, \color{black} one has to check the continuity of $\boldsymbol G\color{red} :D\to L^1(V,\pi)\color{ddcyan}$ (Lemma \ref{le:tedious}(2)), its dissipativity \eqref{eq:141}, and the property \begin{displaymath} \liminf_{h\downarrow0} h^{-1} \inf_{v\in D} \|u+h\boldsymbol G[u]-v\|_{L^1(V,\pi)}=0 \quad\text{for every }u\in D\,. \end{displaymath} When $J=\mathbb{R}$, the inner infimum always is zero; if $J$ is a bounded interval $[a,b]$ then the property above follows from the estimates of Lemma \ref{le:tedious}(4), \color{red} since for any $u\in D$, \[ \inf_{v\in D}\int_V |u + h \boldsymbol G[u]-v|\,\mathrm{d}\pi \le \int_V \Bigl(a- (u+h\boldsymbol G[u])\Bigr)_+\mathrm{d}\pi + \int_V \Bigl(u+h\boldsymbol G[u]-b\Bigr)_+\mathrm{d}\pi\,. \] \color{black} When $J=[a,\infty)$ or $J = (-\infty,b]$ a similar reasoning applies. Claim \textit{(2)} is an immediate consequence of \eqref{eq:129}. Finally, when $\ell=0$, claim \textit{(5)} follows from the Crandall-Tartar Theorem \cite{Crandall-Tartar80}, stating that a non-expansive map in $L^1$ (cf.\ \eqref{eq:146}) \color{black} that satisfies claim \textit{(2)} is also order preserving. \end{proof} \subsection{Applications to dissipative evolutions} Let us now consider the map $\mathrm F: (0,+\infty)^2 \to \mathbb{R}$ \color{black} induced by the system $(\Psi^*,\upphi,\upalpha)$, first introduced in~\eqref{eq:184}, \begin{equation} \label{eq:appA-w-Psip-alpha} \mathrm F(u,v) := (\Psi^*)'\bigl( \upphi'(v)-\upphi'(u)\bigr)\, \upalpha(u,v) \quad\text{for every }u,v>0\,, \end{equation} with the corresponding integral operator: \begin{equation} \label{eq:162} \boldsymbol F[u](x):=\int_V \mathrm F(u(x),u(y))\,\kappa(x,\mathrm{d} y)\,. \end{equation} Since $\Psi^*$, $\upphi$ are $\rmC^1$ \color{red} convex \color{black} functions on $(0,{+\infty})$ and $\upalpha$ is locally Lipschitz in $(0,{+\infty})^2$ it is easy to check that $\mathrm F$ satisfies properties (\ref{subeq:G}a,b,c,d) in every compact subset $J\subset (0,{+\infty})$ and conditions \eqref{eq:123a}, \eqref{eq:123b} at every point $a,b\in J$. In order to focus on the structural properties of the \color{purple} {associated evolution problem, cf.\ \eqref{eq:159} below,} \color{black} we will mostly confine our analysis to the regular case, according to the following: \color{black} \begin{Assumptions}{$\rmF$} \label{ass:F} \color{black} The map $\mathrm F$ defined by \eqref{eq:appA-w-Psip-alpha} satisfies \color{red} the following properties:\color{black} \begin{gather} \label{eq:157} \mathrm F\text{ admits a continuous extension to $[0,\infty)$, } \intertext{and for every $R>0$ there exists $\ell_R\ge 0$ such that } \label{eq:158} v\le v'\quad\Rightarrow\quad \mathrm F(u,v)-\mathrm F(u,v')\le \ell_R\, (v'-v) \quad\text{for every }u,v,v'\in [0,R]. 
\end{gather} If moreover \eqref{eq:158} is satisfied in $[0,{+\infty})$ for some constant $\ell_\infty\ge 0$ and there exists a constant $M$ such that \begin{equation} \label{eq:161} |\mathrm F(u,v)|\le M(1+u+v)\quad\text{for every }u,v\ge0\,, \end{equation} we say that $(\rmF_\infty)$ holds. \end{Assumptions} Note that \eqref{eq:157} is always satisfied if $\upphi$ is differentiable at $0$. Estimate \eqref{eq:158} is also true if in addition $\upalpha$ is Lipschitz. However, as we have shown in Section \ref{subsec:examples-intro}, there are important examples in which $\upphi'(0)=-\infty$, but \eqref{eq:157} and \eqref{eq:158} hold nonetheless. Theorem \ref{thm:localization-G} yields the following general result: \begin{theorem} \label{thm:ODE-well-posedness} Consider the Cauchy problem \begin{equation} \label{eq:159} \dot u_t=\boldsymbol F[u_t] \quad t\ge0,\quad u\restr{t=0}=u_0. \end{equation} for a given nonnegative $u_0\in L^1(V,\pi)$. \begin{enumerate}[label=(\arabic*)] \item \label{thm:ODE-well-posedness-ex} For every $u_0\in L^1(V,\pi;J)$ with $J$ a compact subinterval of $(0,{+\infty})$ there exists a unique bounded and nonnegative solution $u\in \rmC^1([0,\infty);L^1(V,\pi;J))$ of \eqref{eq:159}. We will denote by $({\mathsf S}_t)_{t\ge0}$ the corresponding $\rmC^1$-semigroup \color{red} of nonlinear operators, \color{black} mapping $u_0$ to the value $u_t={\mathsf S}_t[u_0]$ at time $t$ of the solution $u$. \item $\int_V u_t\,\mathrm{d}\pi=\int_V u_0\,\mathrm{d}\pi$ for every $t\ge0$. \item If $a\le u_0\le b$ $\pi$-a.e.~in $V$, then $a\le u_t\le b$ $\pi$-a.e.~for every $t\ge0$. \item\label{thm:ODE-well-posedness-Lip} The solution satisfies the Lipschitz estimate \eqref{eq:146} (with $\ell=\ell_R$) and the order preserving property if $\ell_R=0$. \item If Assumption \ref{ass:F} holds, then $({\mathsf S}_t)_{t\ge0}$ can be extended to a semigroup defined on every essentially bounded nonnegative $u_0\in L^1(V,\pi)$ and satisfying the same properties \ref{thm:ODE-well-posedness-ex}--\ref{thm:ODE-well-posedness-Lip} above. \item If additionally $(\rmF_\infty)$ holds, then $({\mathsf S}_t)_{t\ge0}$ can be extended to a semigroup defined on every nonnegative $u_0\in L^1(V,\pi)$ and satisfying the same properties \ref{thm:ODE-well-posedness-ex}--\ref{thm:ODE-well-posedness-Lip} above. \end{enumerate} \end{theorem} We now show that the solution $u$ given by Theorem~\ref{thm:ODE-well-posedness} is also a solution in the sense of the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance. \begin{theorem} \label{thm:sg-sol-is-var-sol} Assume~\ref{ass:V-and-kappa}, \ref{ass:Psi}, \ref{ass:S}. Let $u_0 \in L^1(V;\pi)$ be nonnegative and $\pi$-essentially valued in a compact interval $J$ of $(0,\infty)$ and let $u={\mathsf S}[u_0] \in \mathrm{C}^1 ([0,{+\infty});L^1(V,\pi;J))$ be the solution to \eqref{eq:159} given by Theorem~\ref{thm:ODE-well-posedness}. \normalcolor Then the pair $(\rho,{\boldsymbol j} )$ given by \begin{align*} \rho_t (\mathrm{d} x)&: = u_t(x) \pi(\mathrm{d} x)\,,\\ \color{red} 2\color{black}{\boldsymbol j}_t (\mathrm{d} x\, \mathrm{d} y)&:= w_t(x,y)\, \boldsymbol \teta(\mathrm{d} x\, \mathrm{d} y)\,,\qquad w_t(x,y):=-\rmF(u_t(x),u_t(y))\,, \end{align*} is an element of $\CE0{+\infty}$ and satisfies the $(\mathscr E,\mathscr R$,$\mathscr R^*)$ Energy-Dissipation balance \eqref{R-Rstar-balance}. If\/ $\rmF$ satisfies the stronger assumption\/ \ref{ass:F}, then the same result holds for every essentially bounded and nonnegative initial datum. 
Finally, if also $(\rmF_\infty)$ holds, the above result is valid for every nonnegative $u_0\in L^1(V,\pi)$ with $\rho_0=u_0\pi\in D(\mathscr E)$. \end{theorem} \begin{proof} Let us first consider the case when $u_0$ satisfies $0<a\le u_0\le b<{+\infty}$ $\pi$-a.e. Then the solution $u={\mathsf S}[u_0]$ satisfies the same bounds, the map $w_t$ is uniformly bounded and $ \upalpha(u_t(x),u_t(y))\ge \upalpha(a,a)>0$, so that $(\rho,{\boldsymbol j})\in \CER 0T.$ We can thus apply Theorem \ref{thm:characterization}, obtaining the Energy-Dissipation balance \begin{equation} \label{eq:164L} \mathscr E(\rho_0)-\mathscr E(\rho_T)= \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t+ \int_0^T \mathscr{D}(\rho_t)\,\mathrm{d} t, \qquad\text{or equivalently}\quad \mathscr L(\rho,{\boldsymbol j})=0. \end{equation} In the case $0\leq u_0\leq b$ we can argue by approximation, setting $u_{0}^a:=\max\{u_0, a\}$, $a>0$, and considering the solution $u_t^a:={\mathsf S}_t[u_0^a]$ with corresponding flux $2 {\boldsymbol j}_t^a(\mathrm{d} x,\mathrm{d} y)=-\rmF(u_t^{a}(x),u_t^{a}(y))\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)$. Theorem \ref{thm:ODE-well-posedness}(4) shows that $u_t^a\to u_t$ strongly in $L^1(V,\pi)$ as $a\downarrow0$, and consequently also ${\boldsymbol j}_t^a\to {\boldsymbol j}_t$ setwise. Hence, thanks to Proposition \ref{prop:compactness} and Proposition \ref{PROP:lsc}, we can pass to the limit in \eqref{eq:164L} written for $(\rho^a,{\boldsymbol j}^a)$, obtaining $\mathscr L(\rho,{\boldsymbol j})\le 0$, which is still sufficient to conclude that $(\rho,{\boldsymbol j})$ is a solution thanks to Remark \ref{rem:properties}(3). Finally, if $(\rmF_\infty)$ holds, we obtain the general result by a completely analogous argument, approximating $u_0$ by the sequence $u_0^b:=\min\{u_0, b\}$ and letting $b\uparrow{+\infty}$. \end{proof} \section{Existence via Minimizing Movements} \label{s:MM} In this section we construct solutions to the $(\mathscr E,\mathscr R,\mathscr R^*)$ formulation via the \emph{Minimizing Movement} approach. The method uses only fairly general properties of $\mathscr{W}$, $\mathscr E$, and the underlying space, and it may well have broader applicability than the measure-space setting that we consider here (see Remark \ref{rmk:generaliz-topol}). Therefore we formulate the results in a slightly more general setup. We consider a topological space \begin{equation} \label{ambient-topological} (X,\sigma) = {\mathcal M}^+(V) \text{ endowed with the setwise topology}. \end{equation} For consistency with the above definition, in this section we will use the abstract notation $\weaksigmatoabs$ to denote setwise convergence in $X= {\mathcal M}^+(V)$.
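We recall, for later use, that a sequence $(\mu_n)_n\subset {\mathcal M}^+(V)$ converges setwise to $\mu$ precisely when $\mu_n(A)\to\mu(A)$ for every measurable $A\subset V$, or equivalently when
\begin{displaymath}
  \lim_{n\to\infty}\int_V f\,\mathrm{d}\mu_n=\int_V f\,\mathrm{d}\mu \qquad\text{for every bounded measurable } f:V\to\mathbb{R}.
\end{displaymath}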
Although throughout this paper we adopt the Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}, in this section we will base the discussion only on the following properties: \begin{Assumptions}{Abs} \label{ass:abstract} \begin{enumerate} \item the Dynamical-Variational Transport (DVT) cost $\mathscr{W}$ enjoys properties \eqref{assW}; \item the driving functional $\mathscr E$ enjoys the typical lower-semicontinuity and coercivity properties underlying the variational approach to gradient flows: \begin{subequations}\label{conditions-on-S} \begin{align} \label{e:phi1} &\mathscr E \geq 0 \quad \text{and} \quad \mathscr E \ \text{is $\sigma$-sequentially lower semicontinuous};\\ &\exists \rho^{*} \in X \quad\text{such that}\quad \forall\, \tau>0, \notag\\ \label{e:phipsi1} &\qquad \text{the map $\rho \mapsto \mathscr{W}(\tau,\rho^{*}, \rho) + \mathscr E(\rho)$ has $\sigma$-sequentially compact sublevels.} \end{align} \end{subequations} \end{enumerate} \end{Assumptions} Assumption~\ref{ass:abstract} is implied by Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}. The properties~\eqref{assW} are the content of Theorem~\ref{thm:props-cost}; condition \eqref{e:phi1} follows from Assumption~\ref{ass:S} and Proposition~\ref{PROP:lsc}; condition~\eqref{e:phipsi1} follows from the superlinearity of $\upphi$ at infinity and Prokhorov's characterization of compactness in the space of finite measures~\cite[Th.~8.6.2]{Bogachev07}. \subsection{The Minimizing Movement scheme and the convergence result} \label{ss:MM} The classical `Minimizing Movement' scheme for metric-space gradient flows~\cite{DeGiorgiMarinoTosques80,AmbrosioGigliSavare08} starts by defining approximate solutions through incremental minimization, \[ \rho^n \in \mathop{\rm argmin}_{\rho} \left( \frac1{2\tau} d(\rho^{n-1},\rho)^2 + \mathscr E(\rho)\right). \] In the context of this paper the natural generalization of the expression to be minimized is $\DVT\tau{\rho^{n-1}}\rho + \mathscr E(\rho)$. This can be understood by remarking that if $\mathscr R(\rho,\cdot)$ is quadratic, then it formally generates a metric \begin{align*} \frac12 d(\mu,\nu)^2 &= \inf\left\{ \int_0^1 \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t \, : \, \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \ \rho_0 = \mu, \text{ and }\rho_1 = \nu\right\}\\ &= \tau \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t \, : \, \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \ \rho_0 = \mu, \text{ and }\rho_\tau = \nu\right\}\\ &= \tau \DVT\tau{\mu}\nu. \end{align*} In this section we set up the approximation scheme featuring the cost $\mathscr{W}$. We consider a partition $ \{t_{\tau}^0 =0< t_{\tau}^1< \ldots<t_{\tau}^n < \ldots< t_{\tau}^{N_\tau-1}<T\le t_{\tau}^{N_\tau}\} $, with fineness $\tau : = \max_{n=1,\ldots, N_\tau} (t_{\tau}^{n} {-} t_{\tau}^{n-1})$, of the time interval $[0,T]$. The sequence of approximations $(\rho_\tau^n)_n$ is defined by the following recursive minimization scheme. Fix $\rho^\circ\in X$.
\begin{problem} \label{pr:time-incremental} Given $\rho_\tau^0:=\rho^\circ,$ find $\rho_\tau^1, \ldots, \rho_\tau^{N_\tau} \in X $ fulfilling \begin{equation} \label{eq:time-incremental} \rho_\tau^n \in \mathop{\rm argmin}_{v \in X} \Bigl\{ \mathscr{W}(t_{\tau}^n -t_{\tau}^{n-1}, \rho_\tau^{n-1}, v) +\mathscr E(v)\Bigr\} \quad \text{for $n=1, \ldots, {N_\tau}.$} \end{equation} \end{problem} \begin{lemma} \label{lemma:exist-probl-increme} Under assumption \ref{ass:abstract}, for any $\tau >0$ Problem \ref{pr:time-incremental} admits a solution $\{\rho_\tau^n\}_{n=1}^{{N_\tau}}\subset X$. \end{lemma} \par We denote by $\piecewiseConstant \rho \tau$ and $\underpiecewiseConstant \rho \tau$ the left-continuous and right-continuous piecewise constant interpolants of the values $\{\rho_\tau^n\}_{n=1}^{{N_\tau}}$ on the nodes of the partition, fulfilling $\piecewiseConstant \rho \tau(t_{\tau}^n)=\underpiecewiseConstant \rho \tau(t_{\tau}^n)=\rho_\tau^n$ for all $n=1,\ldots, {N_\tau}$, i.e., \begin{equation} \label{pwc-interp} \piecewiseConstant \rho \tau(t)=\rho_\tau^n \quad \forall t \in (t_{\tau}^{n-1},t_{\tau}^n], \quad \quad \underpiecewiseConstant \rho \tau(t)=\rho_\tau^{n-1} \quad \forall t \in [t_{\tau}^{n-1},t_{\tau}^n), \quad n=1,\ldots, {N_\tau}. \end{equation} Likewise, we denote by $\piecewiseConstant {\mathsf{t}}{\tau}$ and $\underpiecewiseConstant {\mathsf{t}}{\tau}$ the piecewise constant interpolants $\piecewiseConstant {\mathsf{t}}{\tau}(0): = \underpiecewiseConstant {\mathsf{t}}{\tau}(0): =0$, $ \piecewiseConstant {\mathsf{t}}{\tau}(T): = \underpiecewiseConstant {\mathsf{t}}{\tau}(T): =T$, and \begin{equation} \label{nodes-interpolants} \piecewiseConstant {\mathsf{t}} \tau(t)=t_{\tau}^n \quad \forall t \in (t_{\tau}^{n-1},t_{\tau}^n], \quad \quad \underpiecewiseConstant {\mathsf{t}} \tau(t)=t_{\tau}^{n-1} \quad \forall t \in [t_{\tau}^{n-1},t_{\tau}^n)\,. \end{equation} \par We also introduce another notion of interpolant of the discrete values $\{\rho_\tau^n\}_{n=0}^{N_\tau}$ introduced by De Giorgi, namely the \emph{variational interpolant} $\pwM\rho\tau : [0,T]\to X$, which is defined in the following way: the map $t\mapsto \pwM \rho\tau(t)$ is Lebesgue measurable in $(0, T )$ and satisfies \begin{equation} \label{interpmin} \begin{cases}\quad \pwM \rho\tau(0)=\rho^\circ, \quad \text{and, for } t=t_{\tau}^{n-1} + r \in (t_{\tau}^{n-1}, t_{\tau}^{n}], \\\quad \pwM \rho\tau(t) \in \displaystyle \mathop{\rm argmin}_{\mu \in X} \left\{ \mathscr{W}(r, \rho_\tau^{n-1}, \mu) +\mathscr E(\mu)\right\} \end{cases} \end{equation} The existence of a measurable selection is guaranteed by \cite[Cor. III.3, Thm. III.6]{Castaing-Valadier77}. \par It is natural to introduce the following extension of the notion of \emph{(Generalized) Minimizing Movement}, which is typically given in a metric setting \cite{Ambr95MM,AmbrosioGigliSavare08}. For simplicity, we will continue to use the classical terminology. \begin{definition} \label{def:GMM} We say that a curve $\rho: [0,T] \to X$ is a \emph{Generalized Minimizing Movement} for the energy functional $\mathscr E$ starting from the initial datum $\rho^\circ\in \mathrm{D}(\mathscr E)$, if there exist a sequence of partitions with fineness $(\tau_k)_k$, $\tau_k\downarrow 0$ as $k\to\infty$, and, correspondingly, a sequence of discrete solutions $(\piecewiseConstant \rho {\tau_k})_k$ such that, as $k\to\infty$, \begin{equation} \label{sigma-conv-GMM} \piecewiseConstant \rho {\tau_k}(t) \weaksigmatoabs \rho(t) \qquad \text{for all } t \in [0,T]. 
\end{equation} We shall denote by $\GMM{\mathscr E,\mathscr{W}}{\rho^\circ}$ the collection of all Generalized Minimizing Movements for $\mathscr E$ starting from $\rho^\circ$. \end{definition} We can now state the main result of this section. \begin{theorem} \label{thm:construction-MM} Under \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}}, let the lower-semicontinuity Property \eqref{lscD} be satisfied. Then $\GMMT{\mathscr E,\mathscr{W}}{0}{T}{\rho^\circ} \neq \emptyset$ and every $\rho \in \GMMT{\mathscr E,\mathscr{W}}{0}{T}{\rho^\circ}$ satisfies the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance (Definition~\ref{def:R-Rstar-balance}). \end{theorem} Throughout Sections \ref{ss:aprio}--\ref{ss:compactness} we will first prove an abstract version of this theorem as Theorem~\ref{thm:abstract-GMM} below, under \textbf{Assumption~\ref{ass:abstract}}. Indeed, therein we could `move away' from the context of the `concrete' gradient structure for the Markov processes, and carry out our analysis in a general topological setup (cf.\ Remark \ref{rmk:generaliz-topol} ahead). In Section~\ref{ss:pf-of-existence} we will `return' to the problem under consideration and deduce the proof of Theorem\ \ref{thm:construction-MM} from Theorem\ \ref{thm:abstract-GMM}. \subsection{Moreau-Yosida approximation and generalized slope} \label{ss:aprio} Preliminarily, let us observe some straightforward consequences of the properties of the transport cost: \begin{enumerate} \item the `generalized triangle inequality' from \eqref{e:psi3} entails that for all $m \in \mathbb{N}$, for all $(m+1)$-ples $(t, t_1, \ldots, t_m) \in (0,{+\infty})^{m+1}$, and all $(\rho_0, \rho_1, \ldots, \rho_m) \in X^{m+1}$, we have \begin{equation} \label{eq1} \mathscr{W}(t,\rho_0,\rho_{m}) \leq \sum_{k=1}^{m} \mathscr{W}(t_k,\rho_{k-1},\rho_{k}) \qquad \text{if\, $t=\sum_{k=1}^m t_k$.} \end{equation} \item Combining \eqref{e:psi2} and \eqref{e:psi3} we deduce that \begin{equation} \label{monotonia} \mathscr{W}(t,\rho,\mu) \leq \mathscr{W}(s,\rho,\mu) \quad \text{ for all } 0<s<t \text{ and for all } \rho, \mu \in X. \end{equation} \end{enumerate} In the context of metric gradient-flow theory, the `Moreau-Yosida approximation' (see e.g.~\cite[Ch.~7]{Brezis11} or~\cite[Def.~3.1.1]{AmbrosioGigliSavare08}) provides an approximation of the driving functional that is finite and sub-differentiable everywhere, and can be used to define a generalized slope. We now construct the analogous objects in the situation at hand. Given $r>0$ and $\rho \in X$, we define the subset $J_r(\rho)\subset X$ by $$ J_r(\rho) := \mathop{\rm argmin}_{\mu \in X} \Bigl\{ \mathscr{W}(r,\rho,\mu) + \mathscr E(\mu)\Bigr\} $$ (by Lemma \ref{lemma:exist-probl-increme}, this set is non-empty) and define \begin{equation} \label{def:gen} \gen(r,\rho):= \inf_{\mu \in X} \left\{ \mathscr{W}(r,\rho,\mu) + \mathscr E(\mu)\right\}= \mathscr{W}(r,\rho, \rho_r) + \mathscr E(\rho_r) \quad \forall\, \rho_r \in J_r(\rho). \end{equation} In addition, for all $\rho \in {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}(\mathscr E)$, we define the \emph{generalized slope}\begin{equation} \label{def:nuovo} \mathscr{S}(\rho):= \limsup_{r \downarrow 0}\frac{\mathscr E(\rho) -\gen(r,\rho) }{r} = \limsup_{r \downarrow 0}\frac{\sup_{\mu \in X} \left\{ \mathscr E(\rho) -\mathscr{W}(r,\rho,\mu) -\mathscr E(\mu)\right\} }{r}\,. 
\end{equation} Recalling the \emph{duality formula} for the local slope (cf.\ \cite[Lemma 3.15]{AmbrosioGigliSavare08}) and the fact that $\mathscr{W}(\tau,\cdot, \cdot)$ is a proxy for $\frac1{2\tau}d^2(\cdot, \cdot)$, it is immediate to recognize that the generalized slope is a surrogate of the local slope. Furthermore, as we will see, its definition is tailored to the validity of Lemma \ref{lemma-my-1} ahead. Heuristically, the generalized slope $\mathscr{S}(\rho)$ coincides with the Fisher information $\mathscr{D}(\rho) = \mathscr R^*(\rho,-{\mathrm D}\mathscr E(\rho))$. This can be recognized, again heuristically, by fixing a point $\rho_0$ and considering curves $\rho_t := \rho_0 -t\odiv {\boldsymbol j} $, for a class of fluxes ${\boldsymbol j} $. We then calculate \begin{align*} \mathscr R^*(\rho_0,-{\mathrm D}\mathscr E(\rho_0)) &= \sup_{{\boldsymbol j} }\, \bigl\{ -{\mathrm D}\mathscr E(\rho_0)\cdot {\boldsymbol j} - \mathscr R(\rho_0,{\boldsymbol j} )\bigr\}\\ &= \sup_{\boldsymbol j} \lim_{r\to0} \frac1r \biggl\{ \mathscr E(\rho_0) - \mathscr E(\rho_r) - \int_0^r \mathscr R(\rho_t,{\boldsymbol j} )\, \mathrm{d} t\biggr\}. \end{align*} In Theorem~\ref{th:slope-Fish} below we rigorously prove that $\mathscr{S}\geq \mathscr{D}$ using this approach. The following result collects some properties of $\genn r$ and $\mathscr{S}$. \begin{lemma} \label{lemma-my-1} For all $\rho \in {\mathrm D}(\mathscr E)$ and for every selection $ \rho_r \in J_r(\rho)$ \begin{align} & \label{gen1} \gen(r_2, \rho) \leq \gen(r_1,\rho) \leq \mathscr E(\rho) \quad \text{for all } 0 <r_1<r_2; \\ & \label{gen2} \rho_r \weaksigmatoabs\rho \ \text{as $r \downarrow 0$,} \quad \mathscr E(\rho)= \lim_{r \downarrow 0} \gen(r,\rho); \\ & \label{gen3} \frac{\rm d}{{\rm d}r} \gen(r,\rho) \leq - \mathscr{S}(\rho_r) \quad \text{for a.e.\ }\ r>0. \end{align} In particular, for all $\rho \in {\mathrm D}(\mathscr E)$ \begin{align} & \label{nuovopos} \mathscr{S}(\rho) \geq 0 \quad \ \text{and} \\ & \label{ineqenerg} \mathscr{W}(r_0, \rho, \rho_{r_0}) + \int_{0}^{r_0} \mathscr{S}(\rho_r) \, {\rm d}r \leq \mathscr E(\rho) -\mathscr E(\rho_{r_0}) \end{align} for every $ r_{0}>0$ and $ \rho_{r_0} \in J_{r_0}(\rho)$. \end{lemma} \begin{proof} Let $r>0$, $\rho\in {\mathrm D}(\mathscr E)$, and $\rho_r\in J_r(\rho)$. It follows from \eqref{def:gen} and \eqref{e:psi2} that \begin{equation} \label{eq3} \gen(r,\rho)= \mathscr{W}(r,\rho,\rho_r) + \mathscr E(\rho_r) \leq \mathscr{W}(r,\rho,\rho) + \mathscr E(\rho) = \mathscr E(\rho) \quad \forall \, r>0, \rho \in X; \end{equation} in the same way, one checks that for all $\rho\in X$ and $0 <r_1<r_2$, \[ \gen(r_2,\rho) -\gen(r_1, \rho) \leq \mathscr{W}(r_2, \rho_{r_1}, \rho) + \mathscr E(\rho_{r_1}) -\mathscr{W}(r_1, \rho_{r_1}, \rho) - \mathscr E(\rho_{r_1}) \stackrel{\eqref{monotonia}}\leq 0, \] which implies \eqref{gen1}. Thus, the map $r \mapsto \gen(r,\rho)$ is non-increasing on $(0,{+\infty})$, and hence almost everywhere differentiable. Let us fix a point of differentiability $r>0$.
For $h>0$ and $\rho_r \in J_r (\rho)$ we then have \begin{align*} \frac{\gen(r+h,\rho)-\gen(r,\rho)}{h} & = \frac1h\, {\inf_{v \in X} \Bigl\{\mathscr{W}(r+h, \rho,v) +\mathscr E(v) -\mathscr{W}(r, \rho,\rho_r) -\mathscr E(\rho_r) \Bigr\}} \\ & \leq \frac1h \, {\inf_{v \in X} \Bigl\{\mathscr{W}(h, \rho_r,v) +\mathscr E(v) -\mathscr E(\rho_r) \Bigr\}}, \end{align*} the latter inequality due to \eqref{e:psi3}, so that $$ \begin{aligned} \frac{\rm d}{{\rm d}r} \gen(r,\rho) & \leq \liminf_{h \downarrow 0} \,\frac1h \,{\inf_{v \in X} \Bigl\{\mathscr{W}(h, \rho_r,v) +\mathscr E(v) -\mathscr E(\rho_r) \Bigr\}} \\ & = - \limsup_{h \downarrow 0}\, \frac1h \, {\sup_{v \in X} \Bigl\{-\mathscr{W}(h, \rho_r,v) -\mathscr E(v) +\mathscr E(\rho_r) \Bigr\}}, \end{aligned} $$ whence \eqref{gen3}. Finally, \eqref{eq3} yields that, for any $\rho \in {\mathrm D}} \def\rmE{{\mathrm E}} \def\rmF{{\mathrm F}(\mathscr E)$ and any selection $\rho_r \in J_r(\rho)$, one has $ \sup_{r>0} \mathscr{W}(r,\rho,\rho_r) <+ \infty .$ Therefore, \eqref{e:psi4} entails the first convergence in \eqref{gen2}. Furthermore, we have $$ \mathscr E(\rho) \geq \limsup_{r \downarrow 0} \gen(r,\rho)\geq \liminf_{r \downarrow 0} \left(\mathscr{W}(r,\rho,\rho_r) + \mathscr E(\rho_r)\right) \geq \liminf_{r \downarrow 0}\mathscr E(\rho_r)\geq \mathscr E(\rho), $$ where the first inequality again follows from \eqref{eq3}, and the last one from the $\sigma$-lower semicontinuity of $\mathscr E$. This implies the second statement of~\eqref{gen2}. \end{proof} \subsection{\emph{A priori} estimates} Our next result collects the basic estimates on the discrete solutions. In order to properly state it, we need to introduce the `density of dissipated energy' associated with the interpolant $\piecewiseConstant \rho{\tau}$, namely the piecewise constant function $\piecewiseConstant {\mathsf{W}}{\tau}:[0,T] \to [0,{+\infty})$ defined by \begin{align} \notag \piecewiseConstant {\mathsf{W}}{\tau}(t)&:= \frac{\mathscr{W}(t_{\tau}^{n}-t_{\tau}^{n-1}, \rho_\tau^{n-1}, \rho_\tau^{n})}{t_{\tau}^{n}-t_{\tau}^{n-1}} \quad t\in (t_{\tau}^{n-1}, t_{\tau}^n], \quad n=1,\ldots, {N_\tau}, \\ \text{so that}\quad \int_{t_{\tau}^{j-1}}^{t_{\tau}^n}\piecewiseConstant {\mathsf{W}}{\tau}(t)\, {\rm d}t&= \sum_{k=j}^n \mathscr{W}(t_{\tau}^{k}-t_{\tau}^{k-1}, \rho_\tau^{k-1}, \rho_\tau^{k}) \quad \text{for all } 1 \leq j < n \leq {N_\tau}. 
\label{density-W} \end{align} \begin{prop}[Discrete energy-dissipation inequality and \emph{a priori} estimates] We have \begin{align} \label{discr-enineq-var} & \mathscr{W}(t-\underpiecewiseConstant \mathsf{t}{\tau}(t), \underpiecewiseConstant \rho{\tau}(t), \pwM {\rho}{\tau}(t)) + \int_{\underpiecewiseConstant \mathsf{t}{\tau}(t)}^{t} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\pwM {\rho}{\tau}(t))\leq \mathscr E(\underpiecewiseConstant \rho{\tau}(t)) \quad \text{for all } 0 \leq t \leq T\,, \\ & \label{discr-enineq} \int_{\underpiecewiseConstant \mathsf{t}{\tau}(s)}^{\piecewiseConstant \mathsf{t}{\tau}(t)} \piecewiseConstant {\mathsf{W}}{\tau}(r)\, {\rm d}r + \int_{\underpiecewiseConstant \mathsf{t}{\tau}(s)}^{\piecewiseConstant \mathsf{t}{\tau}(t)} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\piecewiseConstant \rho\tau(t)) \leq \mathscr E(\underpiecewiseConstant \rho\tau(s)) \qquad \text{for all } 0\leq s \leq t \leq T\,, \end{align} and there exists a constant $C>0$ such that for all $\tau>0$ \begin{gather} \label{est-diss} \int_0^T \piecewiseConstant{\mathsf{W}}\tau (t)\, \mathrm{d} t \leq C, \qquad \int_0^T \mathscr{S}(\pwM {\rho}{\tau}(t)) \, {\rm d}t \leq C. \end{gather} Finally, there exists a $\sigma$-sequentially compact subset $K\subset X$ such that \begin{equation}\label{aprio1} \piecewiseConstant \rho{\tau}(t),\, \underpiecewiseConstant \rho{\tau}(t),\, \pwM \rho{\tau}(t)\, \in K \quad \text{$\forall\, t \in [0,T]$ and $\tau >0$}. \end{equation} \end{prop} \begin{proof} From \eqref{ineqenerg} we directly deduce, for $t \in (t_{\tau}^{j-1}, t_{\tau}^j]$, \begin{equation} \label{eq4} \mathscr{W}(t-t_{\tau}^{j-1}, \rho_\tau^{j-1}, \pwM {\rho}{\tau}(t)) + \int_{t_{\tau}^{j-1}}^{t} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\pwM {\rho}{\tau}(t))\leq \mathscr E(\rho_\tau^{j-1}), \end{equation} which implies \eqref{discr-enineq-var}; in particular, for $t= t_{\tau}^j$ one has \begin{equation} \label{eq4bis} \int_{t_{\tau}^{j-1}}^{t_{\tau}^j}\piecewiseConstant {\mathsf{W}}{\tau}(t)\, {\rm d}t + \int_{t_{\tau}^{j-1}}^{t_{\tau}^j} \mathscr{S}(\pwM {\rho}{\tau}(t)) \, {\rm d}t +\mathscr E(\rho_\tau^{j})\leq \mathscr E(\rho_\tau^{j-1}). \end{equation} The estimate~\eqref{discr-enineq} follows upon summing \eqref{eq4bis} over the index $j$. Furthermore, applying \eqref{eq1}--\eqref{monotonia} one deduces for all $1 \leq n \leq N_\tau$ that \begin{equation} \label{est-basis} \mathscr{W}(n\tau,\rho_0,\rho_\tau^{n}) +\mathscr E(\rho_\tau^{n}) \leq \int_0^{t_{\tau}^n}\piecewiseConstant {\mathsf{W}}{\tau}(r)\, {\rm d}r + \int_0^{t_{\tau}^n} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\rho_\tau^{n}) \leq \mathscr E(\rho_0). \end{equation} In particular, \eqref{est-diss} follows, as well as $\sup_{n=0,\ldots,N_\tau} \mathscr E(\rho_\tau^{n}) \leq C$. Then, \eqref{eq4} also yields $\sup_{t\in [0,T]} \mathscr E(\pwM {\rho}{\tau}(t)) \leq C$. Next we show the two estimates \begin{align} \label{est1} \mathscr{W}(2T, \rho^*,\piecewiseConstant \rho{\tau}(t)) +\mathscr E(\piecewiseConstant \rho{\tau}(t)) \leq C, \\ \label{est2} \mathscr{W}(2T, \rho^*, \pwM {\rho}{\tau}(t)) + \mathscr E(\pwM {\rho}{\tau}(t)) \leq C \,. \end{align} Recall that $\rho^*$ is introduced in Assumption~\ref{ass:abstract}. To deduce \eqref{est1}, we use the triangle inequality for $\mathscr{W}$. Preliminarily, we observe that $ \mathscr{W}(t, \rho^*, \rho_0) <{+\infty}$ for all $t>0$. 
In particular, let us fix an arbitrary $ m \in \{1,\ldots, N_\tau\}$ and let $C^*: = \mathscr{W}(t_{\tau}^{ m}, \rho^*, \rho_0) $. We have, for any $n$, \begin{align*} \mathscr{W}(2T, \rho^*,\rho_\tau^n) &\leq \mathscr{W}(2T-t_{\tau}^n, \rho^*, \rho_0) +\mathscr{W}(t_{\tau}^n, \rho_0,\rho_\tau^n) \stackrel{(1)}{\leq} \mathscr{W}(t_{\tau}^{ m}, \rho^*, \rho_0) +\mathscr{W}(t_{\tau}^n, \rho_0,\rho_\tau^n) \\ & \leq C^* +\mathscr{W}(t_{\tau}^n, \rho_0,\rho_\tau^n) \quad \text{for all } n \in \{1, \ldots,N_\tau\}, \end{align*} where for (1) we have used that $ \mathscr{W}(2T-t_{\tau}^n, \rho^*, \rho_0) \leq \mathscr{W}(t_{\tau}^{ m}, \rho^*, \rho_0) $ since $2T- t_{\tau}^n \geq t_{\tau}^{ m} $. Thus, in view of \eqref{est-basis} we deduce \begin{align} \notag \mathscr{W}(2T, \rho^*,\piecewiseConstant \rho{\tau}(t)) +\mathscr E(\piecewiseConstant \rho{\tau}(t)) &\leq C^* +\mathscr{W}(\piecewiseConstant \mathsf{t} \tau(t), \rho_0, \piecewiseConstant \rho{\tau}(t) ) +\mathscr E(\piecewiseConstant \rho{\tau}(t)) \\ &\leq C^* +\mathscr E(\rho_0) \leq C \quad \text{for all } t \in [0, T]\,, \label{eq5} \end{align} i.e.\ the desired \eqref{est1}. \par Likewise, adding \eqref{eq4} and \eqref{eq4bis} one has $ \mathscr{W}(t, \rho_0, \pwM {\rho}{\tau}(t)) + \mathscr E(\pwM {\rho}{\tau}(t)) \leq \mathscr E(\rho_0) $, whence \eqref{est2} with arguments similar to those in the previous lines. \end{proof} \subsection{Compactness result} \label{ss:compactness} The main result of this section, Theorem \ref{thm:abstract-GMM} below, states that $\GMMT{\mathscr E,\mathscr{W}}0T{\rho^\circ} $ is non-empty, and that any curve $\rho \in \GMMT{\mathscr E,\mathscr{W}}0T{\rho^\circ}$ fulfills an `abstract' version \eqref{limit-enineq} of the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation estimate~\eqref{EDineq}, obtained by passing to the limit in the discrete inequality~\eqref{discr-enineq}. We recall the $\mathscr{W}$-action of a curve $\rho:[0,T]\to X$, defined in~\eqref{def-tot-var} as \begin{equation*} \VarW \rho ab: = \sup \left \{ \sum_{j=1}^M \DVT{t^j - t^{j-1}}{\rho(t^{j-1})}{\rho(t^j)} \, : \ (t^j)_{j=0}^M \in \mathfrak{P}_f([a,b]) \right\} \end{equation*} for all $[a,b]\subset [0,T]$, where $\mathfrak{P}_f([a,b])$ is the set of all finite partitions of the interval $[a,b]$. We also introduce the \emph{relaxed generalized slope} $\mathscr{S}^-: {\mathrm D}(\mathscr E) \to [0,{+\infty}]$ of the driving energy functional $\mathscr E$, namely the relaxation of the generalized slope $\mathscr{S}$ along sequences with bounded energy: \begin{equation} \label{relaxed-nuovo} \mathscr{S}^-(\rho) : = \inf\biggl\{ \liminf_{n\to\infty} \mathscr{S}(\rho_n) \, : \ \rho_n\weaksigmatoabs \rho, \ \sup_{n\in \mathbb{N}} \mathscr E(\rho_n) <{+\infty}\biggr\}\,. \end{equation} We are now in a position to state and prove the `abstract version' of Theorem \ref{thm:construction-MM}. \begin{theorem} \label{thm:abstract-GMM} Under \textbf{Assumption~\ref{ass:abstract}}, let $\rho^\circ \in \mathrm{D}(\mathscr E)$.
Then, for every vanishing sequence $(\tau_k)_k$ there exist a (not relabeled) subsequence and a $\sigma$-continuous curve $\rho : [0,T]\to X$ such that $\rho(0) = \rho^\circ$, and \begin{equation} \label{convergences-interpolants} \piecewiseConstant \rho{\tau_k}(t),\, \underpiecewiseConstant \rho{\tau_k}(t),\, \pwM \rho{\tau_k}(t) \weaksigmatoabs\rho(t) \qquad \text{for all } t \in [0,T], \end{equation} and $\rho$ satisfies the Energy-Dissipation estimate \begin{equation} \label{limit-enineq} \VarW \rho0t + \int_0^t \mathscr{S}^-(\rho(r)) \mathrm{d} r +\mathscr E(\rho(t)) \leq \mathscr E(\rho_0) \qquad \text{for all } t \in [0,T]. \end{equation} \end{theorem} \begin{remark} \label{rmk:generaliz-topol} \upshape Theorem \ref{thm:abstract-GMM} could be extended to a topological space where the cost $\mathscr{W}$ and the energy functional $\mathscr E$ satisfy the properties listed at the beginning of the section. \end{remark} \begin{proof} Consider a sequence $\tau_k \downarrow 0$ as $k\to\infty$. \emph{Step 1: Construct the limit curve $\ol\rho$. } We first define the limit curve $\ol\rho$ on the set $A: = \{0\} \cup N$, with $N$ a countable dense subset of $(0,T]$. Indeed, in view of \eqref{aprio1}, with a diagonalization procedure we find a function $\overline\rho : A \to X$ and a (not relabeled) subsequence such that \begin{equation} \label{step1-constr} \piecewiseConstant \rho{\tau_k}(t) \weaksigmatoabs\overline\rho(t) \quad \text{for all } t \in A \quad \text{and} \quad \overline\rho(t) \in K \text{ for all } t \in A . \end{equation} In particular, $ \overline\rho(0)=\rho^\circ$. We next show that $\ol\rho$ can be uniquely extended to a $\sigma$-continuous curve $\ol\rho:[0,T]\to X$. Let $s,t\in A$ with $s<t$. By the lower-semicontinuity property~\eqref{lower-semicont} we have \begin{align*} \DVT {t-s} {\ol\rho(s)} {\ol\rho(t)} &\leq\liminf_{k\to\infty} \DVT {t-s} {\piecewiseConstant\rho{\tau_k}(s)} {\piecewiseConstant\rho{\tau_k}(t)} \stackrel{\eqref{density-W}}\leq\liminf_{k\to\infty} \int_{\underpiecewiseConstant {\mathsf{t}}{\tau_{k}} (s)}^{\piecewiseConstant {\mathsf{t}}{\tau_{k}} (t)} \piecewiseConstant {\mathsf{W}}{\tau_{k}} (r) \,\mathrm{d} r\\ & \stackrel{(1)}{\leq} \liminf_{k\to\infty} \mathscr E(\underpiecewiseConstant \rho{\tau_{k}} (s) ) \stackrel{(2)}{\leq} \mathscr E(\rho_0), \end{align*} where {(1)} follows from \eqref{discr-enineq} (using the lower bound on $\mathscr E$), and {(2)} is due to the fact that $t\mapsto \mathscr E(\underpiecewiseConstant \rho{\tau_{k}}(t))$ is nonincreasing. By the property~\eqref{e:psi6} of $\mathscr{W}$, this estimate is a form of uniform continuity of $\ol\rho$, and we now use this to extend $\ol\rho$. Fix $t\in [0,T]\setminus A$, and choose a sequence $t_m\in A$, $t_m\to t$, with the property that $\ol\rho(t_m)$ $\sigma$-converges to some $\tilde\rho$. For any sequence $s_m\in A$, $s_m\to t$, we then have \[ \sup_{m} \DVT {|t_m-s_m|}{\ol\rho(s_m)}{\ol\rho(t_m)} < {+\infty}, \] and since $|t_m-s_m|\to0$, property~\eqref{e:psi6} implies that ${\ol\rho(s_m)}\weaksigmatoabs \tilde \rho$. This implies that along any converging sequence $t_m\in A$, $t_m\to t$ the sequence $\ol\rho(t_m)$ has the same limit; therefore there is a unique extension of $\ol\rho$ to $[0,T]$, that we again indicate by $\ol\rho$.
By again applying the lower-semicontinuity property~\eqref{lower-semicont} we find that \[ \DVT {|t-s|}{\ol\rho(s)}{\ol\rho(t)} \leq \mathscr E(\rho_0) \qquad \text{for all }t,s\in [0,T], \ s\not=t, \] and therefore the curve $[0,T]\ni t\mapsto \ol\rho(t)$ is $\sigma$-continuous. \emph{Step 2: Show convergence at all $t\in [0,T]$.} Now fix $t\in [0,T]$; we show that $\piecewiseConstant \rho{\tau_k}(t)$, $\underpiecewiseConstant\rho{\tau_k}(t)$, and $\pwM\rho{\tau_k}(t)$ each converge to $\ol\rho(t)$. Since $\piecewiseConstant \rho{\tau_k}(t)\in K$, there exists a convergent subsequence $\piecewiseConstant\rho{\tau_{k_j}}(t)\weaksigmatoabs\tilde \rho$. Take any $s\in A$ with $s\not=t$. Then \begin{align*} \DVT {|t-s|}{\tilde \rho}{\ol\rho(s)} \leq \liminf_{j\to\infty} \DVT {|t-s|}{\piecewiseConstant\rho{\tau_{k_j}}(t)}{\piecewiseConstant\rho{\tau_{k_j}}(s)} \leq \mathscr E(\rho_0)\leq C, \end{align*} by the same argument as above. Taking the limit $s\to t$, property~\eqref{e:psi6} and the continuity of $\ol\rho$ imply $\tilde\rho= \ol\rho(t)$. Therefore $\piecewiseConstant\rho{\tau_{k_j}}(t)\weaksigmatoabs \ol\rho(t)$ along each subsequence $\tau_{k_j}$, and consequently also along the whole sequence $\tau_k$. Estimates \eqref{discr-enineq-var} \& \eqref{discr-enineq} also give at each $t\in (0,T]$ \[ \limsup_{k\to\infty} \mathscr{W}(t-\underpiecewiseConstant \mathsf{t}{\tau_k}(t), \underpiecewiseConstant \rho{\tau_k}(t), \piecewiseConstant {\rho}{\tau_k}(t)) \leq \mathscr E(\rho_0), \qquad \limsup_{k\to\infty} \mathscr{W}(t-\underpiecewiseConstant \mathsf{t}{\tau_k}(t), \underpiecewiseConstant \rho{\tau_k}(t), \pwM {\rho}{\tau_k}(t)) \leq \mathscr E(\rho_0), \] so that, again using the compactness information provided by \eqref{aprio1} and property \eqref{e:psi6} of the cost $\mathscr{W}$, it is immediate to conclude \eqref{convergences-interpolants}. \emph{Step 3: Derive the energy-dissipation estimate.} Finally, let us observe that \begin{equation} \label{liminf-var} \liminf_{k\to\infty} \int_0^{\piecewiseConstant {\mathsf{t}}{\tau_k}(t)} \piecewiseConstant {\mathsf{W}}{\tau_k}(r) \mathrm{d} r \geq \VarW \rho0t \quad \text{for all } t \in [0,T]. \end{equation} Indeed, for any partition $\{ 0=t^0<\ldots <t^j <\ldots<t^M = t\}$ of $[0,t]$ we find that \begin{align*} \sum_{j=1}^{M} \DVT{t^j-t^{j-1}}{\rho(t^{j-1})}{\rho(t^j)} &\stackrel{(1)}{\leq} \liminf_{k\to\infty} \sum_{j=1}^{M} \DVT{\piecewiseConstant {\mathsf{t}}{\tau_k}(t^j)-{\piecewiseConstant {\mathsf{t}}{\tau_k}(t^{j-1})}}{\piecewiseConstant \rho{\tau_k}(t^{j-1})}{\piecewiseConstant \rho{\tau_k}(t^j)} \\ &= \liminf_{k\to\infty} \int_0^{\piecewiseConstant {\mathsf{t}}{\tau_k}(t)} \piecewiseConstant {\mathsf{W}}{\tau_k}(r) \,\mathrm{d} r, \end{align*} with (1) due to \eqref{lower-semicont}. Then \eqref{liminf-var} follows by taking the supremum over all partitions. On the other hand, by Fatou's Lemma we find that \[ \liminf_{k\to\infty} \int_0^{\piecewiseConstant {\mathsf{t}}{\tau_k}(t)} \mathscr{S} (\pwM \rho{\tau_k}(r)) \,\mathrm{d} r \geq \int_0^t \mathscr{S}^-(\rho(r)) \mathrm{d} r, \] while the lower semicontinuity of $\mathscr E$ gives \[ \liminf_{k\to\infty} \mathscr E(\piecewiseConstant \rho{\tau_k}(t)) \geq \mathscr E(\rho(t)) \] so that \eqref{limit-enineq} follows from taking the $\liminf_{k\to\infty}$ in \eqref{discr-enineq} for $s=0$. 
\end{proof} \subsection{Proof of Theorem~\ref{thm:construction-MM}} \label{ss:pf-of-existence} Having established the abstract compactness result of Theorem~\ref{thm:abstract-GMM}, we now apply this to the proof of Theorem~\ref{thm:construction-MM}. As described above, under \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}} the conditions of Theorem~\ref{thm:abstract-GMM} are fulfilled, and Theorem~\ref{thm:abstract-GMM} provides us with a curve $\rho:[0,T]\to\mathcal{M}^+(V)$ that is continuous with respect to setwise convergence such that \begin{equation} \label{ineq:pf-abs-to-concrete} \VarW \rho 0t+ \int_0^t \mathscr{S}^-(\rho(r)) \mathrm{d} r +\mathscr E(\rho(t)) \leq \mathscr E(\rho_0) \qquad \text{for all } t \in [0,T]. \end{equation} To conclude the proof of Theorem~\ref{thm:construction-MM}, we now show that the Energy-Dissipation inequality \eqref{EDineq} can be derived from \eqref{ineq:pf-abs-to-concrete}. We first note that Corollary~\ref{c:exist-minimizers} implies the existence of a flux ${\boldsymbol j}$ such that $(\rho,{\boldsymbol j})\in \CE 0T$ and $\VarW \rho 0T = \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t$. Then from Corollary~\ref{cor:cor-crucial} below, we find that $\mathscr{S}^-(\rho(r))\geq \mathscr{D}(\rho(r))$ for all $r \in [0,T]$. Combining these results with~\eqref{ineq:pf-abs-to-concrete} we find the required estimate~\eqref{EDineq}. It remains to prove the inequality $\mathscr{S}^-\geq \mathscr{D}$, which follows from the corresponding inequality $ \mathscr{S}\geq \mathscr{D}$ for the non-relaxed slope (Theorem~\ref{th:slope-Fish}) with the lower semicontinuity of $\mathscr{D}$ that is assumed in Theorem~\ref{thm:construction-MM}. This is the topic of the next section. \subsection{The generalized slope bounds the Fisher information} We recall the definition of the generalized slope $\mathscr{S}$ from~\eqref{def:nuovo}: \[ \mathscr{S}(\rho):= \limsup_{r \downarrow 0}\sup_{\mu \in X} \frac1r \Bigl\{ \mathscr E(\rho) -\mathscr E(\mu)-\mathscr{W}(r,\rho,\mu) \Bigr\} \,. \] Given the structure of this definition, the proof of the inequality $\mathscr{S}\geq \mathscr{D}$ naturally proceeds by constructing an admissible curve $(\rho, {\boldsymbol j})\in\CE0T$ such that $\rho\restr{t=0}=\rho$ and such that the expression in braces can be related to $\mathscr{D}(\rho)$. For the systems of this paper, the construction of such a curve faces three technical difficulties: the first is that $\rho$ needs to remain nonnegative, the second is that $\upphi'$ may be unbounded at zero, and the third is that the function $\mathrm{D}_\upphi(u,v)$ in~\eqref{eq:182} that defines $\mathscr{D}$ may be infinite when $u$ or $v$ is zero (see Example~\ref{ex:D}). We first prove a lower bound for the generalized slope $\mathscr{S}$ involving $\mathrm{D}^-_\upphi$, under the basic conditions on the $(\mathscr E, \mathscr R, \mathscr R^*)$ system presented in Section \ref{s:assumptions}. \begin{theorem} \label{th:slope-Fish} Assume \ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}.
Then \begin{equation} \label{ineq:nuovo-FI1} \mathscr{S} (\rho) \geq \frac12\iint_E \mathrm{D}^-_\upphi(u(x),u(y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \quad \text{for all } \rho=u\pi \in \mathrm{D}(\mathscr E). \end{equation} \end{theorem} \begin{proof} Let us fix $\rho_0=u_0\pi\in \mathrm{D}(\mathscr E)$, a bounded measurable skew-symmetric map \begin{displaymath} \xi:E\to \mathbb{R} \quad\text{with } \xi(y,x)=-\xi(x,y),\quad |\xi(x,y)|\le \Xi<\infty \quad\text{for every $(x,y)\in E$,} \end{displaymath} the Lipschitz functions $q(r):=\min(r, 2(r-1/2)_+)$ (approximating the identity far from $0$) and $h(r):=\max(0,\min(2-r, 1))$ (cutoff for $r\ge 2$), and the Lipschitz regularization of $\upalpha$ \begin{displaymath} \upalpha_\eps(u,v):=\eps q(\upalpha(u,v)/\eps). \end{displaymath} We introduce the field $\mathrm{G}_\eps:E\times\mathbb{R}_+^2\to \mathbb{R}$ \begin{equation} \label{eq:166} \mathrm{G}_\eps(x,y;u,v):=\xi(x,y)g_\eps(u,v)\,, \end{equation} where \[ g_\eps(u,v):=\upalpha_\eps(u,v)\,h(\eps \max(u, v))q(\min(1,\min(u,v)/\epsilon))\,, \] which vanishes if $\upalpha(u,v)<\eps/2$ or $\min(u,v)<\eps/2$ or $\max(u, v)\ge 2/\eps$, and coincides with $\upalpha$ if $\upalpha\ge \eps$, $\min(u,v)\ge \eps$, and $\max(u, v)\le 1/\eps$. Since $g_\eps$ is Lipschitz, it is easy to check that $\mathrm{G}_\eps$ satisfies all the assumptions (\ref{subeq:G}a,b,c,d) and also \eqref{eq:123a} for $a=0$, since $0=g_\eps(0,0)\le g_\eps(0,v)$ for every $v\ge 0$ and every $(x,y)\in E$. It follows that for every nonnegative $u_0\in L^1(V,\pi)$ there exists a unique nonnegative solution $u^\eps\in \rmC^1([0,\infty);L^1(V,\pi))$ of the Cauchy problem \eqref{eq:119-Cauchy} induced by $\mathrm{G}_\eps$ with initial datum $u_0$ and the same total mass. Henceforth, we set $\rho_t^\eps = u_t^\eps\pi$ for all $t\ge 0$. Setting $2{\boldsymbol j}_t^\eps(\mathrm{d} x,\mathrm{d} y):=w_t^\eps(x,y) \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) $, where $w_t^\eps(x,y):=\mathrm{G}_\eps(x,y;u_t(x),u_t(y))$, it is also easy to check that $(\rho^\eps,{\boldsymbol j}^\eps)\in \CER 0T$, since $g_\eps(u,v)\le \upalpha(u,v)$ and \begin{displaymath} |w_t^\eps(x,y)|\le |\xi| \upalpha(u_t^\eps(x),u_t^\eps(y)){\raise.3ex\hbox{$\chi$}}_{U_\eps(t)}(x,y)\qquad \text{for $(x,y)\in E$}\,, \end{displaymath} where $U_\eps(t):=\{(x,y)\in E: g_\epsilon(u_t^\eps(x),u_t^\eps(y))>0\}$, thereby yielding \[ \Upsilon(u_t^\eps(x),u_t^\eps(y),w_t^\eps(x,y))\le \Psi(\Xi)\upalpha(2/\eps,2/\eps)\,. \] Finally, recalling \eqref{eq:102} and \eqref{eq:105}, we get \begin{displaymath} |\rmB_\upphi(u_t^\eps(x),u_t^\eps(y),w_t^\eps(x,y))| \le \Xi \big(\upphi'(2/\eps)-\upphi'(\eps/2) \big) \upalpha(2/\eps,2/\eps).
\end{displaymath} Thus, we can apply Theorem \ref{th:chain-rule-bound} obtaining \begin{equation} \label{eq:167} \mathscr E(\rho_0)-\mathscr E(\rho_\tau^\eps)= -\frac{1}{2}\int_0^\tau \iint_E\rmB_\upphi(u_t^\eps(x),u_t^\eps(y),w_t^\eps(x,y))\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t, \end{equation} and consequently \begin{equation} \label{eq:168} \begin{aligned} \mathscr{S}(\rho_0)&\ge \limsup_{\tau\downarrow0}\tau^{-1} \Big(\mathscr E(\rho_0)-\mathscr E(\rho_\tau^\eps)- \int_0^\tau \mathscr R(\rho_t^\eps,{\boldsymbol j}_t^\eps)\,\mathrm{d} t\Big) \\&= \frac{1}{2}\iint_E \Big(\rmB_\upphi(u_0(x),u_0(y),w_0^\eps(x,y))- \Upsilon(u_0(x),u_0(y), w_0^\eps(x,y))\Big)\,\boldsymbol \teta(\mathrm{d} x,\mathrm{d} y). \end{aligned} \end{equation} Let us now set $\Delta_k$ to be the truncation of $\upphi'(u_0(x))-\upphi'(u_0(y))$ to $[-k,k]$, i.e. \[ \Delta_k(x,y):=\max\Bigl\{-k,\min\bigl[k, \upphi'(u_0(x))-\upphi'(u_0(y))\bigr]\Bigr\}\,, \] and $\xi_k(x,y):= (\Psi^*)'(\Delta_k(x,y))$ for each $k\in\mathbb{N}$. Notice that $\xi_k$ is a bounded measurable skew-symmetric map satisfying $|\xi_k(x,y)|\le k$ for every $(x,y)\in E$ and $k\in\mathbb{N}$. Therefore, inequality \eqref{eq:168} holds for $w_0^\eps(x,y) = \xi_k(x,y)\,g_\eps(u_0(x),u_0(y))$, $(x,y)\in E$. We then observe from Lemma~\ref{le:trivial-but-useful}\ref{le:trivial-but-useful:ineq} that \begin{equation} \label{eq:171} \begin{aligned} (\upphi'(u_0(x))-\upphi'(u_0(y)))\cdot \xi_k(x,y)&\ge \Delta_k(x,y)\xi_k(x,y) \\ &= \Psi(\xi_k(x,y))+\Psi^*(\Delta_k(x,y))\,, \end{aligned} \end{equation} and from $g_\epsilon(u,v)\le \upalpha(u,v)$ that \begin{equation} \label{eq:172} \begin{aligned} \Upsilon(u_0(x),u_0(y), w_0^\eps(x,y)) &=\Psi\left(\frac{\xi_k(x,y) g_\eps(u_0(x),u_0(y))}{\upalpha(u_0(x),u_0(y))}\right) \upalpha(u_0(x),u_0(y)) \\ &\le \Psi(\xi_k(x,y))\upalpha(u_0(x),u_0(y))\,. \end{aligned} \end{equation} Substituting these bounds in \eqref{eq:168} and passing to the limit as $\eps\downarrow0$ we obtain \begin{equation} \label{eq:173} \mathscr{S}(\rho_0)\ge \frac{1}{2}\iint_E \Psi^*(\Delta_k(x,y))\upalpha(u_0(x),u_0(y))\, \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y)\,. \end{equation} We eventually let $k\uparrow\infty$ and obtain \eqref{ineq:nuovo-FI1}. \end{proof} In the next proposition we finally bound $\mathscr{S}$ from below by the Fisher information, by relying on the existence of a solution to the $(\mathscr E,\mathscr R,\mathscr R^*)$ system, as shown in Section~\ref{s:ex-sg}. \begin{prop} \label{p:slope-geq-Fish} Let us suppose that for $\rho\in \mathrm{D}(\mathscr E)$ there exists a solution to the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. Then the generalized slope bounds the Fisher information from above: \begin{equation} \label{ineq:nuovo-FI} \mathscr{S} (\rho) \geq \mathscr{D} (\rho) \quad \text{for all } \rho \in \mathrm{D}(\mathscr E). \end{equation} \end{prop} \begin{proof} Let $\rho_t = u_t \pi$ be a solution to the $(\mathscr E,\mathscr R,\mathscr R^*)$ system with initial datum $\rho_0\in \mathrm{D}(\mathscr E)$.
Then, we can find a family $({\boldsymbol j}_t)_{t\ge 0}\in \mathcal{M}(E)$ such that $(\rho,{\boldsymbol j} )\in \CE0{+\infty}$ and \[ \mathscr E(\rho_t) +\int_0^t \bigl[\mathscr R(\rho_r,{\boldsymbol j}_r)+\mathscr{D}(\rho_r)\bigr] \,\mathrm{d} r = \mathscr E(\rho_0)\qquad \text{for all }t\geq0. \] Therefore \begin{align*} \mathscr{S}(\rho_0) &\geq \liminf_{t\downarrow 0} \frac1t \Bigl[ \mathscr E(\rho_0) - \mathscr E(\rho_t) - \DVT t{\rho_0}{\rho_t}\Bigr]\\ &\geq \liminf_{t\downarrow 0} \frac1t \Bigl[ \mathscr E(\rho_0) - \mathscr E(\rho_t) - \int_0^t \mathscr R(\rho_r,{\boldsymbol j}_r) \,\mathrm{d} r\Bigr] = \liminf_{t\downarrow 0} \frac1t \int_0^t \mathscr{D}(\rho_r)\, \mathrm{d} r\,. \end{align*} Since $u_t\to u_0$ in $L^1(V;\pi)$ as $t\to0$ and since $\mathscr{D}$ is lower semicontinuous with respect to $L^1(V,\pi)$-convergence (see the proof of Proposition~\ref{PROP:lsc}), with a change of variables we find \[ \mathscr{S}(\rho_0) \geq \liminf_{t\downarrow 0} \int_0^1 \mathscr{D}(\rho_{ts})\, \mathrm{d} s \geq \mathscr{D}(\rho_0). \qedhere \] \end{proof} We then easily get the desired lower bound for $\mathscr{S}^-$ in terms of $\mathscr{D}$, under the condition that the latter functional is lower semicontinuous (recall that Proposition \ref{PROP:lsc} provides sufficient conditions for the lower semicontinuity of $\mathscr{D}$): \begin{cor} \label{cor:cor-crucial} Let us suppose that \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, \ref{ass:S}} hold and that $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence. Then \begin{equation} \label{desired-rel} \mathscr{S}^- (\rho) \geq \mathscr{D} (\rho) \quad \text{for all } \rho \in \mathrm{D}(\mathscr E). \end{equation} \end{cor} \noindent \begin{remark} \label{rmk:Mark} The combination of Theorem \ref{th:slope-Fish}, Proposition \ref{p:slope-geq-Fish}, and Corollary~\ref{cor:cor-crucial} illustrates why we introduced both $\mathrm{D}_\upphi$ and $\mathrm{D}^-_\upphi$. For the duration of this remark, consider both the functional $\mathscr{D}$ that is defined in~\eqref{eq:def:D} in terms of $\mathrm{D}_\upphi$, and a corresponding functional~$\mathscr{D}^-$ defined in terms of the function $\mathrm{D}_\upphi^-$: \[ \mathscr{D}^- (\rho) := \displaystyle \frac12\iint_E \mathrm{D}^-_\upphi\bigl(u(x),u(y)\bigr)\, \boldsymbol \teta(\mathrm{d} x\,\mathrm{d} y) \qquad \text{for } \rho = u\pi\,. \] In the two guiding cases of Example~\ref{ex:Dpm}, $\mathrm{D}_\upphi$ is convex and lower semicontinuous, but $\mathrm{D}_\upphi^-$ is only lower semicontinuous. As a result, $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence, but $\mathscr{D}^-$ is not (indeed, consider e.g.\ a sequence $\rho_n$ converging setwise to $\rho$, with $\mathrm{d}\rho_n/\mathrm{d}\pi$ given by characteristic functions of some sets $A_n$, where the sets $A_n$ are chosen such that for the limit the density $\mathrm{d}\rho/\mathrm{d}\pi$ is strictly positive and non-constant; then $\mathscr{D}^-(\rho_n)=0$ for all $n$ while $\mathscr{D}^-(\rho)>0$).
Setwise lower semicontinuity of $\mathscr{D}$ is important for two reasons: first, this is required for stability of solutions of the Energy-Dissipation balance under convergence in some parameter (evolutionary $\Gamma$-convergence), which is a hallmark of a good variational formulation; and secondly, the proof of existence using the Minimizing-Movement approach requires the bound~\eqref{desired-rel}, for which $\mathscr{D}$ also needs to be lower semicontinuous. This explains the importance of $\mathrm{D}_\upphi$, and it also explains why we defined the Fisher information $\mathscr{D}$ in terms of $\mathrm{D}_\upphi$ and not in terms of $\mathrm{D}_\upphi^-$. On the other hand, $\mathrm{D}_\upphi^-$ is straightforward to determine, and in addition the weaker control of $\mathrm{D}_\upphi^-$ is still sufficient for the chain rule: it is $\mathrm{D}_\upphi^-$ that appears on the right-hand side of~\eqref{eq:CR2}. Note that if $\mathrm{D}_\upphi^-$ itself is convex, then it coincides with $\mathrm{D}_\upphi$. \end{remark} \appendix \section{Continuity equation} \label{appendix:proofs} In this Section we complete the analysis of the continuity equation by carrying out the proofs of Lemma \ref{l:cont-repr} and Corollary \ref{c:narrow-ct}. \begin{proof}[Proof of Lemma \ref{l:cont-repr}] The distributional identity \eqref{eq:90} yields that for every $\zeta\in \mathrm{C}_{\mathrm{b}}(V,\tau)$ the map \[ t\mapsto \rho_t(\zeta): = \int_V \zeta(x) \rho_t (\mathrm{d} x )\quad\text{belongs to $W^{1,1}(a,b)$}, \] with distributional derivative \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}\rho_t(\zeta) = \iint_{E} \overline\nabla\zeta\,\mathrm{d} {\boldsymbol j}_t= -\int_V \zeta\,\mathrm{d}\odiv {\boldsymbol j}_t \quad\text{for almost all $ t \in [a,b]$.}\label{eq:13} \end{equation} Hence, setting $\mathfrak d_t:=|\odiv {\boldsymbol j}_t|\in \mathcal{M}^+(V)$, we have \begin{equation} \label{distributional-derivative} \left| \frac{\mathrm{d}}{\mathrm{d} t}\rho_t(\zeta)\right| \leq \int_V |\zeta| \,\mathrm{d} \mathfrak d_t\le \|\zeta\|_{\mathrm{C}_{\mathrm{b}}(V)}|\odiv {\boldsymbol j}_t|(V)\le 2\|\zeta\|_{\mathrm{C}_{\mathrm{b}}(V)}| {\boldsymbol j}_t|(E), \end{equation} where we used the fact that \begin{equation*} \label{eq:53} \mathfrak d_t=|{\mathsf x}_\sharp ({\boldsymbol j}_t-{\mathsf s}_\sharp {\boldsymbol j}_t)| = |{\mathsf x}_\sharp {\boldsymbol j}_t-{\mathsf y}_\sharp {\boldsymbol j}_t|\le | {\mathsf x}_\sharp {\boldsymbol j}_t|+ |{\mathsf y}_\sharp{\boldsymbol j}_t| \end{equation*} which implies \[ \mathfrak d_t (V) \le 2|{\boldsymbol j}_t|(E). \] Hence, the set $L_\zeta$ of the Lebesgue points of $t\mapsto \rho_t(\zeta)$ has full Lebesgue measure. Choosing $\zeta\equiv 1$ one immediately recognizes that $\rho_t(V)$ is (essentially) constant: it is not restrictive to normalize it to $1$ for convenience.
Let us now consider a countable set $Z=\{\zeta_k\}_{k\in \mathbb{N}}$ of uniformly bounded functions in $\mathrm{C}_{\mathrm{b}}(V)$ such that \begin{displaymath} |\zeta_k|\le 1,\quad \mathsf d(\mu,\nu):=\sum_{k=1}^\infty 2^{-k}\Big|\int_V \zeta_k\,\mathrm{d}(\mu-\nu)\Big| \end{displaymath} is a distance inducing the weak topology in $\mathcal{M}^+(V)$ (see e.g.~\cite[\S\,5.1.1]{AmbrosioGigliSavare08}). By introducing the set $L_Z := \bigcap_{\zeta \in Z} L_\zeta$, it follows from \eqref{distributional-derivative} that \begin{equation} \label{eq:10} \mathsf d(\rho_s,\rho_t)\le 2 \int_s^t |{\boldsymbol j}_r|(E)\, \mathrm{d} r \end{equation} showing that the restriction of $\rho$ to $L_Z$ is continuous in $\mathcal{M}^+(V)$. Estimate~\eqref{distributional-derivative} also shows that for all $s,t\in L_Z$ with $s \leq t$ we have \begin{equation} |\rho_t(\zeta) {-} \rho_s(\zeta) | \leq \int_s^t \int_V |\zeta|\,\mathrm{d}\mathfrak d_r\,\mathrm{d} r\le 2\| \zeta\|_{\mathrm{C}_{\mathrm{b}}(V)} \int_s^t |{\boldsymbol j}_r|(E)\, \mathrm{d} r \qquad \text{for all } \zeta \in \mathrm{C}_{\mathrm{b}}(V). \label{eq:11} \end{equation} Taking the supremum with respect to~$\zeta$ we obtain \begin{equation} \label{eq:12} \|\rho_t-\rho_s\|_{TV}\le 2 \int_s^t |{\boldsymbol j}_r|(E)\, \mathrm{d} r \qquad \text{for all } s,t\in L_Z,\ s \leq t, \end{equation} which shows that the measures $(\rho_t)_{t\in L_Z}$ are uniformly continuous with respect to the total variation metric in $\mathcal{M}^+(V)$ and thus can be extended to an absolutely continuous curve $\tilde\rho\in \mathrm{AC}(I;\mathcal{M}^+(V))$ satisfying \eqref{eq:12} for every $s,t\in I$. When $\varphi\in \mathrm{C}_{\mathrm{b}}(V)$, \eqref{2ndfundthm} immediately follows from \eqref{eq:13}. By a standard argument based on the functional monotone class Theorem \cite[\S 2.12]{Bogachev07} we can extend the validity of \eqref{2ndfundthm} to every bounded Borel function. If $\varphi\in \mathrm C^1([a,b];\mathrm{B}_{\mathrm b}(V))$, combining \eqref{eq:13} and the fact that the map $t\mapsto \int_V \varphi(t,x)\,\tilde\rho_t(\mathrm{d} x)$ is absolutely continuous we easily get \eqref{maybe-useful}. \end{proof} \begin{proof}[Proof of Corollary \ref{c:narrow-ct}] Keeping the same notation as in the previous proof, if we define \begin{displaymath} \gamma:=\rho_0+\int_0^T \mathfrak d_t \,\mathrm{d} t \end{displaymath} then the estimate \eqref{distributional-derivative} shows that \begin{displaymath} \rho_t(B)\le \gamma(B)\quad\text{for every }B\in \mathfrak B, \end{displaymath} thus showing that $\rho_t=\tilde u_t\gamma$ for every $t\in [0,T]$ and \begin{equation} \label{eq:54} \|\rho_t-\rho_s\|_{TV}=\int_V |\tilde u_t-\tilde u_s|\,\mathrm{d}\gamma\le 2\int_s^t |{\boldsymbol j}_r|(E)\,\mathrm{d} r \quad\text{for every }0\le s<t\le T. \end{equation} \end{proof} We conclude with a result on the decomposition of the measure $ {\boldsymbol j} -\symmap {\boldsymbol j} = 2\tj $ into its positive and negative part.
\begin{lemma} \label{le:A1} If ${\boldsymbol j} \in \mathcal{M}(E)$ and we set \begin{equation} \label{eq:14} {\boldsymbol j}^+:=( {\boldsymbol j} {-}\symmap {\boldsymbol j} )_+,\quad {\boldsymbol j}^-:=( {\boldsymbol j} {-}\symmap {\boldsymbol j} )_-, \end{equation} then we have \begin{equation} \label{eq:15} {\boldsymbol j}^-=\symmap {\boldsymbol j}^+,\quad \odiv {\boldsymbol j}^+= \odiv {\boldsymbol j}. \end{equation} When ${\boldsymbol j}$ is skew-symmetric, we also have \begin{equation} \label{eq:16} {\boldsymbol j}^+=2{\boldsymbol j}_+,\quad {\boldsymbol j}^-=-2{\boldsymbol j}_-. \end{equation} \end{lemma} \begin{proof} By definition, we have ${\boldsymbol j}^+=2\tj_+$, ${\boldsymbol j}^-=2\tj_-$. Furthermore, $ \tj =-\symmap \tj =\symmap \tj _--\symmap \tj_+$, where the first equality follows from the fact that $\tj$ is skew-symmetric. Since $\symmap \tj_-\perp \symmap \tj_+$ we deduce that $\symmap \tj_+=\tj_-,$ $\symmap \tj_-=\tj_+$ and $\tj=\tj_+-\symmap \tj_+$, so that $\odiv {\boldsymbol j} = \odiv \tj =2\odiv \tj_+=\odiv {\boldsymbol j}^+$. \end{proof} \section{Slowly increasing superlinear entropies} The main result of this Section is Lemma \ref{le:slowly-increasing-entropy} ahead, invoked in the proof of Proposition \ref{prop:compactness}. It provides the construction of a \emph{smooth} function estimating the entropy density $\upphi$ from below and such that the function $(r,s) \mapsto \Psi^*(A_\upomega(r,s)) \upalpha(r,s)$ fulfills a suitable bound, cf.\ \eqref{eq:41} ahead. Prior to that, we prove the preliminary Lemmas \ref{le:alpha-behaviour} and \ref{le:sub} below. \begin{lemma} \label{le:alpha-behaviour} Let us suppose that $\upalpha$ satisfies Assumptions \ref{ass:Psi}. Then for every $a\ge0$ \begin{equation} \label{eq:40} \lim_{r\to{+\infty}}\frac{\upalpha(r,a)}r= \lim_{r\to{+\infty}}\frac{\upalpha(a,r)}r=0. \end{equation} \end{lemma} \begin{proof} Since $\upalpha$ is symmetric it is sufficient to prove the first limit. Let us first observe that the concavity of $\upalpha$ yields the existence of the limit since the map $r\mapsto r^{-1}(\upalpha(r,a)-\upalpha(0,a))$ is decreasing, so that \begin{displaymath} \lim_{r\to{+\infty}}\frac{\upalpha(r,a)}r= \lim_{r\to{+\infty}}\frac{\upalpha(r,a)-\upalpha(0,a)}r= \inf_{r>0}\frac{\upalpha(r,a)-\upalpha(0,a)}r. \end{displaymath} Let us call $L(a)\in \mathbb{R}_+$ the above quantity. The inequality (following from the concavity of $\upalpha$ and the fact that $\upalpha(0,0)\ge0$) \begin{equation} \label{eq:concave-elementary} \upalpha(r,a)\le \lambda\upalpha(r/\lambda,a/\lambda)\quad\text{for every }\lambda\ge1 \end{equation} yields \begin{equation} \label{eq:32} L(a)=\lim_{r\to{+\infty}}\frac{\upalpha(r,a)}r\le \lim_{r\to{+\infty}}\frac{\upalpha(r/\lambda,a/\lambda)}{r/\lambda}= L(a/\lambda) \quad\text{for every }\lambda\ge1. \end{equation} For every $b\in (0,a)$ and $r>0$, setting $\lambda:=a/b>1$, we thus obtain \begin{displaymath} L(a)\le L(b)\le \frac{\upalpha(r,b)-\upalpha(0,b)}r. \end{displaymath} Passing first to the limit as $b\downarrow0$ and using the continuity of $\upalpha$ we get \begin{displaymath} L(a)\le \frac{\upalpha(r,0)-\upalpha(0,0)}r \quad\text{for every $r>0$}. \end{displaymath} Eventually, we pass to the limit as $r\uparrow{+\infty}$ and we get $L(a)\le \upalpha^\infty(1,0)=0$ thanks to \eqref{alpha-0}.
\end{proof} \begin{lemma} \label{le:sub} Let $f:\mathbb{R}_+\to \mathbb{R}_+$ be an increasing continuous function and $f_0\ge0$ with \begin{equation} \label{eq:43} \lim_{r\to{+\infty}}f(r)=\sup f={+\infty},\qquad \liminf_{r\downarrow0}\frac{f(r)-f_0}r\in (0,{+\infty}]. \end{equation} Then for every $g_0\in [0,f_0]$ there exists a $\rmC^\infty$ concave function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:35} \forall\,r\in \mathbb{R}_+:g(r)\le f(r),\qquad g(0)=g_0,\qquad \lim_{r\to{+\infty}}g(r)={+\infty}. \end{equation} \end{lemma} \begin{proof} By subtracting $f_0$ and $g_0$ from $f$ and $g$, respectively, it is not restrictive to assume $f_0=g_0=0$. We will use a recursive procedure to construct a concave piecewise-linear function $g$ satisfying \eqref{eq:35}; a standard regularization yields a $\rmC^\infty$ map. We set \begin{equation} \label{eq:44} a:=\frac 13\liminf_{r\downarrow0}\frac{f(r)}r,\quad x_1:=\sup\Big\{x\in (0,1]:f(r)\ge 2 ar\text{ for every }r\in (0,x]\Big\}, \end{equation} and $\delta:=ax_1.$ We consider a strictly increasing sequence $(x_n)_{n\in\mathbb{N}}$, defined by induction starting from $x_0=0$ and $x_1$ as in \eqref{eq:44}, according to \begin{equation} \label{eq:25} x_{n+1}:=\min\Big\{x\ge 2x_n-x_{n-1}: f(x)\ge f(x_n)+\delta\Big\},\quad n\ge 1. \end{equation} Since $\lim_{r\to{+\infty}}f(r)={+\infty}$, the minimizing set in \eqref{eq:25} is closed and not empty, so that the algorithm is well defined. It yields a sequence $x_n$ satisfying \begin{equation} \label{eq:33} x_{n+1}-x_n\ge x_{n}-x_{n-1},\quad x_{n+1}\ge x_n+\delta\quad\text{for every }n\ge0, \end{equation} so that $(x_n)_{n\in \mathbb{N}}$ is strictly increasing and unbounded, and induces a partition $\{0=x_0<x_1<x_2<\cdots<x_n<\cdots\}$ of $\mathbb{R}_+$. We can thus consider the piecewise linear function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:36} g(x_n):=n\delta,\quad g((1-t)x_n+t x_{n+1}):=(n+t)\delta\quad \text{for every }n\in \mathbb{N},\ t\in[0,1]. \end{equation} We observe that $g$ is increasing, $\lim_{r\to{+\infty}}g(r)={+\infty}$ and it is concave since \begin{displaymath} \frac{g(x_{n+1})-g(x_n)}{x_{n+1}-x_n}= \frac{\delta}{x_{n+1}-x_n}\topref{eq:33}\le \frac\delta{x_{n}-x_{n-1}}=\frac{g(x_{n})-g(x_{n-1})}{x_{n}-x_{n-1}}. \end{displaymath} Furthermore, $g$ is also dominated by $f$: in the interval $[x_0,x_1]$ this follows by \eqref{eq:44}. For $x\in [x_n,x_{n+1}]$ and $n\ge 1$, we observe that \eqref{eq:25} yields $f(x_{n+1})\ge f(x_n)+\delta$ so that by induction $f(x_n)\ge (n+1)\delta$; on the other hand \begin{displaymath} \text{for every $x\in [x_{n},x_{n+1}]$}:\quad g(x)\le g(x_{n+1})=(n+1) \delta\le f(x_n)\le f(x).\qedhere \end{displaymath} \end{proof} \begin{lemma} \label{le:slowly-increasing-entropy} Let $\Psi^*,\upalpha$ satisfy Assumptions \ref{ass:Psi} and let $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$ be a convex superlinear function with $\upbeta'(r)\ge \upbeta_0'>0$ for a.e.~$r\in \mathbb{R}_+$. Then, there exists a $\rmC^\infty$ convex superlinear function $\upomega:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:41} \upomega(r)\le \upbeta(r),\qquad \Psi^*(\upomega'(s)-\upomega'(r))\upalpha(s,r)\le r+s\quad\text{for every }r,s\in \mathbb{R}_+.
\end{equation} \end{lemma} \begin{proof} By a standard regularization, \color{ddcyan} we can always approximate $\upbeta$ by a smooth convex superlinear function $\tilde\upbeta\le \upbeta$ whose derivative is strictly positive, so that \color{black} it is not restrictive to assume that $\upbeta$ is of class $\rmC^2$. Let us set $r_0:=\inf\{r>0:\Psi^*(r)>0\}$ and let $P:(0,{+\infty})\to (r_0,{+\infty})$ be the inverse map of $\Psi^*$: $P$ is continuous, strictly increasing, and of class $\rmC^1$. Since $\upalpha$ is concave, the function $x \mapsto \upalpha(x,1)/x$ is nonincreasing in $(0,{+\infty})$; we can thus define the nondecreasing function $Q(x):=P(x/\upalpha(x,1))$ and the function \begin{displaymath} \gamma(x):=2g_0+\int_1^x \min(\upbeta''(y),Q'(y))\,\mathrm{d} y\quad \text{for every }x\ge 1,\quad g_0:=\frac12\min(\upbeta_0', Q(1))>0. \end{displaymath} By construction $\gamma(1)=2g_0= \min(\upbeta'_0,Q(1))\le \upbeta'(1)$ so that $\gamma(x)\le \min(\upbeta'(x),Q(x))$ for every $x\ge1$. We eventually set \begin{displaymath} f(t):=\frac{\rme^t}{\gamma(\rme^t)}\quad t\ge0. \end{displaymath} Clearly, we have $f(0)=2g_0$. Furthermore, we combine the estimate $\gamma(\rme^t) \leq Q(\rme^t) = P(\rme^t/\upalpha(\rme^t,1))$ with the facts that $\rme^t/\upalpha(\rme^t,1) \to +\infty$ as $t\to +\infty$, thanks to Lemma \ref{le:alpha-behaviour}, and that $P$ has sublinear growth at infinity, being the inverse function of $\Psi^*$. All in all, we conclude that \[ \lim_{t\to{+\infty}}f(t)={+\infty}. \] Therefore, we are in a position to \color{black} apply Lemma \ref{le:sub}, obtaining an increasing concave function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that $g_0=g(0)\le g(t)\le f(t)$ and $\lim_{t\to{+\infty}}g(t)={+\infty}$. Since $g(0)\ge0$, the concaveness of $g$ yields $g(t'')-g(t')\le g(t''-t')$ for every $0\le t'\le t''$, so that the function $h(x):=g(\log (x\lor 1))$ satisfies $h(x)=g_0\le \upbeta'(x)$ for $x\in [0,1]$, and \begin{equation} \label{eq:48} h(z)\le \min(\upbeta'(z),Q(z))\quad\text{for every }z\ge 1,\quad h(y)-h(x)\le h(y/x)\quad \text{for every }0< x\le y. \end{equation} In fact, if $x\le 1$ we get \begin{displaymath} h(y)-h(x)=h(y)-g_0\le h(y)\le h(y/x) \end{displaymath} and if $x\ge 1$ we get \begin{displaymath} h(y)-h(x)\le g(\log y)-g(\log x)\le g(\log y-\log x)=g(\log(y/x))= h(y/x). \end{displaymath} Let us now define the convex function $\upomega(x):=\int_0^x h(y)\,\mathrm{d} y$ with $\upomega(0)=0$ and $\upomega'=h$. In particular $\upomega(x)\le \upbeta(x)$ for every $x\ge 0$. It remains to check the second inequality of \eqref{eq:41}. The case $r,s\le 1$ is trivial since $\upomega'(s)-\upomega'(r)=h(r)-h(s)=0$. We can also consider the case $\upomega'(r)\neq \upomega'(s)$ and $\upalpha(r,s)>0$; since \eqref{eq:41} is also symmetric, it is not restrictive to assume $r\le s$; by continuity, we can assume $r>0$. Recalling that $\upalpha(s,r)\le r\upalpha(s/r,1)$ if $0<r\le s$, and $(r+s)/r>s/r$, \eqref{eq:41} is surely satisfied if \begin{equation} \label{eq:49} \Psi^*(\upomega'(s)-\upomega'(r))\upalpha(s/r,1)\le s/r\quad \text{for every }0<r<s. \end{equation} Recalling that $\upomega'(s)-\upomega'(r)\le \upomega'(s/r)$ by \eqref{eq:48} and $\Psi^*$ is nondecreasing, \eqref{eq:49} is satisfied if \begin{equation} \label{eq:50} \Psi^*(\upomega'(s/r))\upalpha(s/r,1)\le s/r\quad \text{for every }0<r<s. 
\end{equation} After the substitution $t:=r/s$, \eqref{eq:50} corresponds to \begin{equation} \label{eq:51} \upomega'(t)\le P(t/\upalpha(t,1))=Q(t)\quad\text{for every }t\ge 1, \end{equation} which is a consequence of the first inequality of \eqref{eq:48}. \end{proof} \section{Connectivity by curves of finite action} \label{s:app-2} Preliminarily, with the reference measure $\pi \in \mathcal{M}_+(V)$ and with the `jump equilibrium rate' $\boldsymbol \teta$ from \eqref{nu-pi} we associate the `graph divergence' operator $\odiv_{\pi,\boldsymbol \teta}: L^p(E;\boldsymbol \teta) \to L^p(V;\pi) $, $p\in [1,{+\infty}]$, defined as the transposed of the `graph gradient' $\dnabla:L^q(V;\pi) \to L^{q}(E;\boldsymbol \teta)$, with $q = p'$. Namely \[ \begin{gathered} \text{for } \zeta \in L^p(E;\boldsymbol \teta), \qquad \xi = - \overline{\mathrm{div}}_{\pi,\boldsymbol \teta}( \zeta )\qquad \text{if and only if} \\ \int_{V} \xi(x) \omega(x) \pi(\mathrm{d} x) = \int_{E} \zeta (x,y) \dnabla \omega(x,y) \boldsymbol \teta(\mathrm{d} x, \mathrm{d} y) \quad \text{for all } \omega \in L^q(V;\pi) \end{gathered} \] or, equivalently, \begin{equation} \label{in-terms-of-measures} \xi \pi = - \odiv( \zeta \boldsymbol \teta) \end{equation} (with $\odiv$ the divergence operator from \eqref{eq:def:ona-div}) in the sense of measures. \par We can now first address the connectivity problem in the very specific setup \begin{equation} \label{alpha-equiv-1} \upalpha(u,v) \equiv 1 \qquad \text{for all } (u,v) \in [0,{+\infty}) \times [0,{+\infty}). \end{equation} Then, the action functional $\int \mathscr R$ is translation-invariant. Let us consider two measures $\rho_0, \rho_1 \in \mathcal{M}_+(V)$ such that for $i\in \{0,1\}$ there holds $\rho_i = u_i \pi$ with $u_i \in L_+^p(V;\pi)$ for some $p\in (1,{+\infty})$. Thus, we look for curves $\rho \in \ADM 0\tau{\rho_0}{\rho_1}$, with finite action, such that $\rho_t \ll \pi$, with density $u_t$, for almost all $t\in (0,\tau)$. Consequently, any flux $({\boldsymbol j}_t)_{t\in (0,\tau)}$ shall satisfy ${\boldsymbol j}_t \ll \boldsymbol \teta$ for a.a.\ $t\in (0,\tau)$ (cf.\ Lemma \ref{l:alt-char-R}). Taking into account \eqref{in-terms-of-measures}, the continuity equation reduces to \begin{equation} \label{cont-eq-densities} \dot{u}_t = - \overline{\mathrm{div}}_{\pi,\boldsymbol \teta}( \zeta_t ) \qquad \text{for a.e.\ }\, t \in (0,\tau) \end{equation} with $ \zeta_t = \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol \teta}$. Furthermore, we look for a connecting curve $\rho_t = u_t \pi$ with $u_t = (1{-}t)u_0 +t u_1$, so that \eqref{cont-eq-densities} becomes $ - \overline{\mathrm{div}}_{\pi,\boldsymbol \teta}( \zeta_t ) \equiv u_1 -u_0$. Hence, we can restrict to flux densities that are constant in time, i.e.\ $\zeta_t \equiv \zeta$ with $\zeta\in L^p(E;\boldsymbol \teta)$.
In this specific context, and if we further confine the discussion to the case $\Psi(r) = \frac1p |r|^p $ for $p\in (1,{+\infty})$, the minimal action problem becomes \begin{equation} \label{minimal-action} \inf\left \{ \frac1p \int_{E} |w|^p \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \, : \ w = 2\zeta \in L^p(E;\boldsymbol \teta), \ - \overline{\mathrm{div}}_{\pi,\boldsymbol \teta}(\zeta) \equiv u_1 -u_0 \right\} \end{equation} Now, by a general duality result on linear operators, the operator $- \odiv_{\pi,\boldsymbol \teta}: L^p(E;\boldsymbol \teta) \to L^p(V;\pi)$ is surjective if and only if the graph gradient $\dnabla:L^q(V;\pi) \to L^{q}(E;\boldsymbol \teta)$ fulfills the following property: \[ \exists\, C>0 \ \ \forall\, \xi \in L^q(V;\pi) \text{ with } \int_V \xi \pi(\mathrm{d} x) =0 \text{ there holds } \| \xi\|_{L^q(V;\pi)} \leq C \| \dnabla \xi\|_{L^q(E;\boldsymbol \teta)}, \] namely the $q$-Poincar\'e inequality \eqref{q-Poinc}. We can thus conclude the following result. \begin{lemma} \label{l:intermediate-conn} Suppose that $\upalpha \equiv 1$, that $\Psi$ has $p$-growth (cf.\ \eqref{psi-p-growth}), and that the measures $(\pi,\boldsymbol \teta) $ satisfy a $q$-Poincar\'e inequality for $q=\tfrac p{p-1}$. Let $\rho_0, \rho_1 \in \mathcal{M}^+(V) $ be given by $\rho_i = u_i \pi$, with positive $u_i \in L^p(V; \pi)$, for $i \in \{0,1\}$. Then, for every $\tau\in (0,1)$ we have $\DVT{\tau}{\rho_0}{\rho_1}<{+\infty}$. If $\Psi(r) = \frac1p |r|^p$, the $q$-Poincar\'e inequality is also necessary for having $\DVT{\tau}{\rho_0}{\rho_1}<{+\infty}$. \end{lemma} We are now in a position to carry out the \begin{proof}[Proof of Proposition \ref{prop:sufficient-for-connectivity}] Assume that $\rho_0(V) = \int_V u_0 (x) \pi(\mathrm{d} x) = \pi(V)$. Hence, it is sufficient to provide a solution for the connectivity problem between $u_0$ and $u_1 \equiv 1$. We may also assume without loss of generality that $\upalpha(u,v) \geq \upalpha_0(u,v)$ with $\upalpha_0(u,v) = c_0 \min(u,v,1)$ for some $c_0>0$, so that \begin{equation} \label{inequ} \begin{aligned} \Psi\left(\frac w{\upalpha(u,v)} \right)\upalpha(u,v) \leq \Psi\left(\frac w{\upalpha_0(u,v)} \right)\upalpha_0(u,v) & \leq C_p \left( 1+ \left| \frac w{\upalpha_0(u,v)} \right|^p \right) \upalpha_0(u,v) \\ & \leq C_p c_0 + C_p |w|^p (\upalpha_0(u,v))^{1-p}\,, \end{aligned} \end{equation} where the first estimate follows from the convexity of $\Psi$ and the fact that $\Psi(0)=0$, yielding that $\lambda \mapsto \lambda \Psi(w/\lambda)$ is non-increasing. It is therefore sufficient to consider the case in which $c_0=C_p=1$, $\upalpha_0(u,v) = \min(u,v,1)$, and to solve the connectivity problem for $\tilde\Psi(r) = \frac1p |r|^p$. By Lemma \ref{l:intermediate-conn}, we may first find $w\in L^p(E;\boldsymbol \teta)$ solving the minimum problem \eqref{minimal-action} in the case $\upalpha \equiv 1$, so that the flux density $\zeta_t \equiv \frac12 w$ is associated with the curve $u_t = (1{-}t)u_0 +t u_1$, $t\in [0,\tau]$. Then, we fix an exponent $\gamma>0$ and we consider the rescaled curve $\tilde{u}_t: = u_{t^\gamma}$, that fulfills $\partial_t \tilde{u}_t = - \overline{\mathrm{div}}_{\pi,\boldsymbol \teta}(\tilde{\zeta}_t) $ with $\tilde{\zeta}_t = \frac12 \tilde{w}_t = \frac12 \gamma t^{\gamma-1} w$.
\color{black} Moreover, \begin{align*} \upalpha_0(\tilde{u}_t (x), \tilde{u}_t (y)) &= \min \{ (1{-}t^\gamma) u_0(x) + t^\gamma u_1(x), (1{-}t^\gamma) u_0(y) + t^\gamma u_1(y), 1\} \\ &\geq \min(t^\gamma, 1) = t^\gamma \end{align*} since $u_1(x) = u_1(y) =1$. By \eqref{inequ} we thus get \[ \begin{aligned} & \int_{E} \Psi \left(\frac{\tilde{w}_t(x,y)}{\upalpha(\tilde{u}_t (x), \tilde{u}_t (y))} \right) \upalpha(\tilde{u}_t (x), \tilde{u}_t (y)) \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) \\ &\quad \leq C_p c_0 \boldsymbol \teta(E)+ \int_{E} \gamma^p t^{p(\gamma{-}1)} |w(x,y)|^p t^{\gamma(1{-}p)} \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) = C_p c_0 \boldsymbol \teta(E)+ \gamma^p t^{\gamma-p} \|w\|_{L^p(E;\boldsymbol \teta)}^p\,. \end{aligned} \] Choosing $\gamma>p-1$ we conclude that \[ \int_0^\tau \int_{E} \Psi \left(\frac{\tilde{w}_t(x,y)}{\upalpha(\tilde{u}_t (x), \tilde{u}_t (y))} \right) \upalpha(\tilde{u}_t (x), \tilde{u}_t (y)) \boldsymbol \teta(\mathrm{d} x,\mathrm{d} y) <{+\infty}\, \] hence $\ADM 0\tau{\rho_0}{\rho_1} \neq \emptyset$. \end{proof} \color{black} \end{document}
\begin{document} \ifTR \title{The role of individual compensation and acceptance decisions in crowdsourced delivery} \begin{abstract} \myabstract \textbf{Keywords: Crowdsourced delivery, occasional drivers, compensation schemes, acceptance uncertainty, crowdshipper behavior} \end{abstract} \else \RUNAUTHOR{\c{C}{\i}nar et al.} \RUNTITLE{The role of individual compensation and acceptance decisions in crowdsourced delivery} \TITLE{The role of individual compensation and acceptance decisions in crowdsourced delivery} \ARTICLEAUTHORS{ \AUTHOR{Alim Bu\u{g}ra \c{C}{\i}nar, Wout Dullaert, Markus Leitner, Rosario Paradiso, Stefan Waldherr} \AFF{Vrije Universiteit Amsterdam, Department of Operations Analytics, The Netherlands} \EMAIL{[email protected]} \EMAIL{[email protected]} \EMAIL{[email protected]} \EMAIL{[email protected]} \EMAIL{[email protected]}} \ABSTRACT{ \myabstract } \KEYWORDS{Crowdsourced delivery, occasional drivers, compensation schemes, acceptance uncertainty, crowdshipper behavior} \title{The role of individual compensation and acceptance decisions in crowdsourced delivery} \fi \section{Introduction} \label{sec:intro} Fueled by the Covid-19 pandemic, e-tailing and same-day home delivery have continued to grow. This increase in demand and rising customer expectations are forcing providers to improve their efficiency in last-mile distribution. At the same time, technological advances and the proliferation of the sharing economy, such as Airbnb, car-sharing systems, and ride-hailing systems like Uber, have popularized the gig economy. In this new paradigm, instead of hiring employees on long-term contracts, employers (companies or individuals) can offer small, short-term tasks (usually through an online platform) and compensation for performing the task. Gig workers can then agree to perform that task in exchange for the advertised compensation. A prime example of the gig economy is crowdsourced delivery, which is one of the most prominent research directions in city logistics \citep{kaspi2022directions}. Instead of using its own logistics system, a company can choose to offer delivery tasks to regular customers or other independent couriers (often referred to as occasional drivers) and compensate them accordingly. Ideally, tasks are assigned to occasional drivers who only need to make a small detour from their previously planned route to make the delivery. In this case, crowdsourced delivery has the potential to reduce overall delivery costs, better utilize transportation capacity, increase the flexibility of delivery capacity, and reduce overall traffic. Crowdsourced delivery has attracted interest from both industry and academia due to its great potential. \citet{alnaggar2021crowdsourced} and \citet{savelsbergh2022challenges} provide comprehensive overviews and classifications of industry applications and scientific contributions. Crowdsourced delivery adds significant complexity to the planning process because the behavior and availability of occasional drivers is often not known in advance. Integrating the behavior of occasional drivers into planning decisions, especially with respect to task acceptance, is a major challenge in this domain \citep{savelsbergh2022challenges}. In the case of instant delivery (e.g., food orders), the inability to find an occasional driver for an offered task may lead to customer dissatisfaction or significant additional costs. 
In the classical models, occasional drivers are assumed to accept all tasks for which the compensation is high enough to compensate for the additional costs incurred by the drivers for the detour, see e.g. \citet{Archetti2016, arslan_crowdsourced_2019}. More recently, acceptance probabilities have been investigated through behavioral studies and surveys of occasional drivers, see e.g. \citet{Le2019,devari_crowdsourcing_2017}. Such acceptance probabilities were used as fixed parameters in the task allocation problem by \citet{gdowska2018stochastic} and \citet{santini2022probabilistic}. The effect of compensation schemes on acceptance behavior was investigated by \citet{dayarian_crowdshipping_2020}, but only through a sensitivity analysis of fixed compensation options and acceptance thresholds drawn from a normal distribution. \citet{Barbosa2022} and \citet{hou_optimization_2022} integrate the impact of compensation on acceptance probabilities within task allocation. However, \citet{Barbosa2022} assume that the detour of all occasional drivers is the same for each task. Consequently, the compensations of all occasional drivers are identical. \citet{hou_optimization_2022} propose a sequential approach in which the compensation values are optimized for fixed task allocations that are decided in the first step. \paragraph{Contribution and outline} This paper is the first to incorporate task-acceptance probabilities of occasional drivers, which depend on the operator's compensation decisions, into an exact solution framework. To this end, we introduce a mixed-integer nonlinear programming (MINLP) formulation that optimizes the decision process by allowing for full flexibility in the compensations offered and the associated acceptance probabilities of occasional drivers. To model the latter, we introduce acceptance probability functions that estimate the probability that an occasional driver will accept a task. This estimation can be based on historical information, the attributes of occasional drivers and tasks, and the compensations offered. In our model, referred to as the \probname (\probabbr), we aim to minimize the total expected cost of assigning tasks to either company or occasional drivers, while ensuring that all delivery tasks are performed. Our setting is motivated by crowdsourcing for instant pickup and delivery problems, such as food delivery. However, our model is also applicable to scenarios with a single depot where in-store customers can be used to perform delivery tasks. Customer orders have to be delivered as quickly as possible via direct delivery, and multiple orders cannot be combined within a single trip by a company driver or occasional driver. Our main contributions can be summarized as follows: \begin{itemize} \item We introduce the \probabbr, which considers compensation-dependent task acceptance probability functions of occasional drivers and allows the derivation of optimal compensations offered to them while minimizing the expected cost to the operator. \item We introduce a MINLP formulation for the \probabbr\ and show that it can be optimally solved via a two-stage approach that decomposes compensation and assignment decisions. \item We study two practically relevant classes of acceptance probability functions (linear and logistic acceptance probability functions) and derive explicit formulas for optimal compensation values in these cases. These results imply an exact linearization of our MINLP formulation, which can be solved in polynomial time.
\item We study a generalization of the \probabbr, show that it is NP-hard, and propose an approximate linearization scheme for an appropriately extended MINLP formulation. \item We conduct an extensive computational study and sensitivity analysis on the main model parameters. Our results show that the compensation scheme proposed in this work consistently and significantly outperforms alternative established compensation schemes from the literature in terms of expected cost and distance. We also demonstrate that this scheme ensures higher satisfaction of occasional drivers by providing them with more and more successful offers. Finally, the extremely low runtimes on large instances consisting of 100 tasks and up to 150 occasional drivers and the computational complexity of the proposed method indicate that our approach can be applied to larger problem instances that could arise in real-life applications. \end{itemize} This paper is organized as follows. \cref{sec:literature} summarizes the related literature. \cref{sec:apuc} formally introduces the problem studied in this paper, provides a MINLP formulation, and discusses the considered acceptance probability functions. \cref{sec:ilp} introduces the theoretical results that lead to the exact linearizations and polynomial-time solvability of \probabbr. \cref{sec:non-separable} focuses on the aforementioned generalization of the \probabbr, shows that this variant is NP-hard, and introduces an approximate linearization scheme for this more general case. \cref{sec:experimental-setup} details the setup of our computational study, whose results and findings are discussed in \cref{sec:results}. We conclude in \cref{sec:conclusion}, where we also provide possible future research directions. The Appendix contains proofs of all theoretical results, and additional computational results \ifTR . \else are given in the e-companion. \fi \section{Literature review} \label{sec:literature} In this section, we review articles that address uncertainty in driver behavior and/or compensation optimization subjects in the context of crowdsourced delivery. The reviewed papers consist of those identified by \citet{savelsbergh2022challenges}, as well as more recent papers addressing both subjects. \cref{table:literature} provides an overview of these articles and classifies them according to three criteria: service type (following the classification of \citet{sampaio_chapter_2019}), compensation, and crowdshipper acceptance probability. In the following paragraph, we briefly summarize the articles that consider uncertainty in driver behavior without using acceptance probability functions. \cref{sec:compensation} discusses papers in which the compensation offered to drivers for performing tasks are treated as decisions that are independent of acceptance probabilities. Finally, \cref{sec:acceptance_uncertainty_and_compensation} provides details on articles that assume a relationship between compensation decisions and acceptance probabilities. 
\begin{table} \centering \caption{Literature overview} \begin{tabular}{l|cc|ccc|ccc} \toprule & \multicolumn{2}{c}{Service Type} & \multicolumn{3}{c}{\parbox{7em}{\centering Compensation}} & \multicolumn{3}{c}{\parbox{7em}{\centering Acceptance Probability}} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-6} \cmidrule(lr){7-9} Paper & \rot{Door-to-Door} & \rot{In-Store} & \rot{\parbox{7em}{\raggedright Driver Independent}} & \rot{\parbox{7em}{\raggedright Driver Dependent}} & \rot{\parbox{7em}{\raggedright Integrated}} & \rot{\parbox{7em}{\raggedright Driver Independent}} & \rot{\parbox{7em}{\raggedright Driver Dependent}} & \rot{\parbox{7em}{\raggedright Compensation Dependent}} \\ \midrule \citet{kafle2017design} & & \OK & & & & & & \\ \citet{gdowska2018stochastic} & & \OK & & & & \OK & & \\ \citet{allahviranloo2019dynamic} & \OK & & & & & & & \\ \citet{mofidi2019beneficial} & \OK & & & & & & & \\ \citet{dai2020workforce} & & \OK & & & & & & \\ \citet{ausseil2022supplier} & \OK & & & & & & & \\ \citet{behrendt2022prescriptive} & \OK & & & & & & & \\ \citet{santini2022probabilistic} & & \OK & & & & \OK & & \\ \citet{dahle2019pickup} & \OK & & & \OK & \OK & & & \\ \citet{yildiz2019service} & & \OK & \OK & & & \OK & & \\ \citet{cao_last-mile_2020} & & \OK & & \OK & & & & \\ \citet{le2021designing} & \OK & & & \OK & \OK & & & \\ \citet{Barbosa2022} & & \OK & \OK & & & \OK & & \OK \\ \citet{hou_optimization_2022} & & \OK & & \OK & & & \OK & \OK \\ Our Work & \OK & \OK & & \OK & \OK & & \OK & \OK \\ \bottomrule \end{tabular} \label{table:literature} \end{table} \citet{kafle2017design} and \citet{allahviranloo2019dynamic} focus on a bid submission setting in which a planner obtains bids that contain information about tasks and compensation amounts required by occasional drivers. \citet{mofidi2019beneficial} and \citet{ausseil2022supplier} consider approaches in which a planner offers menus of tasks to occasional drivers. \citet{behrendt2022prescriptive} study a scheduling problem that considers occasional drivers who are willing to commit to a time slot. \citet{dai2020workforce} focus on a workforce allocation problem that considers the allocation of occasional drivers to different restaurants in a meal delivery setting. Finally, \citet{gdowska2018stochastic} and \citet{santini2022probabilistic} study settings in which the acceptance probabilities of occasional drivers are considered but represented by fixed values. \subsection{Independent compensation decisions} \label{sec:compensation} \cite{dahle2019pickup} propose an extension of the pick-up and delivery vehicle routing problem with time windows in which the company owning the fleet can use occasional drivers to outsource some requests. They model the behavior of occasional drivers using personal threshold constraints, which indicate the minimum amount of compensation for which an occasional driver is willing to accept a task, and study the impact of the following three compensation schemes: fixed and equal compensation for each served request, compensation proportional to the cost of traveling from the pickup location to the delivery location, and compensation proportional to the cost of the detour taken by the occasional driver. Their results show that using occasional drivers can lead to savings of about 10-15\%, even when using a sub-optimal compensation scheme. They also show that these savings can be further increased by using more complex compensation schemes. 
\cite{yildiz2019service} model a crowdsourced meal delivery system as a multi-server queue with general arrival and service time distributions to investigate which restaurants should be included in the network, what payments should be offered to couriers, and whether full-time drivers should be used. They study the trade-off between profit and service quality and analyze the impact of unit revenue and vehicle speed. They derive optimal compensation amounts analytically, considering per-delivery and per-mile schemes, with respect to the optimal service area. They also investigate the impact of a given acceptance probability on the service area and compensation. \citet{cao_last-mile_2020} consider a sequential packing problem to model task deliveries by occasional drivers and a professional fleet. In their setting, a distribution center receives delivery tasks at the beginning of a day, assigns bundles of tasks to occasional drivers during the day, and delivers the remaining tasks by the company-owned fleet at the end of the day. To model the arrival of occasional drivers, a marked Poisson process is proposed that specifies the time at which an occasional driver requests a task bundle and the bundle size. The arrival rate is assumed to be the same for each task bundle and is influenced by an incentive rate that allows an occasional driver to earn the same profit by delivering any bundle. They optimize the incentive rate that minimizes total delivery costs. \cite{le2021designing} integrate task assignment, pricing, and compensation decisions. They assume that an occasional driver will accept a task if the compensation is above a given threshold and below an upper bound that the operator is willing to offer. These (distance-dependent) thresholds are derived from real-world surveys. The authors evaluate flat and individual compensation schemes under different levels of supply and demand. \subsection{Compensation dependent acceptance probability} \label{sec:acceptance_uncertainty_and_compensation} \citet{Barbosa2022} extend the problem proposed by \citet{Archetti2016} by integrating compensation decisions. As one of the few papers to date, they combine pricing, matching, and routing aspects while considering the possibility that occasional drivers may refuse tasks. The acceptance probability is modeled as a function of the compensation offered. However, the authors assume that the probability function is identical for each pair of occasional driver and task. This implies that decisions about compensations are reduced to defining a single compensation that is constant for each occasional driver offered a task. The considered problem is solved using a heuristic algorithm which builds upon the method proposed in \citet{gdowska2018stochastic}. \citet{hou_optimization_2022} propose an optimization framework for a crowdsourced delivery service. They model the problem as a discrete event system, where an event is a task or a driver entering the system. A two-stage procedure is proposed to solve the problem. Tasks are assigned in the first stage, which focuses on minimizing the total detour of assigned tasks. In the second stage, optimal compensations are calculated that minimize the total delivery cost for the given task. The problem setting also considers the possibility that occasional drivers may refuse tasks, and the acceptance probabilities are determined by a binomial logit discrete choice model. 
Studying the impact of compensation decisions on the acceptance behavior of occasional drivers has been identified as an important research direction. The first steps towards filling this research gap have been taken by \citet{Barbosa2022} and \citet{hou_optimization_2022}. \citet{Barbosa2022} do not consider individual properties of driver-task pairs and solve the resulting problem heuristically. Even though \citet{hou_optimization_2022} consider compensations for each assigned pair individually, their sequential algorithm computes optimal compensations once the assignments are fixed. As a result, their approach lacks an evaluation of all possible task assignment-compensation combinations. We aim to overcome these shortcomings by making assignment and compensation decisions simultaneously, and by proposing an exact method for solving the resulting optimization problem. \section{Problem definition\label{sec:apuc}} In the following, we formally introduce the \probname. The \probabbr aims to optimally assign the set of online orders (\emph{tasks}) $I$ to professional and occasional drivers. The set of occasional drivers $J$ represents ordinary customers and other independent couriers who express willingness to perform tasks from $I$. These drivers are not fully employed and perform individual delivery tasks on their own time. A solution to the \probabbr allocates each task $i\in I$ either by directly assigning it to a company driver or by offering the task to an occasional driver $j\in J$. The characteristics of the tasks (e.g., pickup and delivery locations) allow us to estimate the \emph{company costs} $c_i\ge 0$ for performing the task $i\in I$ by a professional driver. We assume that the number of professional drivers is sufficient to perform all tasks if none of them is offered to the occasional drivers and at most one task is offered to each occasional driver. Since each task is identified with its pickup and delivery location, the costs for company and occasional drivers can be encoded in the respective parameters without explicitly including these locations in the model. Therefore, our model generalizes both pickup and delivery problems as well as in-store customer scenarios. Occasional drivers are offered a compensation for performing a task, and they can choose whether to accept the task given their own utility with regard to compensation and delivery costs. The compensation dependent \emph{acceptance probability} $P_{ij}(C_{ij})$ specifies the probability that an occasional driver $j\in J$ will accept an offer to perform a task $i\in I$ when offered a compensation of $C_{ij} \ge 0$. Tasks refused by an occasional driver must be performed by a professional driver, incurring a penalized cost $c_i'\ge c_i$ to the company. A feasible solution of the \probabbr can be described as a one-to-one assignment $A \subset I_\mathrm{o} \times J_\mathrm{o}$ where $I_\mathrm{o}\subseteq I$ is the set of tasks offered to a subset of the occasional drivers $J_\mathrm{o}\subseteq J$. Every other task $i \in I \setminus I_\mathrm{o}$ is performed by a professional driver, while no tasks are offered to the occasional drivers belonging to $J \setminus J_\mathrm{o}$. 
The total expected cost of such a solution is represented by \begin{equation} \sum_{i\in I \setminus I_\mathrm{o}} c_i + \sum_{ (i,j) \in A} \left(P_{ij}(C_{ij}) C_{ij} + (1-P_{ij}(C_{ij})) c_i' \right), \label{eq:obj} \end{equation} which sums up the total company cost to perform each task $i\in I \setminus I_\mathrm{o}$ and the expected cost stemming from offering compensation $C_{ij}$ to driver $j$ to perform task $i$, for each $(i,j) \in A$. The expected cost for each $(i,j) \in A$ is given by the compensation $C_{ij}$ times the probability $P_{ij}(C_{ij})$ that occasional driver $j$ will accept task $i$, plus the penalized company cost $c'_{i}$ to perform task $i$, times the probability $1 - P_{ij}(C_{ij})$ that occasional driver $j$ will refuse the task. The objective of the \probabbr is to define the sets $I_\mathrm{o}$ and $J_\mathrm{o}$, their one-to-one assignment $A \subset I_\mathrm{o} \times J_\mathrm{o}$, and the compensation $C_{ij}$ for each $(i,j) \in A$ in such a way that the expected total cost is minimized.
\begin{figure} \caption{Instance and solution.} \label{fig:example_a} \caption{OD~1 accepts task 2.} \label{fig:example_b} \caption{OD~1 refuses task 2.} \label{fig:example_c} \caption{An instance and a solution of the \probabbr in which occasional driver (OD)~1 is offered a compensation of $C_{21}$ for task~2. Figures~\ref{fig:example_b} and \ref{fig:example_c} visualize the deliveries when OD~1 accepts or refuses task~2, respectively.} \label{fig:Example} \end{figure}
\cref{fig:Example} provides an instance of the \probabbr and illustrates interactions between occasional drivers and the service provider, as well as the role of the acceptance probability. The instance consists of two tasks and two occasional drivers, i.e., $I = \{ 1, 2\}$ and $J = \{ 1, 2\}$. We consider the scenario where both occasional drivers are in-store customers at the store in the middle of the figure. The figure also indicates the delivery locations of the two tasks as well as the home locations of the occasional drivers (to illustrate the potential detour an occasional driver must take to complete a task). \cref{fig:example_a} illustrates a solution for the instance in which task 1 is assigned to a professional driver (at cost $c_1$) and compensation $C_{21}$ is offered to occasional driver 1 for performing task 2, i.e., $I_{\mathrm{o}} = \{ 2\}$, $J_{\mathrm{o}} = \{ 1\}$, and $A = \{(2,1)\}$. Occasional driver 2 drives directly from the store to their final destination, while occasional driver 1 must accept or refuse the task. With probability $P_{21}(C_{21})$, occasional driver 1 accepts the task (leading to total costs of $C_{21} + c_1$), in which case they perform the task and then drive to their final destination, see \cref{fig:example_b}. With probability $1 - P_{21}(C_{21})$, they refuse the task and drive directly to their final destination. In this case, an additional professional driver must be used to perform task 2, and the total costs are equal to $c_1 + c_2'$, see \cref{fig:example_c}. Therefore, the expected total cost of the solution equals $P_{21}(C_{21})(C_{21} + c_1) + (1 - P_{21}(C_{21}))(c_1 + c'_2)$. \paragraph{Assumptions} The above definition of the \probabbr assumes a static scenario with complete information, where the set of tasks $I$ and the set of available occasional drivers $J$ are known at the time of the assignment (e.g., by requiring the occasional drivers to sign up for a certain time slot in order to be offered tasks).
While the availability of an occasional driver is certain, their acceptance behavior is probabilistic. However, we assume that historical information combined with characteristics of tasks and occasional drivers (e.g., their destinations or historical acceptance behavior) allows the estimation of compensation-dependent acceptance probability functions $P_{ij}: \mathbb{R}_+ \mapsto [0,1]$ for each $i\in I$ and $j\in J$. More precisely, we assume that these acceptance probabilities are \emph{separable}, i.e., that the acceptance probability of driver $j\in J$ for task $i\in I$ does not depend on other tasks $i'\in I\setminus \{i\}$ or compensations offered to other drivers $j'\in J\setminus \{j\}$, nor on their acceptance decisions. We argue that this assumption is realistic in the considered static setting where occasional drivers receive individual offers and thus have little opportunity to exchange information. Notice that information exchange between occasional drivers after their acceptance or refusal, which may influence their behavior, can be incorporated into the acceptance probabilities before the next time slot considered in our static setting. Finally, we also assume that $P_{ij}(0) = 0$ for all $i\in I, j \in J$, i.e., no occasional driver works for free and, consequently, is never offered a task without compensation.
\subsection{Mixed integer nonlinear programming formulation\label{sec:mnlp}} Next, we introduce an MINLP formulation \eqref{eq:minlp} for the \probabbr that uses three sets of decision variables. Variables $y_i\in \{0,1\}$ indicate whether task $i\in I$ is performed by a professional driver, and variables $x_{ij}\in \{0,1\}$ indicate whether task $i\in I$ is offered to occasional driver $j\in J$. Variables $C_{ij}\ge 0$ represent the compensation offered to occasional driver $j\in J$ for performing task $i\in I$. \begin{subequations}\label{eq:minlp} \begin{align} \min \quad & \sum_{i\in I} \left( c_i y_i + \sum_{j\in J} \left( P_{ij}(C_{ij})C_{ij} + (1-P_{ij}(C_{ij})) c'_i \right) x_{ij} \right) \label{eq:minlp:obj} \\ \mbox{s.t.}\quad & \sum_{i\in I} x_{ij} \leq 1 & j\in J \label{eq:minlp:oc:assignment} \\ & \sum_{j\in J} x_{ij} + y_i = 1 & i\in I \label{eq:minlp:task:assignment} \\ & C_{ij} \le U_{ij} x_{ij} & i\in I,\ j\in J \label{eq:minlp:forcing} \\ & x_{ij} \in \{0,1\} & i\in I,\ j\in J \label{eq:minlp:x} \\ & y_i \in \{0,1\} & i\in I \label{eq:minlp:y} \\ & C_{ij} \ge 0 & i\in I,\ j\in J \label{eq:minlp:C} \end{align} \end{subequations} The objective function~\eqref{eq:minlp:obj} minimizes the expected total cost. Constraints~\eqref{eq:minlp:oc:assignment} ensure that at most one task is offered to each occasional driver. Equations~\eqref{eq:minlp:task:assignment} ensure that each task is either offered to an occasional driver or performed by a professional driver. Inequalities~\eqref{eq:minlp:forcing} ensure that the compensation offered is not greater than a given upper bound $U_{ij}$ if task $i\in I$ is offered to occasional driver $j\in J$. Note that the cost $c_i$ of a professional driver for task $i\in I$ can always be used as such an upper bound, since it will never be optimal to offer compensation that exceeds this value. Furthermore, constraints~\eqref{eq:minlp:forcing} also force $C_{ij}$ to zero if task $i\in I$ is not offered to occasional driver $j\in J$. Finally, constraints \eqref{eq:minlp:x}--\eqref{eq:minlp:C} define the domains of the variables.
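To make the expected cost structure concrete, the following minimal Python sketch (illustrative only; the function and variable names are ours and are not taken from any existing implementation) evaluates the expected total cost~\eqref{eq:obj} of a fixed solution, i.e., the objective value of~\eqref{eq:minlp:obj} for given assignments and compensations.
\begin{verbatim}
def expected_total_cost(c, c_pen, P, assignment, compensation):
    """Expected total cost of a solution.
    c[i]: company cost c_i; c_pen[i]: penalized cost c_i'.
    P[(i, j)]: acceptance probability function of occasional driver j for task i.
    assignment: dict mapping each offered task i to its occasional driver j.
    compensation: dict mapping (i, j) to the offered compensation C_ij."""
    total = 0.0
    for i in range(len(c)):
        if i in assignment:                   # task i is offered to a driver
            j = assignment[i]
            C = compensation[(i, j)]
            p = P[(i, j)](C)
            total += p * C + (1.0 - p) * c_pen[i]
        else:                                 # task i stays with the company fleet
            total += c[i]
    return total

# Toy usage (assumed numbers): task 0 stays with the fleet,
# task 1 is offered to occasional driver 0 for a compensation of 4.0.
c, c_pen = [10.0, 8.0], [12.0, 9.6]
P = {(1, 0): lambda C: min(0.3 + 0.1 * C, 1.0) if C > 0 else 0.0}
print(expected_total_cost(c, c_pen, P, {1: 0}, {(1, 0): 4.0}))  # 15.68
\end{verbatim}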
The difficulty of formulation~\eqref{eq:minlp} depends significantly on the considered probability function $P_{ij} : \mathbb{R}_+ \mapsto [0,1]$ of occasional drivers $j\in J$ for tasks $i\in I$. In the remainder of this section, we introduce two probability functions that can be used to model realistic acceptance behavior. Linearizations for these probability functions are discussed in \cref{sec:ilp}.
\subsection{Linear acceptance probability\label{sec:mnlp:linear}} One way to model the acceptance behavior of occasional drivers is to assume that their willingness to perform a task increases linearly with the compensation offered. \citet{campbell_incentive_2006} propose a linear approach to model the time slot selection behavior of customers in the context of attended home delivery services. We adopt their idea to model the acceptance behavior of occasional drivers. For each task-driver pair, we assume an initial acceptance probability that depends on the pair's characteristics. We also assume that this probability can be increased linearly by the amount of compensation offered, at a rate that also depends on the pair's characteristics. Such linear acceptance models avoid the oversimplified assumption of a given (task-specific) threshold above which an occasional driver will always perform a task. A linear model also has the advantage of having a small number of parameters that need to be estimated. We consider a linear model in which we assume that occasional driver $j\in J$ accepts task $i\in I$ with a given \emph{base probability} $\alpha_{ij}$, $0\le \alpha_{ij}\le 1$, if the compensation offered is greater than zero. Parameter $\beta_{ij}> 0$ specifies the rate at which the acceptance probability of occasional driver $j\in J$ increases for task $i\in I$ for a given compensation $C_{ij}$. Hence, the probability function is formally defined as \begin{equation} \label{eq:prob:linear} P_{ij}(C_{ij}) = \begin{cases} 0 & \mbox{if $C_{ij}=0$} \\ \min \{\alpha_{ij} + \beta_{ij} C_{ij}, 1 \} & \mbox{ otherwise } \end{cases}, \quad i\in I,\ j\in J. \end{equation} Considering function~\eqref{eq:prob:linear} in formulation~\eqref{eq:minlp}, we first observe that there always exists an optimal solution in which $C_{ij}>0$ if task $i\in I$ is offered to occasional driver $j\in J$, i.e., if $x_{ij}=1$. Since the acceptance probability for a compensation of zero is equal to zero, the alternative solution in which task $i$ is performed by a company driver is always at least as good (recall that we assumed that $c_i'\ge c_i$). Similarly, it is never optimal to offer a compensation that exceeds the cost of a professional driver for the same task, or that exceeds the minimum compensation that leads to an acceptance probability equal to one. Thus, $\min \{c_i, \frac{1 - \alpha_{ij}}{\beta_{ij}}\}$ is an upper bound on the compensation offered to occasional driver $j\in J$ for task $i\in I$ that can be used to (possibly) strengthen the upper bounds $U_{ij}$ in constraints~\eqref{eq:minlp:forcing}. Consequently, the variant of formulation~\eqref{eq:minlp} for the linear acceptance probability case is obtained by replacing the objective function~\eqref{eq:minlp:obj} by \begin{align} \min \quad & \sum_{i\in I} \left( c_i y_i + \sum_{j \in J} [(\alpha_{ij}+\beta_{ij} C_{ij}) C_{ij} + (1-\alpha_{ij}-\beta_{ij} C_{ij}) c'_i] x_{ij} \right).
\label{eq:mnlp:obj:linear-prob} \end{align}
\subsection{Logistic acceptance probability\label{sec:mnlp:logistic}} An alternative approach to modeling the acceptance behavior of occasional drivers that has been suggested in the literature is the use of logistic regression models. \citet{devari_crowdsourcing_2017} propose a logistic regression model to describe the acceptance behavior of occasional drivers for performing a delivery for their friends. Similarly, \citet{Le2019} propose a binary-logit model where the willingness to work as an occasional driver is considered a dependent variable. Logistic regression is particularly appealing when historical or experimental data are available on the compensation offered for tasks and the resulting acceptance decisions. Such data can be used to fit a logistic regression model that predicts the acceptance probabilities of occasional drivers. Using parameters $\gamma_{ij}$ and $\delta_{ij}>0$, logistic acceptance probabilities are modeled using functions \begin{equation} \label{eq:prob:logistic} P_{ij}(C_{ij}) = \begin{cases} 0 & \mbox{if $C_{ij} = 0$} \\ \frac{1}{1 + e^{ - (\gamma_{ij} + \delta_{ij} C_{ij})}} & \mbox{ otherwise} \end{cases}, \quad i\in I,\ j\in J. \end{equation} A nonlinear model for the \probabbr when considering logistic acceptance probabilities is obtained from \eqref{eq:minlp} if the objective function \eqref{eq:minlp:obj} is replaced by \begin{align} \min \quad & \sum_{i\in I} \left( c_i y_i + \sum_{j \in J} [(\frac{1}{1 + e^{ - (\gamma_{ij} + \delta_{ij} C_{ij})}}) C_{ij} + (1-\frac{1}{1 + e^{ - (\gamma_{ij} + \delta_{ij} C_{ij})}}) c'_i] x_{ij} \right). \label{eq:mnlp:obj:logistic-prob} \end{align}
\section{Mixed integer linear programming reformulations\label{sec:ilp}} A major advantage of the \probabbr over previously introduced models is its integration of assignment and compensation decisions, as well as its consideration of the uncertain acceptance behavior of occasional drivers. However, these aspects pose a significant challenge to the design of well-performing solution methods that derive optimal assignment and compensation decisions. In this section, we show how to overcome this challenge by exploiting some of the assumptions made in the \probabbr. We will show that, under certain conditions, the \probabbr can be solved optimally using a two-phase approach that first identifies optimal compensation values that are used as input to the second phase, in which assignment decisions are made. We also provide explicit formulas for optimal compensation values in the case of linear and logistic acceptance probabilities. These results allow us to reformulate MINLP~\eqref{eq:minlp} and its variants for the latter two acceptance probability functions as mixed-integer linear programs (MILPs). They also imply that the \probabbr can be solved in polynomial time whenever the first-phase problem of identifying optimal compensation values can be solved in polynomial time. The main property required for the results introduced below is that the compensation and acceptance decisions are \emph{separable}. This property holds for the \probabbr since the acceptance probability functions are assumed to be independent of each other (for each pair of task and occasional driver) and since the considered setting does not include constraints that limit the operator's choice, such as cardinality or budget limits on the tasks offered to occasional drivers.
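To build intuition for the reformulation results that follow, the Python sketch below (with assumed, purely illustrative parameter values that are not taken from the computational study) evaluates the per-offer expected cost $P_{ij}(C)\,C + (1-P_{ij}(C))\,c_i'$ under the linear and logistic acceptance probabilities and locates the cost-minimizing compensation by a simple grid search; the results below characterize this minimizer analytically.
\begin{verbatim}
import math

def p_linear(C, alpha=0.2, beta=0.05):
    # Linear acceptance probability (base probability alpha, rate beta); assumed values.
    return 0.0 if C == 0 else min(alpha + beta * C, 1.0)

def p_logistic(C, gamma=-3.0, delta=0.4):
    # Logistic acceptance probability with assumed parameters gamma and delta.
    return 0.0 if C == 0 else 1.0 / (1.0 + math.exp(-(gamma + delta * C)))

def best_compensation(P, c_pen, upper, steps=10000):
    # Grid search for the compensation minimizing P(C)*C + (1 - P(C))*c_pen.
    grid = (k * upper / steps for k in range(steps + 1))
    return min(grid, key=lambda C: P(C) * C + (1.0 - P(C)) * c_pen)

c_pen = 12.0  # penalized company cost c_i'
print(best_compensation(p_linear, c_pen, upper=c_pen))    # ~4.00 = c'/2 - alpha/(2 beta)
print(best_compensation(p_logistic, c_pen, upper=c_pen))  # ~7.24, matching the closed form below
\end{verbatim}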
\cref{th:compensation-values} reveals that the separability of compensation and acceptance decisions allows for the identification of optimal compensation values independent of the task allocation decisions. The latter is made more explicit in \cref{corr:compensations:optimal}. \begin{theorem}\label{th:compensation-values} Consider an arbitrary instance of the \probabbr. There exists an optimal solution for this instance in which the (optimal) compensation values $C_{ij}^*$ are equal to \begin{equation*} C^*_{ij} = \argmin_{C_{ij}\ge 0} ~ P_{ij}(C_{ij})(C_{ij}-c'_i) \end{equation*} if task $i\in I$ is offered to occasional driver $j\in J$. Furthermore, compensations $C_{ij}^*$ for tasks $i\in I$ that are not offered to occasional driver $j\in J$ can be set to an arbitrary (non-negative) value. \end{theorem} \begin{corollary}\label{corr:compensations:optimal} There exists an optimal solution for an arbitrary instance of the \probabbr that can be identified by solving formulation \eqref{eq:minlp} while setting compensation values equal to \begin{equation}\label{eq:compensations:optimal} C_{ij}^* = \argmin_{C_{ij}\ge 0} P_{ij}(C_{ij}) (C_{ij} - c'_i) \end{equation} for each task $i\in I$ and every occasional driver $j\in J$. \end{corollary} A major consequence of \cref{corr:compensations:optimal} is that optimal compensation values, which can be calculated according to Equation~\eqref{eq:compensations:optimal}, are independent of the other decisions, i.e., which tasks to assign to professional drivers and which tasks to offer to which occasional driver. Therefore, we can first calculate the compensation values and then identify optimal assignment decisions using the MILP formulation~\eqref{eq:milp} in which cost parameters $w^*_{ij}$ are set to \begin{equation} w^*_{ij} = P_{ij}(C_{ij}^*) C_{ij}^* + (1 - P_{ij}(C^*_{ij})) c'_i \label{eq:expected_cost} \end{equation} for each $i\in I$ and $j\in J$. As in formulation~\eqref{eq:minlp}, variables $y_i\in \{0,1\}$ indicate whether task $i\in I$ is performed by a professional driver and variables $x_{ij}\in \{0,1\}$ indicate whether it is offered to occasional driver $j\in J$. \begin{subequations}\label{eq:milp} \begin{align} \min \quad & \sum_{i\in I} c_i y_i + \sum_{i\in I} \sum_{j\in J} w^*_{ij} x_{ij} \label{eq:milp:obj} \\ \mbox{s.t.}\quad & \sum_{i\in I} x_{ij} \leq 1 & j\in J \label{eq:milp:oc:assignment} \\ & \sum_{j\in J} x_{ij} + y_i = 1 & i\in I \label{eq:milp:task:assignment} \\ & x_{ij} \in \{0,1\} & i\in I,\ j\in J \label{eq:milp:x} \\ & y_i \in \{0,1\} & i\in I \label{eq:milp:y} \end{align} \end{subequations} Constraints~\eqref{eq:milp:oc:assignment} and \eqref{eq:milp:task:assignment} ensure that each occasional driver is offered at most one task, and that each task is either offered to an occasional driver or assigned to a professional driver. Overall, it is easy to see that formulation~\eqref{eq:milp} is equivalent to a standard assignment problem (with special ``assignment'' variables $y_i$ for each $i\in I$). Consequently, \cref{corr:polytime} follows, since formulation~\eqref{eq:milp} can be solved in polynomial time due to its totally unimodular constraint matrix. \begin{corollary}\label{corr:polytime} The \probabbr can be solved in polynomial time if the compensation and acceptance decisions are separable and optimal compensation values $C^*_{ij}$ for tasks $i\in I$ and occasional drivers $j\in J$ can be identified in polynomial time. 
\end{corollary} \cref{prop:compenation-values:linear,prop:compenation-values:logistic} provide explicit formulas for optimal compensations in instances of the \probabbr if the acceptance behavior of occasional drivers is modeled using linear or logistic acceptance probability functions, respectively, as introduced in \cref{sec:apuc}. Thus, such instances satisfy the conditions of \cref{corr:polytime} and can be solved in polynomial time.
\begin{proposition}\label{prop:compenation-values:linear} Consider an instance of the \probabbr in which the acceptance behavior of occasional drivers is modeled using the linear acceptance probability function \begin{equation}\label{eq:prob:linear:proof} P_{ij}(C_{ij}) = \begin{cases} 0 & \mbox{if $C_{ij} =0$} \\ \min \{\alpha_{ij} + \beta_{ij} C_{ij}, 1 \} & \mbox{ otherwise } \end{cases}, \quad i\in I,\ j\in J. \end{equation} There exists an optimal solution for this instance with compensation values $C_{ij}^*=\frac{c'_i}{2} - \frac{\alpha_{ij}}{2 \beta_{ij}}$ for each task $i\in I$ and occasional driver $j\in J$. \end{proposition}
\begin{proposition}\label{prop:compenation-values:logistic} Consider an instance of the \probabbr in which the acceptance behavior of occasional drivers is modeled using the logistic acceptance probability function \begin{equation}\label{eq:prob:logistic:proof} P_{ij}(C_{ij}) = \begin{cases} 0 & \mbox{if $C_{ij} = 0$} \\ \frac{1}{1 + e^{ - (\gamma_{ij} + \delta_{ij} C_{ij})}} & \mbox{ otherwise} \end{cases}, \quad i\in I,\ j\in J. \end{equation} There exists an optimal solution to that instance with compensation values $C_{ij}^* = - \frac{W(e^{\gamma_{ij} + \delta_{ij} c'_i - 1})-\delta_{ij} c'_i + 1}{\delta_{ij}}$ for each task $i\in I$ and occasional driver $j\in J$, where $W(\cdot)$ is the Lambert $W$ function. \end{proposition}
\section{The \probabbr without separability\label{sec:non-separable}} In this section, we discuss a generalization of the \probabbr in which the decisions about which tasks to offer to occasional drivers and the corresponding compensation are not independent of each other. This occurs, for example, when the operator has to respect (strategic) considerations that limit the number of tasks offered to occasional drivers or the budget available for such offers. We assume that the relevant limitations can be modeled as a set of $L$ linear constraints involving assignment and compensation decisions. Using notation $a_{ij}^\ell$ and $b_{ij}^\ell$ for each $i\in I$, $j\in J$, and $\ell \in \{1, \dots, L\}$ to denote the coefficients associated with these two sets of variables and $B^\ell$ for the corresponding (resource) limits, the considered set of constraints is written as \begin{align} & \sum_{i\in I} \sum_{j\in J} ( a_{ij}^\ell x_{ij} + b_{ij}^\ell C_{ij}) \le B^\ell, & \ell\in \{1, \dots, L\}. \label{eq:non-separable} \end{align} A cardinality constraint on the total number of tasks offered to occasional drivers can, e.g., be realized by setting $a_{ij}^\ell=1$ and $b_{ij}^\ell=0$ for all $i\in I$ and $j\in J$ while $a_{ij}^\ell=0$ and $b_{ij}^\ell=1$ holds for a constraint limiting the overall budget offered to occasional drivers. \cref{prop:np} shows that the inclusion of \emph{non-separability constraints} \eqref{eq:non-separable} implies NP-hardness of the resulting variant of the \probabbr, which we call the \emph{\probname without Separability (\probnonsepabbr)}. \begin{proposition}\label{prop:np} The \probnonsepabbr is strongly NP-hard.
\end{proposition} While considering the above set of constraints related to operator choices, we still assume that the acceptance decisions of occasional drivers depend only on the task and compensation offered, i.e., the acceptance probability functions $P_{ij}(C_{ij})$ are still applicable. Thus, while the results concerning optimal compensation decisions from \cref{sec:ilp} and the two-phase solution approach are no longer applicable, the objective function~\eqref{eq:minlp:obj} can be decomposed for each $i\in I$ and $j\in J$, which is exploited in \cref{prop:piecewise}.
\begin{proposition}\label{prop:piecewise} The nonlinear objective function~\eqref{eq:minlp:obj} in formulation~\eqref{eq:minlp} can be replaced by \begin{align} & \sum_{i\in I} c_i y_i + \sum_{i\in I} \sum_{j\in J} (f_{ij}(C_{ij}) + g_{ij}(x_{ij})). \label{eq:minlp:obj:simplified} \end{align} Here, $f_{ij}(C_{ij})$ is a nonlinear function depending on $C_{ij}$, and $g_{ij}(x_{ij})$ is a linear function in $x_{ij}$. In the case of generic acceptance probability functions $P_{ij}(C_{ij})$, we have $f_{ij}(C_{ij})=P_{ij}(C_{ij})(C_{ij}-c_i')$ and $g_{ij}(x_{ij})=c_i' x_{ij}$. \end{proposition}
\cref{prop:piecewise} implies that a piecewise linear approximation of the nonlinear objective function~\eqref{eq:minlp:obj} can be derived using standard techniques. Thus, for each $i\in I$ and $j\in J$, we use a discrete set of possible compensation values $\{u_{ij}^k : k\in \{1, \dots, K\}\}$ such that $u_{ij}^1=0$, $u_{ij}^K=U_{ij}$, $u_{ij}^{k-1}<u_{ij}^k$, $k\in \{2, \dots, K\}$, together with non-negative variables $w_{ij}^k\ge 0$, $k\in \{1, \dots, K\}$. Since the nonlinear functions $f_{ij}(C_{ij})$ are not necessarily convex, formulation~\eqref{eq:milp:approx} also uses binary variables $v_{ij}^k\in \{0,1\}$ for each $i\in I$, $j\in J$, and $k\in \{1, \dots, K-1\}$. Here, $v_{ij}^k=1$ indicates that $u_{ij}^k \le C_{ij} \le u_{ij}^{k+1}$. \begin{subequations}\label{eq:milp:approx} \begin{align} \min\quad & \sum_{i\in I} c_i y_i + \sum_{i\in I} \sum_{j\in J} \left( \sum_{k=1}^K f_{ij}(u_{ij}^k) w_{ij}^k + g_{ij}(x_{ij})\right) \label{eq:milp:approx-obj} \\ \mbox{s.t.}\quad & \eqref{eq:minlp:oc:assignment}-\eqref{eq:minlp:y}, \eqref{eq:non-separable} \nonumber \\ &\sum_{k=1}^K u_{ij}^k w_{ij}^k \le U_{ij} x_{ij} & i\in I,\ j\in J \label{eq:milp:approx:U}\\ & w_{ij}^1 \le v_{ij}^1 & i\in I,\ j\in J \label{eq:milp:approx:wv:1}\\ & w_{ij}^k \le v_{ij}^{k-1} + v_{ij}^k & i\in I, j\in J,\ k\in \{2, \dots, K-1\} \label{eq:milp:approx:wv}\\ & w_{ij}^K \le v_{ij}^{K-1} & i\in I,\ j\in J \label{eq:milp:approx:wv:K}\\ & \sum_{k=1}^{K-1} v_{ij}^k = 1 & i\in I,\ j\in J \label{eq:milp:approx:v:convex}\\ & \sum_{k=1}^K w_{ij}^k = 1 & i\in I,\ j\in J \label{eq:milp:approx:w:convex}\\ & v_{ij}^k\in \{0,1\} & i\in I,\ j\in J,\ k\in \{1, \dots, K-1\} \label{eq:milp:approx:v} \\ & w_{ij}^k \ge 0 & i\in I,\ j\in J,\ k\in \{ 1,\dots, K \} \label{eq:milp:approx:w} \end{align} \end{subequations} The objective function~\eqref{eq:milp:approx-obj} is a piecewise linear approximation of the one introduced in \cref{prop:piecewise} using the discrete set of compensation values $u_{ij}^k$ and variables $w_{ij}^k$. Recall that $g_{ij}(x_{ij})$ is a linear function. As described in \cref{sec:mnlp}, constraints \eqref{eq:minlp:oc:assignment}-\eqref{eq:minlp:y} ensure that at most one task is offered to an occasional driver and that each task is either performed by a professional driver, or offered to an occasional driver.
They also define the domains of variables $x_{ij}$ and $y_i$ for tasks $i\in I$ and occasional drivers $j\in J$. Constraints~\eqref{eq:non-separable} have been introduced at the beginning of this section while inequalities~\eqref{eq:milp:approx:U} are constraints $C_{ij}\le U_{ij} x_{ij}$ rewritten using the identity $\sum_{k=1}^K u_{ij}^k w_{ij}^k = C_{ij}$. Finally, \eqref{eq:milp:approx:wv:1}--\eqref{eq:milp:approx:w} are standard constraints used to model piecewise linear approximations. \cref{prop:piecewise:linear,prop:piecewise:logistic} detail how formulation~\eqref{eq:milp:approx} can be modified when the acceptance behavior is modeled using a linear and logistic acceptance probability function, respectively. \begin{proposition}\label{prop:piecewise:linear} Consider an arbitrary instance of the \probnonsepabbr in which the acceptance behavior of occasional drivers is modeled using the linear probability function~\eqref{eq:prob:linear}. Then, \cref{prop:piecewise} applies for $f_{ij}(C_{ij})=\beta_{ij} C_{ij}^2 + (\alpha_{ij} - \beta_{ij}c_i') C_{ij}$ and $g_{ij}(x_{ij}) = c_i'(1 - \alpha_{ij}) x_{ij}$. Furthermore, variables $v_{ij}^k$, $i\in I$, $j\in J$, $k\in \{1, \dots, K-1\}$ and constraints \eqref{eq:milp:approx:wv:1}--\eqref{eq:milp:approx:v:convex} involving them are redundant in formulation~\eqref{eq:milp:approx} and can therefore be removed. \end{proposition} \begin{proposition}\label{prop:piecewise:logistic} Consider an arbitrary instance of the \probnonsepabbr in which the acceptance behavior of occasional drivers is modeled using the logistic probability function~\eqref{eq:prob:logistic}. Then, \cref{prop:piecewise} applies for $g_{ij}(x_{ij}) = c_i' x_{ij}$ and $f_{ij}(C_{ij}) = \begin{cases} 0 & \mbox{ if $C_{ij} = 0$} \\ \frac{C_{ij} - c_i'}{1 + e^{-\gamma_{ij} - \delta_{ij} C_{ij}}} & \mbox{ otherwise} \end{cases}$. \end{proposition} \section{Experimental setup\label{sec:experimental-setup}} In this section, we describe our benchmark instances, provide further details on the parameters of the considered acceptance probability functions, and introduce alternative compensation models inspired by the literature that we use to evaluate our approach. \subsection{Instances \label{sec:instances}} As a relatively new area of research, the field of crowdsourced delivery still lacks established benchmark libraries. Most existing studies use instances from the VRP literature (e.g., \citet{Archetti2016,Barbosa2022}) or randomly generated synthetic instances (e.g., \citet{arslan_crowdsourced_2019,dayarian_crowdshipping_2020}). Since the instances used in these works do not consider acceptance probabilities, we generate a new set of synthetic instances. To this end, we simulate an in-store delivery setting, where occasional drivers are assumed to be regular in-store customers who are willing to deliver a task en route. We generate destinations for occasional drivers and tasks uniformly at random in a $200\times 200$ plane. The coordinates of these locations are rounded to two decimal places. The store is assumed to be located in the center of the plane and coincides with the initial locations of all occasional drivers. We assume that the company cost $c_i$ for each task $i\in I$ is equal to the Euclidean distance between the task delivery point and the depot. Each instance is characterized by a combination of the following three parameters: \begin{enumerate} \item The number of occasional drivers $\odno \in \{50, 75, \dots, 150\}$. 
\item A penalty parameter $\pnlty \in \{0, 0.05, \dots, 0.25\}$ that defines the relative increase in company costs if an occasional driver rejects an offer, i.e., $c_i'=(1+\rho) c_i$, $\forall i\in I$. \item A parameter $\utlty \in \{0, 0.1, \dots, 1\}$ that affects the acceptance probabilities of occasional drivers and, in particular, their sensitivity towards making detours when performing deliveries. In the linear case, this parameter is used to define the base probability $\alpha_{ij}$ for each $i\in I$ and $j\in J$ via a weighted distance utility $\nicefrac{\utlty d_{j}}{(d_{i} + d_{ij})}$, i.e., $\alpha_{ij}=\nicefrac{\utlty d_{j}}{(d_{i} + d_{ij})}$. Here, $d_k$ is the Euclidean distance between the store and the destination of $k\in I\cup J$, while $d_{ij}$ is the Euclidean distance between the destinations of task $i\in I$ and driver $j\in J$. \end{enumerate} For each configuration, five instances with 100 tasks are generated using different random seeds, so our instance library consists of $1\,650$ instances. \subsection{Acceptance probability functions} As explained above, for the \emph{linear model} we set the base probability $\alpha_{ij}$ of probability function~\eqref{eq:prob:linear} equal to the weighted distance utility for task $i\in I$ and driver $j\in J$. Thus, the base probability decreases with increasing detour of occasional drivers (for constant $\utlty$) or decreasing $\utlty$ (for constant detour). The rate of increase $\beta_{ij}$ is determined by multiplying the detour for driver $j\in J$ when delivering task $i\in I$ with a value generated uniformly at random from the interval $[0.5,2]$. To simulate the \emph{logistic acceptance probabilities}, we first generate an artificial historical dataset containing $100\,000$ data points with simulated decisions of potential occasional drivers. Each data point consists of locations of a driver-task pair, $\alpha_{ij}$ and $\beta_{ij}$ values calculated as in the linear case, a compensation value drawn randomly from the uniform distribution over $[0,100\sqrt{2}]$ (the upper bound of the interval corresponds to the maximal company cost that incurs for serving a customer request), and an acceptance decision generated using a Bernoulli trial based on the resulting (linear) acceptance probability. Afterwards, this data set is used to train a logistic regression model in order to obtain parameters $\gamma_{ij}$ and $\delta_{ij}$ of the logistic acceptance probability function~\eqref{eq:prob:logistic}. The dependent variable considered in the logistic regression model is the acceptance decision, and the independent variables used to estimate it are the Euclidean distances between the store and the destinations of task and driver, the detour for delivering the task, the compensation, and the driver sensitivity. The driver sensitivity corresponds to $\beta_{ij}$ values used in the linear model. \subsection{Benchmark compensation models \label{sec:benchmark_schemes}} We assess the potential advantages of the individualized compensation scheme proposed in this paper to the established detour-based, distance-based and flat compensation schemes introduced in the literature. The three schemes are formally defined as follows: \begin{itemize} \item In the \textbf{detour-based scheme}, the offered compensation is proportional to the detour of driver $j\in J$ when delivering task $i\in I$, i.e., $C_{ij}=p_\mathrm{detour} \cdot (d_i + d_{ij} - d_{j})$. 
\item The \textbf{distance-based scheme} compensates proportional to the distance between the central store and the destination of task $i\in I$, i.e., $C_{ij} = p_\mathrm{distance} \cdot d_i$ holds for all $j\in J$. \item The \textbf{flat scheme} offers a constant compensation for all tasks and drivers, i.e., $C_{ij} = p_\mathrm{flat}$ for all $i\in I$ and $j\in J$. \end{itemize} Each of the three schemes can be fully described by a single parameter (i.e., $p_\mathrm{detour}$, $p_\mathrm{distance}$, $p_\mathrm{flat}$). Naturally, the quality of the compensation schemes may be very sensitive to the values of these parameters. In order to assess the advantages of individual compensations over these schemes in a fair way, we aim to find (close-to) optimal parameter values for each of the three benchmark schemes. The compensations induced by such a parameter value replace the optimal compensations of Equation~\eqref{eq:compensations:optimal} in the first phase of our two-phase solution approach, and optimal assignments under the corresponding scheme are then determined in the second phase. To determine the best compensation parameter for any given instance, we perform a heuristic search procedure in the interval $[0,p_\mathrm{max}]$, where $p_\mathrm{max}$ is the maximum value to which any optimal compensation $C_{ij}^*$ from the individualized scheme (identified using Equation~\eqref{eq:compensations:optimal}) leads in the considered scheme. For example, in the distance-based scheme, the maximum can be calculated by $p_\mathrm{distance} = \max_{i\in I, j\in J} \{\nicefrac{C_{ij}^*}{d_i}\}$. Next, we identify optimal assignments for all compensation values $p\in \{ \nicefrac{\ell \cdot p_\mathrm{max}}{25}\mid \ell=0, 1, \dots, 25\}$ by solving formulation~\eqref{eq:milp}, where the expected costs $w^*_{ij}$ are computed via Equation~\eqref{eq:expected_cost} from the compensations induced by the parameter value $p$ for all $i\in I$ and $j\in J$. Let $\ell^*\in \{0, 1, \dots, 25\}$ be a value yielding minimum expected costs for the resulting assignment. We use the golden section search within the interval $[\nicefrac{\ell^- \cdot p_{\mathrm{max}}}{25},\nicefrac{\ell^+ \cdot p_{\mathrm{max}}}{25}]$, $\ell^-= \max\{\ell^*-1,0\}$, $\ell^+=\min\{\ell^*+1,25\}$ to find a parameter value leading to a local minimum of the expected costs using formulation~\eqref{eq:milp}. This value is then used for the comparison of the different schemes in the following section.
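Each evaluation within this search only requires solving formulation~\eqref{eq:milp}, which reduces to a rectangular assignment problem. The following minimal Python sketch (not the implementation used in our experiments; all names and the toy numbers are illustrative) shows this reduction using SciPy: every task receives one additional dummy column whose cost equals $c_i$, representing fulfillment by a professional driver, while $w[i][j]$ holds the expected costs $w^*_{ij}$ of Equation~\eqref{eq:expected_cost}.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_assignment(c, w):
    # c[i]: company cost of task i; w[i][j]: expected cost w*_ij of offering
    # task i to occasional driver j (one row of w per task).
    n, m = len(c), len(w[0])
    big = 1e12                   # blocks task i from using another task's dummy column
    dummy = np.full((n, n), big)
    np.fill_diagonal(dummy, c)   # column m + i means "task i stays with the company fleet"
    cost = np.hstack([np.asarray(w, dtype=float), dummy])
    rows, cols = linear_sum_assignment(cost)
    offers = {i: j for i, j in zip(rows, cols) if j < m}  # task -> occasional driver
    return offers, float(cost[rows, cols].sum())

# Toy usage: 3 tasks, 2 occasional drivers.
print(solve_assignment([10.0, 8.0, 6.0], [[7.0, 9.5], [8.5, 5.0], [6.5, 6.2]]))
\end{verbatim}
Since the constraint matrix of~\eqref{eq:milp} is totally unimodular, any LP solver would also return an integral optimum; the rectangular assignment solver above is simply a convenient off-the-shelf choice.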
\section{Computational study\label{sec:results}} In this section, we discuss the results of our computational study. We analyze the performance of the four considered compensation schemes for linear and logistic acceptance probability functions in \cref{sec:linearPerformance} and \cref{sec:logisticPerformance}, respectively. \cref{sec:sensitivity} analyzes the sensitivity of the results in the logistic model on the availability of occasional drivers, their willingness to take detours, and the penalty for rejecting tasks. We base our analysis of the compensation schemes on their respective performance in terms of economic benefit to the company, environmental impact, and satisfaction of the occasional drivers with the respective compensation schemes. We quantify these three performance indicators as follows: \begin{itemize} \item To quantify the economic benefit, we consider the \textbf{expected total cost}, which is directly related to the objective function values of the \probabbr. \item To quantify the environmental benefit, we calculate the \textbf{expected total distance} required to fulfill customer requests by professional and occasional drivers. This distance is an indicator of the traffic generated by deliveries and the associated emissions.
For solution $A\subset I_o\times J_o$ represented as tasks $I_o\subseteq I$ offered to occasional drivers $J_o\subseteq J$ for compensations $C_{ij}$, $(i,j)\in A$, this is equal to $\sum_{i\in I\setminus I_o} 2 d_i + \sum_{(i,j)\in A} (P_{ij}(C_{ij}) (d_i + d_{ij} - d_j) + (1-P_{ij}(C_{ij})) 2 d_i )$. Note that the second term is equal to the (expected) detour made by additional drivers, since we want to measure the (additional) traffic resulting from satisfying customer requests. \item To quantify the satisfaction of occasional drivers, we calculate the \textbf{mean acceptance rate} $\sum_{(i,j)\in A} \nicefrac{P_{ij}(C_{ij})}{|A|}$. A high value indicates that occasional drivers are likely to receive offers that they are satisfied with as compensation for completing the task. To put this indicator in perspective, we evaluate it together with the \textbf{fraction of tasks offered}. A high value for both indicators represents a compensation scheme in which occasional drivers can expect to receive a high number of offers, and that these offers are generally acceptable. \end{itemize} Our conclusions regarding comparisons between the different compensation schemes are supported by paired t-tests performed for each instance and reported at $\alpha=0.05$ level in the remainder of this section. Full results are provided in the electronic companion, which includes detailed results for each instance and performance indicator. Runtimes are not reported as they are less than one second for each instance. \subsection{Performance comparison for linear acceptance probabilities \label{sec:linearPerformance}} \paragraph{Expected total cost.} We focus first on analyzing the expected total cost. \cref{fig:linearCost:boxplot} shows the relative savings for this criterion for all compensation schemes compared to the case where all tasks are allocated to the company fleet. From this figure, we conclude that the individualized compensation scheme leads to cost savings with a median of over $75\%$. The other schemes lead to significantly smaller median savings of around $65\%$. This figure also indicates that the detour-based scheme performs the worst, while the results of the flat and distance-based schemes appear to be similar. More insight into the relative performance of the four compensation schemes in terms of savings in expected total cost (in the linear case) can be obtained from \cref{fig:linearCost:performance}. For each scheme, this figure reports the fraction of instances in which the relative loss in solution quality (in percent) is at most a given value compared to the best performing scheme. \begin{figure} \caption{Expected total cost for linear acceptance probabilities.} \label{fig:linearCost} \end{figure} First, we observe that the results obtained by using the individualized compensation scheme are strictly better than those of any other scheme in all instances considered. This follows because the relative loss of solution quality of the individualized scheme is zero in $100\%$ of the instances. For all other schemes, the relative loss is at least $2\%$ for each instance. \cref{fig:linearCost:performance} also illustrates that the individualized scheme clearly outperforms all other schemes, that the flat scheme performs slightly better than the distance scheme, and that both of them clearly outperform the detour scheme. Note that this relative order of the schemes is rather unexpected. 
Intuitively, one would expect the distance and detour schemes to outperform the flat scheme, since the detour and distance amounts directly affect the acceptance probabilities. The results show that for each experimental configuration, the intervals of the relative mean difference in expected total cost between the individualized and benchmark schemes are $(4.32\%,79.51\%)$, $(6.10\%,61.28\%)$, and $(5.02\%,66.78\%)$ for the detour, distance, and flat schemes, respectively. Paired t-tests confirm that the differences between the compensation schemes are statistically significant in all instances.
\paragraph{Expected total distance.} Analogous to the plots for total expected cost, \cref{fig:linearDistance:boxplot} shows the relative savings in terms of total distance compared to the case where all tasks are performed by company vehicles, while \cref{fig:linearDistance:performance} visualizes the percentage of instances for which the losses of a particular scheme relative to the best performing scheme (per instance) are less than or equal to a certain value. From these figures, we observe median savings of around $80\%$ for the three benchmark schemes and more than $85\%$ for the individualized scheme. The individualized scheme clearly outperforms the alternatives and performs best in every instance. In terms of total distance, the performance of the flat scheme comes closest to the individualized scheme, and the distance scheme slightly outperforms the detour scheme. The mean differences between the individualized and the benchmark schemes for each parameter setting lie in the intervals $(3.67\%,169.88\%)$, $(5.01\%,192.01\%)$, and $(3.39\%,185.73\%)$ for the detour, distance and flat schemes, respectively. The results of the paired t-tests show that the differences between the individualized and each benchmark scheme in terms of expected total distance are statistically significant.
\begin{figure} \caption{Expected total distance for linear acceptance probabilities.} \label{fig:linearDistance} \end{figure}
\paragraph{Mean acceptance rate.} \cref{fig:linearAcceptance:tasksoffered} shows the fractions of tasks offered to occasional drivers by the different compensation schemes, while \cref{fig:linearAcceptance:meanacceptance} shows the mean acceptance rates of these tasks per instance. We observe that all schemes offer the majority of tasks to occasional drivers while achieving high expected acceptance rates. As for the previous two criteria, the individualized compensation scheme clearly outperforms the benchmark schemes as it achieves a higher acceptance rate while simultaneously offering more tasks. Among the benchmark schemes, the flat scheme shows the best performance, followed by the detour and distance schemes. The average differences in mean acceptance rates between the individualized and the benchmark schemes per experimental configuration take values in $(0.88\%,8.10\%)$, $(1.18\%,16.04\%)$ and $(1.24\%,8.10\%)$ for the detour, distance, and flat schemes, respectively. While the differences between the individualized scheme and the distance or flat scheme are statistically significant in all instances, this is not true for 49 out of 330 cases when comparing the individualized scheme and the detour scheme. These exceptions arise for instances with a relatively small number of occasional drivers, i.e., when $|J|\in \{50,75\}$. As their availability increases, the differences between the two schemes increase.
\begin{figure} \caption{Fractions of offered tasks and expected acceptance rates for linear acceptance probabilities.} \label{fig:linearAcceptance} \end{figure}
Overall, we conclude that the individualized scheme clearly outperforms all alternatives considered in each of the three evaluation criteria. Thus, the use of the individualized scheme has the greatest potential to increase the satisfaction of occasional drivers and to reduce the size of a company fleet, while simultaneously reducing the expected total cost and distance.
\subsection{Performance comparison for logistic acceptance probabilities\label{sec:logisticPerformance}}
\begin{figure} \caption{Expected total cost for logistic acceptance probabilities.} \label{fig:logisticCost} \end{figure}
\paragraph{Expected total cost.} \cref{fig:logisticCost:boxplot} shows the relative cost savings of all four compensation schemes compared to the setting with no occasional drivers for logistic acceptance probability functions. Consistent with the case of the linear acceptance function, the individualized compensation scheme outperforms all benchmark schemes with median cost savings of more than $35\%$. \cref{fig:logisticCost:performance} shows that the individualized scheme outperforms all other schemes in every instance. For each parameter combination, the mean differences between the individualized scheme and the detour-based, distance-based, and flat schemes take values in the intervals $(0.27\%,6.73\%)$, $(1.31\%,8.99\%)$ and $(1.41\%,13.60\%)$, respectively. The statistical significance of the differences between the individualized and the benchmark schemes is confirmed for each setting considered by paired t-tests. \cref{fig:logisticCost} also reveals that the detour-based scheme comes closest to the individualized scheme and outperforms the other two schemes. The flat compensation scheme shows the worst performance, which is in contrast to the case of linear acceptance probability functions, where the performance order of the three benchmark schemes is reversed.
\paragraph{Expected total distance.} Similar to above, Figures~\ref{fig:logisticDistance:boxplot} and \ref{fig:logisticDistance:performance} show the relative savings (compared to the case of assigning all tasks to the company fleet) and relative losses compared to the best performing method (per instance) in terms of expected total distance. The individualized scheme leads to median savings of more than $55\%$, while the other schemes perform worse. Consistent with the case of expected total costs, we also observe that the relative savings are somewhat smaller than those observed for the case of linear acceptance probabilities, cf.\ \cref{fig:linearDistance:boxplot}. Focusing on the differences between the different compensation schemes in each experimental configuration, we see that the mean differences between the individualized scheme and the detour-based, distance-based, and flat schemes lie in the intervals $(1.13\%,40.19\%)$, $(3.20\%,44.37\%)$, and $(3.38\%,73.21\%)$, respectively. The differences are statistically significant in all cases and for each benchmark scheme.
\begin{figure} \caption{Expected total distance for logistic acceptance probabilities.} \label{fig:logisticDistance} \end{figure}

\paragraph{Mean acceptance rate.} Finally, \cref{fig:logisticAcceptance:tasksoffered,fig:logisticAcceptance:meanacceptance} show that the individualized scheme is superior to the benchmark schemes, as it outsources more tasks than the other schemes and achieves the highest mean acceptance rate. The median fraction of tasks offered to occasional drivers is around $90\%$, and the median expected acceptance rate (per instance) of these offers is close to $70\%$. In comparison, the median fraction of tasks offered by the flat compensation scheme is only about $75\%$, and the median acceptance rate per instance is only slightly higher than $60\%$. Examining the differences in each configuration, we see that the mean differences in acceptance rates between the individualized scheme and the detour, distance, and flat schemes lie in the intervals $(1.04\%,11.75\%)$, $(1.30\%,14.13\%)$, and $(3.11\%,24.68\%)$, respectively. While the differences between the individualized and the flat schemes are statistically significant in all cases, this is not true for the distance scheme in 2 out of $330$ configurations ($|J|=150$, $\rho\in \{20,50\}$, $\mu=0$), in which occasional drivers are mainly concerned with distance and are not sensitive to the detour. When comparing the detour-based and the individualized scheme, we observe that the differences are not statistically significant in $76$ out of $330$ configurations (all of which have $|J|\in \{50,75,100\}$). This shows that the individualized scheme significantly outperforms the detour-based scheme in environments characterized by an oversupply of occasional drivers.
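For completeness, the two offer-related indicators can be computed from a solution as sketched below (an illustrative sketch; the data layout and names are assumptions, not the implementation used in the study).
\begin{verbatim}
# Illustrative sketch (not the code used in the study): fraction of tasks
# offered and mean acceptance rate for one instance, given the offers made,
# the chosen compensations, and logistic acceptance probabilities
#   P_ij(C) = 1 / (1 + exp(-(gamma_ij + delta_ij * C))).
import math

def offer_indicators(offers, compensation, gamma, delta, n_tasks):
    """offers: iterable of (i, j) pairs; the other arguments are dicts keyed by (i, j)."""
    probs = [1.0 / (1.0 + math.exp(-(gamma[o] + delta[o] * compensation[o])))
             for o in offers]
    fraction_offered = len(probs) / n_tasks
    mean_acceptance = sum(probs) / len(probs) if probs else 0.0
    return fraction_offered, mean_acceptance

# Tiny made-up example: two tasks, one of which is offered to driver 4.
offers = [(1, 4)]
comp, gam, dlt = {(1, 4): 12.0}, {(1, 4): -1.0}, {(1, 4): 0.1}
print(offer_indicators(offers, comp, gam, dlt, n_tasks=2))
\end{verbatim}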
\begin{figure} \caption{Fractions of offered tasks and expected acceptance rates for logistic acceptance probabilities.} \label{fig:logisticAcceptance} \end{figure}

Overall, we conclude that the individualized scheme clearly outperforms all alternatives considered with respect to each of the three evaluation criteria, for both linear and logistic acceptance probability functions.
\subsection{Sensitivity analysis}
\label{sec:sensitivity}

\begin{figure} \caption{Expected total cost.} \label{fig:logistic_cost_O} \caption{Expected total distance.} \label{fig:logistic_dist_O} \caption{Fraction of tasks offered.} \label{fig:logistic_frac_acc_O} \caption{Mean acceptance rate.} \label{fig:logistic_mean_acc_O} \caption{Sensitivity to availability} \label{fig:sens_od} \end{figure}

In the following, we analyze the sensitivity of the three performance indicators with regard to changes in the availability of occasional drivers $|J|$, the penalty for rejected tasks $\pnlty$, and the parameter $\utlty$ that affects the willingness to make detours. For the sake of brevity, and since all effects can be readily demonstrated within the logistic acceptance probability model, we restrict the following discussion to that model. The full set of results can be found in the electronic companion.

\paragraph{Sensitivity towards availability of occasional drivers.} \cref{fig:sens_od} shows the savings in expected total cost and distance compared to the scenario with no occasional drivers, as well as the fraction of tasks offered and the mean acceptance rates, for different numbers of available occasional drivers $|J|$. We observe that all performance indicators tend to improve with increasing availability of occasional drivers, which is to be expected since a larger pool gives the company more options to select cost-efficient occasional drivers. \cref{fig:logistic_frac_acc_O} nicely illustrates that the number of tasks offered is always close to the maximum number possible (i.e., the minimum of $|I|$ and $|J|$), while the mean acceptance rate always hovers around $65\%$. With a higher number of occasional drivers (i.e., with $|J| > |I|$), it becomes more likely to find occasional drivers who are willing to accept the offers, and the mean acceptance rate increases (cf.\ \cref{fig:logistic_mean_acc_O}).

\begin{figure} \caption{Expected total cost.} \label{fig:logistic_cost_P} \caption{Expected total distance.} \label{fig:logistic_dist_P} \caption{Fraction of tasks offered.} \label{fig:logistic_frac_acc_P} \caption{Mean acceptance rate.} \label{fig:logistic_mean_acc_P} \caption{Sensitivity to penalty} \label{fig:sens_pen} \end{figure}

\paragraph{Sensitivity towards the penalty parameter $\pnlty$.} \cref{fig:sens_pen} illustrates the effect of increasing the penalty for rejected offers. As expected, an increase in the penalty has a significant impact on the individualized scheme, leading to fewer tasks being offered to the occasional drivers on average, cf.\ \cref{fig:logistic_frac_acc_P}. At the same time, the mean acceptance rate of offers increases (cf.\ \cref{fig:logistic_mean_acc_P}), indicating a more careful selection of occasional drivers and a stronger focus on making good (acceptable) offers. The decrease in the number of occasional drivers used and the simultaneous increase in the penalty also lead to smaller savings in terms of total expected costs. However, as can be seen in \cref{fig:logistic_cost_P}, the effects are much less severe than one might expect, with savings decreasing from $40\%$ in the (unrealistic) case that no penalty is incurred to $33\%$ in the case that a rejected offer leads to a $25\%$ increase in operational costs.
One explanation is that the individualized compensation scheme allows good (acceptable) offers to be made to occasional drivers for tasks that would otherwise be very costly for the company, thus allowing the company to outsource them efficiently. This notion is supported by \cref{fig:logistic_dist_P}, which depicts the savings in terms of expected total distance. The value remains almost constant for all values of $\pnlty$, showing that even with a high penalty, the model prioritizes assigning tasks that would result in a high distance driven (and cost) for the company fleet, while selecting occasional drivers who only have to make a small detour.

\paragraph{Sensitivity towards parameter $\utlty$.} A larger value of $\utlty$ increases the acceptance probability because it increases the willingness of occasional drivers to make a detour. As can be seen in \cref{fig:sens_util}, this increases the savings in terms of expected total cost and distance (cf.\ \cref{fig:logistic_cost_R,fig:logistic_dist_R}); more tasks are offered to occasional drivers, and the acceptance rates also increase, cf.\ \cref{fig:logistic_frac_acc_R,fig:logistic_mean_acc_R}. While this effect was expected, it is worth noting that the individualized scheme generates significant cost and distance savings for all parameter values considered. Furthermore, the high fractions of tasks offered and the high mean acceptance rates indicate that it allows the company to outsource a large proportion of tasks to occasional drivers, largely independently of the concrete value of parameter $\utlty$.

\begin{figure} \caption{Expected total cost.} \label{fig:logistic_cost_R} \caption{Expected total distance.} \label{fig:logistic_dist_R} \caption{Fraction of tasks offered.} \label{fig:logistic_frac_acc_R} \caption{Mean acceptance rate.} \label{fig:logistic_mean_acc_R} \caption{Sensitivity to distance utility weight} \label{fig:sens_util} \end{figure}

\section{Conclusions and future work}
\label{sec:conclusion}
In this paper, we have introduced the \probname, which integrates task assignment and compensation decisions while explicitly accounting for the probability with which occasional drivers will accept these assignments. To this end, we have proposed an MINLP formulation for generic acceptance probability functions. For the cases of linear and logistic acceptance probability functions, we introduced exact linear reformulations that can be solved in polynomial time. We conducted an extensive computational study comparing our approach with established benchmark compensation schemes from the literature. The results of our study show that the use of crowdshippers can lead to substantial economic and environmental benefits, and that our approach outperforms the compensation schemes from the literature in terms of expected total cost and expected total distance. Our model allows operators to offer individualized compensations that result in a very high rate of accepted offers while achieving higher savings in total expected cost and distance than the other schemes; our results indicate that this is due to the additional flexibility the model provides. This is an important observation from the perspective of both the operator and the occasional driver. It shows that more occasional drivers can be utilized, which may result in a reduction in the size of the dedicated fleet in the long run.
Moreover, a high acceptance rate indicates that the offers are persuasive to occasional drivers, which signals higher driver satisfaction and may therefore result in higher engagement and availability of occasional drivers in the long run. The sensitivity analysis shows that the proposed model is robust with regard to variations in the availability of occasional drivers and adapts very well to changes in the magnitude of the penalty incurred for rejected offers as well as to changes in the utility functions of occasional drivers. Future research can be devoted to studying dynamic versions of the problem, where tasks and available drivers are unknown in advance and become available over time. Another possible avenue is to enrich the problem by considering additional features such as routing decisions for company drivers and offering bundles of tasks to occasional drivers. \ifTR \appendix This \ifTR appendix \else e-companion \fi is structured as follows: \cref{appendix:proofs} contains proofs of theorems and propositions stated in the article and \cref{appendix:results} contains additional and more detailed computational results. \else \begin{APPENDIX}{Proofs of theorems and propositions} \fi \ifTR \section{Proofs\label{appendix:proofs}} \fi \appendixproof{\cref{th:compensation-values}} { We first note that the second statement of the theorem holds, since compensation values $C_{ij}$ are irrelevant if task $i\in I$ is not offered to occasional driver $j\in J$, cf.\ objective function~\eqref{eq:obj}. Assume now that task $i\in I$ is offered to occasional driver $j\in J$. We observe that an optimal compensation value $C_{ij}^*$ is equal to \[ C_{ij}^* = \argmin_{C_{ij}\ge 0} P_{ij}(C_{ij}) (C_{ij} - c'_i) + c_i' \] since this value minimizes the (relevant terms of) objective function~\eqref{eq:minlp} and there are no dependencies on other tasks and drivers. The theorem follows because the last term of this expression, $c_i'$, is constant and can therefore be neglected. } \appendixproof{\cref{prop:compenation-values:linear}} { Recall that the optimal compensation values are calculated according to Equation~\eqref{eq:compensations:optimal} given in \cref{corr:compensations:optimal}. We observe that the minimum of Equation~\eqref{eq:compensations:optimal} when plugging in probability function~\eqref{eq:prob:linear:proof} is attained for $C_{ij}\in ]0,c_i']$, in which case its value is less than or equal to zero, while it is equal to zero if $C_{ij}=0$. Similarly, we have $C_{ij}\le \frac{1-\alpha_{ij}}{\beta_{ij}}$ since $P_{ij}(\frac{1-\alpha_{ij}}{\beta_{ij}})=1$ and further increasing the compensation would increase the value of $P_{ij}(C_{ij})(C_{ij}-c_i')$. Thus, an optimal compensation value for task $i\in I$ when offered to occasional driver $j\in J$ must be a minimizer of $(\alpha_{ij} + \beta_{ij} C_{ij})(C_{ij}-c_i') = \beta_{ij} C_{ij}^2 + (\alpha_{ij} - \beta_{ij} c_i') C_{ij} - \alpha_{ij} c_i'$. By taking the first derivative (and since the second derivative is non-negative), the minimizer $C_{ij}^*$ is obtained as \[ 2 \beta_{ij} C_{ij}^* + \alpha_{ij} - \beta_{ij}c_i' = 0 \Leftrightarrow C_{ij}^* = \frac{c_i'}{2} - \frac{\alpha_{ij}}{2 \beta_{ij}}. \] }
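As a small numerical illustration of this closed form (the parameter values below are chosen purely for illustration), consider $\alpha_{ij}=0.2$, $\beta_{ij}=0.01$ and $c_i'=60$. Then \[ C_{ij}^* = \frac{c_i'}{2} - \frac{\alpha_{ij}}{2 \beta_{ij}} = 30 - 10 = 20 \qquad \text{and} \qquad P_{ij}(C_{ij}^*) = 0.4, \] and the expected cost of the offer is $P_{ij}(C_{ij}^*)\, C_{ij}^* + (1-P_{ij}(C_{ij}^*))\, c_i' = 0.4 \cdot 20 + 0.6 \cdot 60 = 44$, which is indeed smaller than the value of $45$ obtained for, e.g., $C_{ij}=10$ or $C_{ij}=30$.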
\appendixproof{\cref{prop:compenation-values:logistic}} { We first observe that optimal compensation values lie in the interval $]0,c_i']$ since Equation~\eqref{eq:compensations:optimal} evaluates to a non-positive value in this interval. Thus, an optimal compensation value for task $i\in I$ when offered to occasional driver $j\in J$ must be a minimizer of $\frac{C_{ij} - c_i'}{1 + e^{ - (\gamma_{ij} + \delta_{ij} C_{ij})}}$, whose first derivative $\frac{e^{-(\gamma_{ij}+\delta_{ij} C_{ij})}\left( 1+(C_{ij}-c_i')\delta_{ij}+e^{\gamma_{ij} + \delta_{ij} C_{ij}}\right)}{\left(1+e^{-(\gamma_{ij} + \delta_{ij} C_{ij})}\right)^2}$ is equal to zero if and only if the (strictly monotonically increasing) function $1+(C_{ij}-c_i')\delta_{ij}+e^{\gamma_{ij} + \delta_{ij} C_{ij}}$ is equal to zero. We use the following basic manipulations \begin{align*} 1+(C_{ij}^*-c_i')\delta_{ij}+e^{\gamma_{ij} + \delta_{ij} C_{ij}^*} = 0 & \quad \Leftrightarrow \quad e^{\gamma_{ij} + \delta_{ij} C_{ij}^*} = c_i' \delta_{ij} - 1 - \delta_{ij} C_{ij}^* \quad \Leftrightarrow \\ \Leftrightarrow \quad (c_i' \delta_{ij} - 1 - \delta_{ij} C_{ij}^*) e^{-\gamma_{ij} - \delta_{ij} C_{ij}^*} = 1 & \quad \Leftrightarrow \quad (c_i' \delta_{ij} - 1 - \delta_{ij} C_{ij}^*) e^{c_i' \delta_{ij} - 1 - \delta_{ij} C_{ij}^*} = e^{\gamma_{ij} + c_i'\delta_{ij} - 1} \end{align*} that hold for optimal compensation values $C_{ij}^*$ and observe that the last equation has the form $z e^z = b$ for $z=c_i' \delta_{ij} - 1 - \delta_{ij} C_{ij}^*$ and $b=e^{\gamma_{ij} + c_i'\delta_{ij} - 1}$. It can therefore be solved using the real-valued Lambert $W$ function, which satisfies $W(z e^z)=z$, and it follows that \begin{align*} c_i' \delta_{ij} - 1 - \delta_{ij} C_{ij}^* = W(e^{\gamma_{ij} + c_i'\delta_{ij} - 1}) \quad \Leftrightarrow \quad C_{ij}^* = -\frac{W(e^{\gamma_{ij} + c_i'\delta_{ij} - 1}) - c_i' \delta_{ij} + 1}{\delta_{ij}}. \end{align*} } \appendixproof{\cref{prop:np}} { Obviously, the problem is in NP, since any assignment and compliance with the constraints can be verified in polynomial time (given that there is only a polynomial number of constraints of type \eqref{eq:non-separable}). We show the proposition by reduction from the multidimensional knapsack problem. Let $A$ be an instance of the multidimensional knapsack problem with $d$ dimensions and items $I^K$, where item $i \in I^K$ has value $v^K_{i}$. Let $C^K_\ell$ be the capacity of the knapsack in dimension $\ell = 1,\ldots,d$ and $w^K_{i\ell}$ be the weight of item $i \in I^K$ in dimension $\ell = 1,\ldots,d$. Design an instance $A'$ of \probnonsepabbr with tasks $I$ as follows: \begin{itemize} \item Let $\omega = \prod_{i \in I^K} \min\left\{v^K_{i},2\right\}$ be a large constant. \item With each item $i \in I^K$ identify a task $i \in I$ with $c_i = c'_i = \omega v^K_i$. \item Let $J$ be the set of occasional drivers with $|J| \ge |I|$. For each $j \in J$ set the parameters of the probability function such that $j$ accepts a task $i\in I$ for any $C_{ij} > 0$. \item For $\ell = 1,\ldots,d$ introduce the non-separability constraint \begin{align} \sum_{i\in I} \sum_{j\in J} (w^K_{i\ell} x_{ij}) \le C^K_\ell. \label{eq:non-separable_np} \end{align} \end{itemize} Each constraint \eqref{eq:non-separable_np} in $A'$ models the knapsack constraint for dimension $\ell$ in $A$. Then, there exists a solution to $A$ with a value of at least $V$ if and only if there exists an assignment of tasks in $A'$ with an expected cost of at most $\tilde{C} = \left(\sum_{i \in I^K} v^K_{i} - V\right) \omega + 1$. \begin{itemize} \item[$\Rightarrow$] Let $i_1,\ldots,i_n \in I^K$ be a feasible solution for $A$ with $V = \sum_{\ell=1}^{n} v^K_{i_\ell}$, whose items are identified with tasks $1,\ldots,n$ in $A'$. Take $n$ occasional drivers in $J$, w.l.o.g.
$1,\ldots,n$, and offer task $i$ to occasional driver $i$ at a compensation of $C_{ii} = \frac{1}{n}$, for $i=1,\ldots,n$. The construction of the probability function ensures that the occasional drivers will accept these tasks (with a probability of one). Due to the construction of the non-separability constraints, and since the solution is feasible for $A$, assigning the tasks to these occasional drivers is also feasible in $A'$. Since no other tasks are offered to the occasional drivers, the total expected cost is $\tilde{C}$. \item[$\Leftarrow$] Consider a feasible solution to $A'$ with a cost of $C' \leq \tilde{C}$ in which, w.l.o.g., tasks $1,\ldots,n$ are allocated to occasional drivers. Since the occasional drivers require a compensation strictly greater than zero to perform a task, it holds that \[ \sum_{j \in J}\sum_{i \in I} v^K_i x_{ij} \geq V \] since otherwise the total expected cost would be higher than $\tilde{C}$. Then, the corresponding items $1,\ldots,n$ also sum up to a value of at least $V$ in $A$. Since all non-separability constraints are adhered to in $A'$, this solution is also feasible for $A$. \end{itemize} } \appendixproof{\cref{prop:piecewise}} { We first recall that $x_{ij}=0$ implies $C_{ij}=0$ due to inequalities~\eqref{eq:minlp:forcing}. Since $P_{ij}(0)=0$ holds by assumption, all nonlinear terms $P_{ij}(C_{ij}) x_{ij}$ in objective function~\eqref{eq:minlp:obj} can be replaced by $P_{ij}(C_{ij})$. As a consequence, the proposition follows since \begin{align*} & \sum_{i\in I} c_i z_i + \sum_{i\in I} \sum_{j\in J} \left( P_{ij}(C_{ij}) C_{ij} + (1-P_{ij}(C_{ij})) c_i' \right) x_{ij} = \\ = & \sum_{i\in I} c_i z_i + \sum_{i\in I} \sum_{j\in J} \left( C_{ij} P_{ij}(C_{ij}) x_{ij} - c_i' P_{ij}(C_{ij}) x_{ij} + c_i' x_{ij} \right) = \\ = & \sum_{i\in I} c_i z_i + \sum_{i\in I} \sum_{j\in J} \left( P_{ij}(C_{ij})(C_{ij} - c_i') + c_i' x_{ij} \right) \end{align*} } \appendixproof{\cref{prop:piecewise:linear}} { Using $U_{ij}= \min\{c_i, \frac{1-\alpha_{ij}}{\beta_{ij}}\}$ discussed in \cref{sec:mnlp:linear}, the linear acceptance probability function~\eqref{eq:prob:linear} can be simplified to $P_{ij}(C_{ij})=\alpha_{ij} + \beta_{ij}C_{ij}$ due to constraints~\eqref{eq:minlp:forcing}, cf.\ \cref{prop:piecewise}. Thus, $f_{ij}(C_{ij}) + g_{ij}(x_{ij})$ must be equal to the corresponding (nonlinear) part of \eqref{eq:mnlp:obj:linear-prob} for each $i\in I$ and $j\in J$, and we have \begin{align*} f_{ij}(C_{ij}) + g_{ij}(x_{ij}) & = (P_{ij}(C_{ij}) C_{ij} + (1- P_{ij}(C_{ij})) c_i') x_{ij} = \\ & = ((\alpha_{ij}+\beta_{ij} C_{ij}) C_{ij} + (1-\alpha_{ij}-\beta_{ij} C_{ij}) c'_i) x_{ij} = \\ & = \beta_{ij} C_{ij}^2 x_{ij} + (\alpha_{ij} - \beta_{ij} c_i') C_{ij} x_{ij} + c_i'(1 - \alpha_{ij}) x_{ij} = \\ & = \underbrace{\beta_{ij} C_{ij}^2 + (\alpha_{ij} - \beta_{ij} c_i') C_{ij}}_{f_{ij}(C_{ij})} + \underbrace{c_i'(1 - \alpha_{ij}) x_{ij}}_{g_{ij}(x_{ij})} \end{align*} The last equation holds since $x_{ij}=0$ implies that $C_{ij}=0$ due to constraints~\eqref{eq:minlp:forcing}. The second part of the proposition follows since $f_{ij}(C_{ij})$ is a convex (quadratic) function as $\beta_{ij}>0$. } \appendixproof{\cref{prop:piecewise:logistic}} { We first observe that the proposition holds for $C_{ij}=0$, in which case we obtain $f_{ij}(C_{ij})+g_{ij}(x_{ij}) = c_i' x_{ij}$. For $0 < C_{ij}\le U_{ij}$ we obtain \begin{align*} f_{ij}(C_{ij}) + g_{ij}(x_{ij}) & = (P_{ij}(C_{ij}) C_{ij} + (1- P_{ij}(C_{ij})) c_i') x_{ij} = \frac{(C_{ij} - c_i') x_{ij}}{1+e^{-\gamma_{ij} - \delta_{ij} C_{ij}}} + c_i' x_{ij}.
\end{align*} We observe that we can remove $x_{ij}$ from $(C_{ij} - c_i') x_{ij}$ since the assumption $C_{ij}> 0$ implies that $x_{ij}=1$. } \ifTR \else \end{APPENDIX} \ECSwitch \ECHead{Supplemental material} \fi \ifTR \section{Additional and detailed computational results\label{appendix:results}} \fi \begin{longtable}{rrlllrrrrrrr} \kill \caption{Paired t-test results comparing the individualized compensation scheme with the benchmark schemes in terms of total expected cost obtained using linear acceptance probability.} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endfirsthead \caption{(continued.)} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endhead 50 & 0.00 & 0.00 & 4617.53 & 0.15 & 0.00 & 0.08 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 4573.57 & 0.14 & 0.00 & 0.08 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 4526.51 & 0.13 & 0.00 & 0.08 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4476.96 & 0.12 & 0.00 & 0.08 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 4424.89 & 0.12 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 \\ & & 0.50 & 4370.55 & 0.11 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 4311.78 & 0.10 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.70 & 4248.36 & 0.09 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 4180.39 & 0.08 & 0.00 & 0.06 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 4107.25 & 0.07 & 0.00 & 0.06 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 4031.02 & 0.04 & 0.00 & 0.06 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 4638.81 & 0.14 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 4594.34 & 0.14 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 4546.85 & 0.13 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4496.62 & 0.12 & 0.00 & 0.08 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 4444.26 & 0.11 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.50 & 4389.67 & 0.11 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 4330.36 & 0.10 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 4266.70 & 0.09 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 4197.91 & 0.08 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 4123.62 & 0.07 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 4046.93 & 0.04 & 0.00 & 0.06 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 4655.59 & 0.14 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 4611.02 & 0.14 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 4563.27 & 0.13 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4512.21 & 0.12 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 4458.82 & 0.11 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.50 & 4403.55 & 0.11 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 4343.94 & 0.10 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 4279.69 & 0.09 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 4210.03 & 0.08 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 4135.38 & 0.07 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 4058.26 & 0.04 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 4665.98 & 0.14 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 
4621.43 & 0.14 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 4573.72 & 0.13 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4521.90 & 0.12 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 4468.45 & 0.11 & 0.00 & 0.09 & 0.00 & 0.07 & 0.00 \\ & & 0.50 & 4412.85 & 0.11 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 4352.55 & 0.10 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 4288.43 & 0.09 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 4218.11 & 0.08 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 4143.48 & 0.07 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 4065.98 & 0.05 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 4672.25 & 0.14 & 0.00 & 0.10 & 0.00 & 0.09 & 0.00 \\ & & 0.10 & 4627.93 & 0.14 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 4580.41 & 0.13 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4528.96 & 0.12 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 4475.72 & 0.12 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.50 & 4419.85 & 0.11 & 0.00 & 0.09 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 4359.04 & 0.10 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 4294.91 & 0.09 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 4224.54 & 0.08 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 4149.98 & 0.07 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 4071.96 & 0.05 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 4675.80 & 0.14 & 0.00 & 0.10 & 0.00 & 0.09 & 0.00 \\ & & 0.10 & 4631.57 & 0.14 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 4583.88 & 0.13 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4532.59 & 0.13 & 0.00 & 0.10 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 4479.37 & 0.12 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 \\ & & 0.50 & 4423.57 & 0.11 & 0.00 & 0.09 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 4362.62 & 0.10 & 0.00 & 0.09 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 4298.55 & 0.09 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 4228.30 & 0.08 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 4153.63 & 0.07 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 4075.62 & 0.05 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ \midrule 75 & 0.00 & 0.00 & 3211.97 & 0.31 & 0.00 & 0.19 & 0.00 & 0.19 & 0.00 \\ & & 0.10 & 3150.19 & 0.30 & 0.00 & 0.18 & 0.00 & 0.18 & 0.00 \\ & & 0.20 & 3085.64 & 0.29 & 0.00 & 0.18 & 0.00 & 0.18 & 0.00 \\ & & 0.30 & 3018.23 & 0.27 & 0.00 & 0.17 & 0.00 & 0.17 & 0.00 \\ & & 0.40 & 2946.62 & 0.26 & 0.00 & 0.17 & 0.00 & 0.16 & 0.00 \\ & & 0.50 & 2872.59 & 0.24 & 0.00 & 0.17 & 0.00 & 0.15 & 0.00 \\ & & 0.60 & 2794.91 & 0.23 & 0.00 & 0.16 & 0.00 & 0.15 & 0.00 \\ & & 0.70 & 2711.08 & 0.21 & 0.00 & 0.16 & 0.00 & 0.14 & 0.00 \\ & & 0.80 & 2621.94 & 0.18 & 0.00 & 0.16 & 0.00 & 0.13 & 0.00 \\ & & 0.90 & 2528.45 & 0.15 & 0.00 & 0.15 & 0.00 & 0.12 & 0.00 \\ & & 1.00 & 2431.15 & 0.10 & 0.00 & 0.14 & 0.00 & 0.11 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 3236.30 & 0.31 & 0.00 & 0.20 & 0.00 & 0.19 & 0.00 \\ & & 0.10 & 3173.74 & 0.30 & 0.00 & 0.19 & 0.00 & 0.19 & 0.00 \\ & & 0.20 & 3108.20 & 0.29 & 0.00 & 0.19 & 0.00 & 0.18 & 0.00 \\ & & 0.30 & 3038.66 & 0.27 & 0.00 & 0.19 & 0.00 & 0.17 & 0.00 \\ & & 0.40 & 2965.56 & 0.26 & 0.00 & 0.18 & 0.00 & 0.17 & 0.00 \\ & & 0.50 & 2890.13 & 0.24 & 0.00 & 0.18 & 0.00 & 0.16 & 0.00 \\ & & 0.60 & 2810.97 & 0.23 & 0.00 & 0.18 & 0.00 & 0.15 & 0.00 \\ & & 0.70 & 2726.45 & 0.21 & 0.00 & 0.17 & 0.00 & 0.14 & 0.00 \\ & & 0.80 & 2636.31 & 0.19 & 0.00 & 0.17 & 0.00 & 0.13 & 0.00 \\ & & 0.90 & 2541.97 & 0.16 & 0.00 & 0.16 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 2443.87 & 0.10 & 0.00 & 0.15 & 0.00 & 0.12 & 0.00 \\ 
\cmidrule(lr){2-10} & 0.10 & 0.00 & 3255.94 & 0.31 & 0.00 & 0.21 & 0.00 & 0.19 & 0.00 \\ & & 0.10 & 3192.52 & 0.29 & 0.00 & 0.20 & 0.00 & 0.19 & 0.00 \\ & & 0.20 & 3125.86 & 0.29 & 0.00 & 0.20 & 0.00 & 0.18 & 0.00 \\ & & 0.30 & 3055.12 & 0.27 & 0.00 & 0.20 & 0.00 & 0.18 & 0.00 \\ & & 0.40 & 2981.15 & 0.26 & 0.00 & 0.19 & 0.00 & 0.17 & 0.00 \\ & & 0.50 & 2904.33 & 0.24 & 0.00 & 0.19 & 0.00 & 0.16 & 0.00 \\ & & 0.60 & 2824.01 & 0.23 & 0.00 & 0.18 & 0.00 & 0.15 & 0.00 \\ & & 0.70 & 2738.60 & 0.21 & 0.00 & 0.18 & 0.00 & 0.15 & 0.00 \\ & & 0.80 & 2647.44 & 0.19 & 0.00 & 0.18 & 0.00 & 0.14 & 0.00 \\ & & 0.90 & 2552.98 & 0.16 & 0.00 & 0.17 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 2454.39 & 0.11 & 0.00 & 0.16 & 0.00 & 0.12 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 3269.88 & 0.31 & 0.00 & 0.21 & 0.00 & 0.20 & 0.00 \\ & & 0.10 & 3206.21 & 0.30 & 0.00 & 0.21 & 0.00 & 0.19 & 0.00 \\ & & 0.20 & 3139.25 & 0.29 & 0.00 & 0.21 & 0.00 & 0.18 & 0.00 \\ & & 0.30 & 3067.75 & 0.27 & 0.00 & 0.21 & 0.00 & 0.18 & 0.00 \\ & & 0.40 & 2992.78 & 0.26 & 0.00 & 0.20 & 0.00 & 0.17 & 0.00 \\ & & 0.50 & 2915.21 & 0.24 & 0.00 & 0.20 & 0.00 & 0.17 & 0.00 \\ & & 0.60 & 2834.13 & 0.23 & 0.00 & 0.19 & 0.00 & 0.16 & 0.00 \\ & & 0.70 & 2747.74 & 0.21 & 0.00 & 0.19 & 0.00 & 0.15 & 0.00 \\ & & 0.80 & 2656.28 & 0.19 & 0.00 & 0.18 & 0.00 & 0.14 & 0.00 \\ & & 0.90 & 2561.70 & 0.16 & 0.00 & 0.17 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 2462.41 & 0.11 & 0.00 & 0.17 & 0.00 & 0.12 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 3279.02 & 0.31 & 0.00 & 0.22 & 0.00 & 0.20 & 0.00 \\ & & 0.10 & 3215.48 & 0.30 & 0.00 & 0.22 & 0.00 & 0.19 & 0.00 \\ & & 0.20 & 3148.54 & 0.29 & 0.00 & 0.21 & 0.00 & 0.19 & 0.00 \\ & & 0.30 & 3076.79 & 0.28 & 0.00 & 0.21 & 0.00 & 0.18 & 0.00 \\ & & 0.40 & 3001.48 & 0.26 & 0.00 & 0.21 & 0.00 & 0.17 & 0.00 \\ & & 0.50 & 2923.36 & 0.24 & 0.00 & 0.20 & 0.00 & 0.17 & 0.00 \\ & & 0.60 & 2841.68 & 0.23 & 0.00 & 0.20 & 0.00 & 0.16 & 0.00 \\ & & 0.70 & 2754.88 & 0.21 & 0.00 & 0.19 & 0.00 & 0.15 & 0.00 \\ & & 0.80 & 2663.16 & 0.19 & 0.00 & 0.19 & 0.00 & 0.14 & 0.00 \\ & & 0.90 & 2568.14 & 0.16 & 0.00 & 0.18 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 2468.73 & 0.11 & 0.00 & 0.17 & 0.00 & 0.12 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 3284.01 & 0.31 & 0.00 & 0.22 & 0.00 & 0.20 & 0.00 \\ & & 0.10 & 3220.28 & 0.30 & 0.00 & 0.22 & 0.00 & 0.20 & 0.00 \\ & & 0.20 & 3153.25 & 0.29 & 0.00 & 0.22 & 0.00 & 0.19 & 0.00 \\ & & 0.30 & 3081.50 & 0.28 & 0.00 & 0.22 & 0.00 & 0.18 & 0.00 \\ & & 0.40 & 3005.97 & 0.26 & 0.00 & 0.22 & 0.00 & 0.18 & 0.00 \\ & & 0.50 & 2927.76 & 0.25 & 0.00 & 0.21 & 0.00 & 0.17 & 0.00 \\ & & 0.60 & 2845.77 & 0.23 & 0.00 & 0.21 & 0.00 & 0.16 & 0.00 \\ & & 0.70 & 2758.71 & 0.22 & 0.00 & 0.20 & 0.00 & 0.16 & 0.00 \\ & & 0.80 & 2666.56 & 0.20 & 0.00 & 0.19 & 0.00 & 0.15 & 0.00 \\ & & 0.90 & 2571.44 & 0.17 & 0.00 & 0.19 & 0.00 & 0.14 & 0.00 \\ & & 1.00 & 2472.17 & 0.11 & 0.00 & 0.18 & 0.00 & 0.13 & 0.00 \\ \midrule 100 & 0.00 & 0.00 & 2206.74 & 0.57 & 0.00 & 0.37 & 0.00 & 0.37 & 0.00 \\ & & 0.10 & 2134.99 & 0.56 & 0.00 & 0.37 & 0.00 & 0.36 & 0.00 \\ & & 0.20 & 2061.18 & 0.55 & 0.00 & 0.37 & 0.00 & 0.35 & 0.00 \\ & & 0.30 & 1985.07 & 0.53 & 0.00 & 0.36 & 0.00 & 0.33 & 0.00 \\ & & 0.40 & 1906.57 & 0.51 & 0.00 & 0.36 & 0.00 & 0.32 & 0.00 \\ & & 0.50 & 1825.18 & 0.49 & 0.00 & 0.35 & 0.00 & 0.31 & 0.00 \\ & & 0.60 & 1740.57 & 0.47 & 0.00 & 0.34 & 0.00 & 0.30 & 0.00 \\ & & 0.70 & 1652.52 & 0.43 & 0.00 & 0.33 & 0.00 & 0.28 & 0.00 \\ & & 0.80 & 1562.53 & 0.39 & 0.00 & 0.31 & 0.00 & 0.26 & 0.00 \\ & & 0.90 & 1469.11 & 0.33 & 0.00 & 0.29 & 
0.00 & 0.24 & 0.00 \\ & & 1.00 & 1371.79 & 0.20 & 0.00 & 0.25 & 0.00 & 0.21 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 2221.55 & 0.58 & 0.00 & 0.38 & 0.00 & 0.38 & 0.00 \\ & & 0.10 & 2148.63 & 0.57 & 0.00 & 0.38 & 0.00 & 0.37 & 0.00 \\ & & 0.20 & 2073.98 & 0.56 & 0.00 & 0.38 & 0.00 & 0.36 & 0.00 \\ & & 0.30 & 1996.64 & 0.54 & 0.00 & 0.38 & 0.00 & 0.35 & 0.00 \\ & & 0.40 & 1916.35 & 0.52 & 0.00 & 0.37 & 0.00 & 0.34 & 0.00 \\ & & 0.50 & 1833.03 & 0.50 & 0.00 & 0.37 & 0.00 & 0.32 & 0.00 \\ & & 0.60 & 1747.62 & 0.47 & 0.00 & 0.36 & 0.00 & 0.31 & 0.00 \\ & & 0.70 & 1658.84 & 0.44 & 0.00 & 0.34 & 0.00 & 0.29 & 0.00 \\ & & 0.80 & 1567.98 & 0.40 & 0.00 & 0.33 & 0.00 & 0.27 & 0.00 \\ & & 0.90 & 1474.15 & 0.34 & 0.00 & 0.30 & 0.00 & 0.25 & 0.00 \\ & & 1.00 & 1376.14 & 0.21 & 0.00 & 0.26 & 0.00 & 0.22 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 2234.08 & 0.58 & 0.00 & 0.39 & 0.00 & 0.39 & 0.00 \\ & & 0.10 & 2160.51 & 0.57 & 0.00 & 0.39 & 0.00 & 0.38 & 0.00 \\ & & 0.20 & 2084.80 & 0.56 & 0.00 & 0.39 & 0.00 & 0.37 & 0.00 \\ & & 0.30 & 2005.94 & 0.54 & 0.00 & 0.39 & 0.00 & 0.36 & 0.00 \\ & & 0.40 & 1924.20 & 0.52 & 0.00 & 0.39 & 0.00 & 0.35 & 0.00 \\ & & 0.50 & 1839.92 & 0.50 & 0.00 & 0.38 & 0.00 & 0.34 & 0.00 \\ & & 0.60 & 1753.58 & 0.48 & 0.00 & 0.37 & 0.00 & 0.32 & 0.00 \\ & & 0.70 & 1664.44 & 0.45 & 0.00 & 0.36 & 0.00 & 0.30 & 0.00 \\ & & 0.80 & 1572.75 & 0.41 & 0.00 & 0.34 & 0.00 & 0.28 & 0.00 \\ & & 0.90 & 1478.43 & 0.34 & 0.00 & 0.32 & 0.00 & 0.26 & 0.00 \\ & & 1.00 & 1379.76 & 0.21 & 0.00 & 0.28 & 0.00 & 0.23 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 2243.76 & 0.58 & 0.00 & 0.40 & 0.00 & 0.40 & 0.00 \\ & & 0.10 & 2169.68 & 0.57 & 0.00 & 0.40 & 0.00 & 0.39 & 0.00 \\ & & 0.20 & 2093.42 & 0.56 & 0.00 & 0.40 & 0.00 & 0.38 & 0.00 \\ & & 0.30 & 2013.29 & 0.55 & 0.00 & 0.40 & 0.00 & 0.37 & 0.00 \\ & & 0.40 & 1930.49 & 0.53 & 0.00 & 0.40 & 0.00 & 0.36 & 0.00 \\ & & 0.50 & 1845.87 & 0.51 & 0.00 & 0.39 & 0.00 & 0.35 & 0.00 \\ & & 0.60 & 1758.87 & 0.48 & 0.00 & 0.39 & 0.00 & 0.34 & 0.00 \\ & & 0.70 & 1669.11 & 0.45 & 0.00 & 0.37 & 0.00 & 0.32 & 0.00 \\ & & 0.80 & 1577.06 & 0.41 & 0.00 & 0.36 & 0.00 & 0.29 & 0.00 \\ & & 0.90 & 1482.20 & 0.35 & 0.00 & 0.33 & 0.00 & 0.27 & 0.00 \\ & & 1.00 & 1383.06 & 0.22 & 0.00 & 0.29 & 0.00 & 0.24 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 2250.65 & 0.58 & 0.00 & 0.41 & 0.00 & 0.41 & 0.00 \\ & & 0.10 & 2176.63 & 0.57 & 0.00 & 0.41 & 0.00 & 0.40 & 0.00 \\ & & 0.20 & 2099.51 & 0.56 & 0.00 & 0.41 & 0.00 & 0.39 & 0.00 \\ & & 0.30 & 2018.90 & 0.55 & 0.00 & 0.41 & 0.00 & 0.38 & 0.00 \\ & & 0.40 & 1935.62 & 0.53 & 0.00 & 0.41 & 0.00 & 0.37 & 0.00 \\ & & 0.50 & 1850.70 & 0.51 & 0.00 & 0.41 & 0.00 & 0.36 & 0.00 \\ & & 0.60 & 1763.48 & 0.49 & 0.00 & 0.40 & 0.00 & 0.34 & 0.00 \\ & & 0.70 & 1673.21 & 0.46 & 0.00 & 0.39 & 0.00 & 0.33 & 0.00 \\ & & 0.80 & 1580.96 & 0.42 & 0.00 & 0.37 & 0.00 & 0.30 & 0.00 \\ & & 0.90 & 1485.66 & 0.36 & 0.00 & 0.34 & 0.00 & 0.28 & 0.00 \\ & & 1.00 & 1386.01 & 0.23 & 0.00 & 0.30 & 0.00 & 0.25 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 2255.77 & 0.58 & 0.00 & 0.41 & 0.00 & 0.41 & 0.00 \\ & & 0.10 & 2181.50 & 0.58 & 0.00 & 0.41 & 0.00 & 0.41 & 0.00 \\ & & 0.20 & 2103.49 & 0.57 & 0.00 & 0.42 & 0.00 & 0.40 & 0.00 \\ & & 0.30 & 2022.77 & 0.56 & 0.00 & 0.42 & 0.00 & 0.39 & 0.00 \\ & & 0.40 & 1939.63 & 0.54 & 0.00 & 0.43 & 0.00 & 0.38 & 0.00 \\ & & 0.50 & 1854.69 & 0.51 & 0.00 & 0.42 & 0.00 & 0.37 & 0.00 \\ & & 0.60 & 1767.45 & 0.49 & 0.00 & 0.41 & 0.00 & 0.35 & 0.00 \\ & & 0.70 & 1676.81 & 0.46 & 0.00 & 0.40 & 0.00 & 0.33 & 0.00 \\ & & 0.80 & 
1584.27 & 0.42 & 0.00 & 0.38 & 0.00 & 0.31 & 0.00 \\ & & 0.90 & 1488.35 & 0.37 & 0.00 & 0.36 & 0.00 & 0.29 & 0.00 \\ & & 1.00 & 1388.51 & 0.23 & 0.00 & 0.31 & 0.00 & 0.26 & 0.00 \\ \midrule 125 & 0.00 & 0.00 & 1715.33 & 0.73 & 0.00 & 0.49 & 0.00 & 0.52 & 0.00 \\ & & 0.10 & 1661.05 & 0.71 & 0.00 & 0.48 & 0.00 & 0.51 & 0.00 \\ & & 0.20 & 1605.86 & 0.69 & 0.00 & 0.47 & 0.00 & 0.49 & 0.00 \\ & & 0.30 & 1549.94 & 0.66 & 0.00 & 0.46 & 0.00 & 0.46 & 0.00 \\ & & 0.40 & 1492.80 & 0.64 & 0.00 & 0.45 & 0.00 & 0.44 & 0.00 \\ & & 0.50 & 1434.80 & 0.61 & 0.00 & 0.44 & 0.00 & 0.41 & 0.00 \\ & & 0.60 & 1375.87 & 0.57 & 0.00 & 0.42 & 0.00 & 0.38 & 0.00 \\ & & 0.70 & 1315.72 & 0.53 & 0.00 & 0.39 & 0.00 & 0.35 & 0.00 \\ & & 0.80 & 1254.50 & 0.48 & 0.00 & 0.36 & 0.00 & 0.31 & 0.00 \\ & & 0.90 & 1192.38 & 0.37 & 0.00 & 0.30 & 0.00 & 0.26 & 0.00 \\ & & 1.00 & 1129.36 & 0.17 & 0.00 & 0.21 & 0.00 & 0.21 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 1717.17 & 0.74 & 0.00 & 0.51 & 0.00 & 0.54 & 0.00 \\ & & 0.10 & 1662.44 & 0.72 & 0.00 & 0.50 & 0.00 & 0.52 & 0.00 \\ & & 0.20 & 1606.73 & 0.70 & 0.00 & 0.49 & 0.00 & 0.50 & 0.00 \\ & & 0.30 & 1550.33 & 0.67 & 0.00 & 0.48 & 0.00 & 0.48 & 0.00 \\ & & 0.40 & 1493.12 & 0.65 & 0.00 & 0.47 & 0.00 & 0.45 & 0.00 \\ & & 0.50 & 1435.09 & 0.62 & 0.00 & 0.45 & 0.00 & 0.42 & 0.00 \\ & & 0.60 & 1375.96 & 0.58 & 0.00 & 0.43 & 0.00 & 0.39 & 0.00 \\ & & 0.70 & 1315.90 & 0.54 & 0.00 & 0.41 & 0.00 & 0.36 & 0.00 \\ & & 0.80 & 1254.65 & 0.49 & 0.00 & 0.37 & 0.00 & 0.32 & 0.00 \\ & & 0.90 & 1192.56 & 0.39 & 0.00 & 0.31 & 0.00 & 0.27 & 0.00 \\ & & 1.00 & 1129.39 & 0.18 & 0.00 & 0.22 & 0.00 & 0.22 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 1718.76 & 0.75 & 0.00 & 0.51 & 0.00 & 0.56 & 0.00 \\ & & 0.10 & 1663.26 & 0.73 & 0.00 & 0.51 & 0.00 & 0.54 & 0.00 \\ & & 0.20 & 1607.19 & 0.71 & 0.00 & 0.51 & 0.00 & 0.52 & 0.00 \\ & & 0.30 & 1550.62 & 0.68 & 0.00 & 0.50 & 0.00 & 0.49 & 0.00 \\ & & 0.40 & 1493.38 & 0.66 & 0.00 & 0.49 & 0.00 & 0.47 & 0.00 \\ & & 0.50 & 1435.15 & 0.62 & 0.00 & 0.47 & 0.00 & 0.44 & 0.00 \\ & & 0.60 & 1376.00 & 0.59 & 0.00 & 0.45 & 0.00 & 0.41 & 0.00 \\ & & 0.70 & 1316.02 & 0.55 & 0.00 & 0.42 & 0.00 & 0.37 & 0.00 \\ & & 0.80 & 1254.76 & 0.50 & 0.00 & 0.38 & 0.00 & 0.33 & 0.00 \\ & & 0.90 & 1192.61 & 0.40 & 0.00 & 0.32 & 0.00 & 0.28 & 0.00 \\ & & 1.00 & 1129.39 & 0.18 & 0.00 & 0.22 & 0.00 & 0.23 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 1719.73 & 0.76 & 0.00 & 0.52 & 0.00 & 0.57 & 0.00 \\ & & 0.10 & 1663.73 & 0.74 & 0.00 & 0.52 & 0.00 & 0.55 & 0.00 \\ & & 0.20 & 1607.47 & 0.72 & 0.00 & 0.52 & 0.00 & 0.53 & 0.00 \\ & & 0.30 & 1550.86 & 0.69 & 0.00 & 0.52 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 1493.43 & 0.67 & 0.00 & 0.50 & 0.00 & 0.48 & 0.00 \\ & & 0.50 & 1435.19 & 0.64 & 0.00 & 0.48 & 0.00 & 0.45 & 0.00 \\ & & 0.60 & 1376.03 & 0.60 & 0.00 & 0.46 & 0.00 & 0.42 & 0.00 \\ & & 0.70 & 1316.04 & 0.55 & 0.00 & 0.43 & 0.00 & 0.38 & 0.00 \\ & & 0.80 & 1254.85 & 0.50 & 0.00 & 0.39 & 0.00 & 0.34 & 0.00 \\ & & 0.90 & 1192.61 & 0.41 & 0.00 & 0.33 & 0.00 & 0.29 & 0.00 \\ & & 1.00 & 1129.39 & 0.19 & 0.00 & 0.23 & 0.00 & 0.24 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 1720.15 & 0.76 & 0.00 & 0.53 & 0.00 & 0.58 & 0.00 \\ & & 0.10 & 1663.99 & 0.75 & 0.00 & 0.53 & 0.00 & 0.57 & 0.00 \\ & & 0.20 & 1607.70 & 0.72 & 0.00 & 0.53 & 0.00 & 0.54 & 0.00 \\ & & 0.30 & 1550.91 & 0.70 & 0.00 & 0.53 & 0.00 & 0.52 & 0.00 \\ & & 0.40 & 1493.47 & 0.67 & 0.00 & 0.52 & 0.00 & 0.49 & 0.00 \\ & & 0.50 & 1435.21 & 0.64 & 0.00 & 0.50 & 0.00 & 0.46 & 0.00 \\ & & 0.60 & 1376.04 & 0.60 & 0.00 & 0.48 & 0.00 & 0.43 & 
0.00 \\ & & 0.70 & 1316.05 & 0.56 & 0.00 & 0.45 & 0.00 & 0.39 & 0.00 \\ & & 0.80 & 1254.89 & 0.51 & 0.00 & 0.41 & 0.00 & 0.35 & 0.00 \\ & & 0.90 & 1192.61 & 0.42 & 0.00 & 0.34 & 0.00 & 0.30 & 0.00 \\ & & 1.00 & 1129.39 & 0.19 & 0.00 & 0.24 & 0.00 & 0.25 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 1720.39 & 0.77 & 0.00 & 0.54 & 0.00 & 0.60 & 0.00 \\ & & 0.10 & 1664.21 & 0.75 & 0.00 & 0.54 & 0.00 & 0.58 & 0.00 \\ & & 0.20 & 1607.84 & 0.73 & 0.00 & 0.54 & 0.00 & 0.56 & 0.00 \\ & & 0.30 & 1550.94 & 0.71 & 0.00 & 0.55 & 0.00 & 0.53 & 0.00 \\ & & 0.40 & 1493.49 & 0.68 & 0.00 & 0.53 & 0.00 & 0.50 & 0.00 \\ & & 0.50 & 1435.23 & 0.65 & 0.00 & 0.51 & 0.00 & 0.47 & 0.00 \\ & & 0.60 & 1376.05 & 0.61 & 0.00 & 0.49 & 0.00 & 0.44 & 0.00 \\ & & 0.70 & 1316.05 & 0.57 & 0.00 & 0.46 & 0.00 & 0.40 & 0.00 \\ & & 0.80 & 1254.89 & 0.52 & 0.00 & 0.42 & 0.00 & 0.36 & 0.00 \\ & & 0.90 & 1192.61 & 0.43 & 0.00 & 0.35 & 0.00 & 0.31 & 0.00 \\ & & 1.00 & 1129.39 & 0.20 & 0.00 & 0.25 & 0.00 & 0.26 & 0.00 \\ \midrule 150 & 0.00 & 0.00 & 1479.30 & 0.77 & 0.00 & 0.57 & 0.00 & 0.59 & 0.00 \\ & & 0.10 & 1439.38 & 0.74 & 0.00 & 0.56 & 0.00 & 0.57 & 0.00 \\ & & 0.20 & 1399.34 & 0.72 & 0.00 & 0.55 & 0.00 & 0.54 & 0.00 \\ & & 0.30 & 1358.99 & 0.69 & 0.00 & 0.53 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 1318.27 & 0.65 & 0.00 & 0.52 & 0.00 & 0.47 & 0.00 \\ & & 0.50 & 1277.37 & 0.62 & 0.00 & 0.50 & 0.00 & 0.43 & 0.00 \\ & & 0.60 & 1236.21 & 0.58 & 0.00 & 0.47 & 0.00 & 0.39 & 0.00 \\ & & 0.70 & 1194.77 & 0.53 & 0.00 & 0.43 & 0.00 & 0.34 & 0.00 \\ & & 0.80 & 1153.04 & 0.47 & 0.00 & 0.37 & 0.00 & 0.29 & 0.00 \\ & & 0.90 & 1110.91 & 0.38 & 0.00 & 0.29 & 0.00 & 0.24 & 0.00 \\ & & 1.00 & 1068.19 & 0.13 & 0.00 & 0.15 & 0.00 & 0.18 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 1479.32 & 0.77 & 0.00 & 0.58 & 0.00 & 0.61 & 0.00 \\ & & 0.10 & 1439.40 & 0.75 & 0.00 & 0.58 & 0.00 & 0.58 & 0.00 \\ & & 0.20 & 1399.36 & 0.72 & 0.00 & 0.57 & 0.00 & 0.55 & 0.00 \\ & & 0.30 & 1359.01 & 0.69 & 0.00 & 0.55 & 0.00 & 0.52 & 0.00 \\ & & 0.40 & 1318.30 & 0.66 & 0.00 & 0.54 & 0.00 & 0.49 & 0.00 \\ & & 0.50 & 1277.40 & 0.62 & 0.00 & 0.52 & 0.00 & 0.45 & 0.00 \\ & & 0.60 & 1236.23 & 0.58 & 0.00 & 0.49 & 0.00 & 0.40 & 0.00 \\ & & 0.70 & 1194.78 & 0.54 & 0.00 & 0.44 & 0.00 & 0.36 & 0.00 \\ & & 0.80 & 1153.05 & 0.48 & 0.00 & 0.38 & 0.00 & 0.30 & 0.00 \\ & & 0.90 & 1110.91 & 0.39 & 0.00 & 0.30 & 0.00 & 0.25 & 0.00 \\ & & 1.00 & 1068.19 & 0.14 & 0.00 & 0.16 & 0.00 & 0.19 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 1479.32 & 0.78 & 0.00 & 0.59 & 0.00 & 0.63 & 0.00 \\ & & 0.10 & 1439.40 & 0.76 & 0.00 & 0.59 & 0.00 & 0.60 & 0.00 \\ & & 0.20 & 1399.36 & 0.73 & 0.00 & 0.58 & 0.00 & 0.57 & 0.00 \\ & & 0.30 & 1359.01 & 0.70 & 0.00 & 0.57 & 0.00 & 0.53 & 0.00 \\ & & 0.40 & 1318.30 & 0.67 & 0.00 & 0.56 & 0.00 & 0.50 & 0.00 \\ & & 0.50 & 1277.40 & 0.63 & 0.00 & 0.54 & 0.00 & 0.46 & 0.00 \\ & & 0.60 & 1236.23 & 0.59 & 0.00 & 0.51 & 0.00 & 0.42 & 0.00 \\ & & 0.70 & 1194.78 & 0.54 & 0.00 & 0.46 & 0.00 & 0.37 & 0.00 \\ & & 0.80 & 1153.05 & 0.48 & 0.00 & 0.40 & 0.00 & 0.32 & 0.00 \\ & & 0.90 & 1110.91 & 0.40 & 0.00 & 0.31 & 0.00 & 0.26 & 0.00 \\ & & 1.00 & 1068.19 & 0.14 & 0.00 & 0.16 & 0.00 & 0.19 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 1479.32 & 0.78 & 0.00 & 0.60 & 0.00 & 0.64 & 0.00 \\ & & 0.10 & 1439.40 & 0.76 & 0.00 & 0.60 & 0.00 & 0.61 & 0.00 \\ & & 0.20 & 1399.36 & 0.74 & 0.00 & 0.60 & 0.00 & 0.58 & 0.00 \\ & & 0.30 & 1359.01 & 0.71 & 0.00 & 0.59 & 0.00 & 0.55 & 0.00 \\ & & 0.40 & 1318.30 & 0.67 & 0.00 & 0.57 & 0.00 & 0.51 & 0.00 \\ & & 0.50 & 1277.40 & 0.64 & 0.00 & 
0.55 & 0.00 & 0.47 & 0.00 \\ & & 0.60 & 1236.23 & 0.59 & 0.00 & 0.52 & 0.00 & 0.43 & 0.00 \\ & & 0.70 & 1194.78 & 0.55 & 0.00 & 0.47 & 0.00 & 0.38 & 0.00 \\ & & 0.80 & 1153.05 & 0.49 & 0.00 & 0.41 & 0.00 & 0.33 & 0.00 \\ & & 0.90 & 1110.91 & 0.40 & 0.00 & 0.32 & 0.00 & 0.27 & 0.00 \\ & & 1.00 & 1068.19 & 0.15 & 0.00 & 0.16 & 0.00 & 0.20 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 1479.32 & 0.79 & 0.00 & 0.60 & 0.00 & 0.66 & 0.00 \\ & & 0.10 & 1439.40 & 0.77 & 0.00 & 0.60 & 0.00 & 0.63 & 0.00 \\ & & 0.20 & 1399.36 & 0.74 & 0.00 & 0.61 & 0.00 & 0.60 & 0.00 \\ & & 0.30 & 1359.01 & 0.71 & 0.00 & 0.60 & 0.00 & 0.56 & 0.00 \\ & & 0.40 & 1318.30 & 0.68 & 0.00 & 0.58 & 0.00 & 0.52 & 0.00 \\ & & 0.50 & 1277.40 & 0.64 & 0.00 & 0.57 & 0.00 & 0.48 & 0.00 \\ & & 0.60 & 1236.23 & 0.60 & 0.00 & 0.54 & 0.00 & 0.44 & 0.00 \\ & & 0.70 & 1194.78 & 0.55 & 0.00 & 0.49 & 0.00 & 0.39 & 0.00 \\ & & 0.80 & 1153.05 & 0.49 & 0.00 & 0.42 & 0.00 & 0.34 & 0.00 \\ & & 0.90 & 1110.91 & 0.41 & 0.00 & 0.33 & 0.00 & 0.27 & 0.00 \\ & & 1.00 & 1068.19 & 0.15 & 0.00 & 0.17 & 0.00 & 0.21 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 1479.32 & 0.80 & 0.00 & 0.61 & 0.00 & 0.67 & 0.00 \\ & & 0.10 & 1439.40 & 0.77 & 0.00 & 0.61 & 0.00 & 0.64 & 0.00 \\ & & 0.20 & 1399.36 & 0.75 & 0.00 & 0.61 & 0.00 & 0.61 & 0.00 \\ & & 0.30 & 1359.01 & 0.72 & 0.00 & 0.61 & 0.00 & 0.57 & 0.00 \\ & & 0.40 & 1318.30 & 0.68 & 0.00 & 0.60 & 0.00 & 0.54 & 0.00 \\ & & 0.50 & 1277.40 & 0.64 & 0.00 & 0.58 & 0.00 & 0.49 & 0.00 \\ & & 0.60 & 1236.23 & 0.60 & 0.00 & 0.55 & 0.00 & 0.45 & 0.00 \\ & & 0.70 & 1194.78 & 0.55 & 0.00 & 0.50 & 0.00 & 0.40 & 0.00 \\ & & 0.80 & 1153.05 & 0.50 & 0.00 & 0.44 & 0.00 & 0.34 & 0.00 \\ & & 0.90 & 1110.91 & 0.42 & 0.00 & 0.34 & 0.00 & 0.28 & 0.00 \\ & & 1.00 & 1068.19 & 0.15 & 0.00 & 0.17 & 0.00 & 0.21 & 0.00 \\ \bottomrule \label{table:ttestLinearCost} \end{longtable} \begin{longtable}{rrlllrrrrrrr} \kill \caption{Paired t-test results comparing the individualized compensation scheme with the benchmark schemes in terms of total expected distance obtained using linear acceptance probability.} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endfirsthead \caption{(continued.)} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. 
& p-Val \\ \midrule \endhead 50 & 0.00 & 0.00 & 8116.07 & 0.08 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 8099.57 & 0.09 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.20 & 8065.91 & 0.09 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 8044.03 & 0.08 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.40 & 8025.53 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.50 & 8010.29 & 0.07 & 0.01 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.60 & 7985.62 & 0.07 & 0.01 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.70 & 7965.34 & 0.07 & 0.00 & 0.07 & 0.00 & 0.04 & 0.00 \\ & & 0.80 & 7953.42 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 7938.68 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 7933.74 & 0.04 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 8086.19 & 0.08 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 8060.26 & 0.08 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.20 & 8028.14 & 0.09 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 8009.34 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.40 & 7986.30 & 0.07 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.50 & 7972.19 & 0.07 & 0.01 & 0.07 & 0.00 & 0.04 & 0.00 \\ & & 0.60 & 7942.05 & 0.06 & 0.01 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.70 & 7918.62 & 0.07 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.80 & 7902.21 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 7889.50 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 7890.61 & 0.04 & 0.00 & 0.05 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 8045.97 & 0.09 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 8026.76 & 0.08 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.20 & 7988.19 & 0.09 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.30 & 7963.34 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.40 & 7938.98 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.50 & 7928.03 & 0.07 & 0.00 & 0.07 & 0.00 & 0.04 & 0.00 \\ & & 0.60 & 7892.64 & 0.07 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.70 & 7870.73 & 0.07 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.80 & 7852.97 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 7855.87 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 7852.48 & 0.04 & 0.00 & 0.05 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 8021.47 & 0.09 & 0.00 & 0.08 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 7994.83 & 0.09 & 0.00 & 0.09 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 7958.14 & 0.09 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 7931.74 & 0.09 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.40 & 7911.28 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.50 & 7902.47 & 0.07 & 0.00 & 0.06 & 0.00 & 0.05 & 0.00 \\ & & 0.60 & 7866.48 & 0.07 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.70 & 7847.35 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.80 & 7833.75 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 7836.71 & 0.05 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 7834.97 & 0.04 & 0.00 & 0.05 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 8011.28 & 0.09 & 0.00 & 0.09 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 7983.54 & 0.08 & 0.00 & 0.09 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 7943.13 & 0.08 & 0.00 & 0.09 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 7914.16 & 0.08 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.40 & 7895.36 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.50 & 7886.85 & 0.07 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.60 & 7853.96 & 0.07 & 0.00 & 0.07 & 0.00 & 0.04 & 0.00 \\ & & 0.70 & 7835.80 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.80 & 7823.01 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 7822.69 & 0.05 
& 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 7821.29 & 0.04 & 0.00 & 0.05 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 8005.04 & 0.08 & 0.00 & 0.09 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 7975.94 & 0.08 & 0.00 & 0.09 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 7940.88 & 0.07 & 0.00 & 0.09 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 7911.63 & 0.08 & 0.00 & 0.08 & 0.00 & 0.05 & 0.00 \\ & & 0.40 & 7891.33 & 0.08 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.50 & 7875.22 & 0.07 & 0.00 & 0.07 & 0.00 & 0.04 & 0.00 \\ & & 0.60 & 7846.54 & 0.07 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\ & & 0.70 & 7826.71 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.80 & 7811.17 & 0.06 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 7810.32 & 0.05 & 0.00 & 0.06 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 7812.69 & 0.04 & 0.00 & 0.05 & 0.00 & 0.03 & 0.00 \\ \midrule 75 & 0.00 & 0.00 & 4721.50 & 0.22 & 0.00 & 0.21 & 0.00 & 0.17 & 0.00 \\ & & 0.10 & 4690.61 & 0.21 & 0.00 & 0.21 & 0.00 & 0.16 & 0.00 \\ & & 0.20 & 4670.01 & 0.21 & 0.00 & 0.20 & 0.00 & 0.15 & 0.00 \\ & & 0.30 & 4627.46 & 0.22 & 0.00 & 0.20 & 0.00 & 0.15 & 0.00 \\ & & 0.40 & 4579.76 & 0.20 & 0.00 & 0.21 & 0.00 & 0.15 & 0.00 \\ & & 0.50 & 4546.41 & 0.20 & 0.00 & 0.21 & 0.00 & 0.15 & 0.00 \\ & & 0.60 & 4520.29 & 0.20 & 0.00 & 0.20 & 0.00 & 0.14 & 0.00 \\ & & 0.70 & 4486.66 & 0.19 & 0.00 & 0.19 & 0.00 & 0.14 & 0.00 \\ & & 0.80 & 4458.13 & 0.17 & 0.00 & 0.19 & 0.00 & 0.13 & 0.00 \\ & & 0.90 & 4445.62 & 0.16 & 0.00 & 0.18 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 4454.78 & 0.11 & 0.00 & 0.17 & 0.00 & 0.12 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 4678.68 & 0.23 & 0.00 & 0.21 & 0.00 & 0.16 & 0.00 \\ & & 0.10 & 4644.32 & 0.22 & 0.00 & 0.21 & 0.00 & 0.15 & 0.00 \\ & & 0.20 & 4602.15 & 0.22 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.30 & 4560.86 & 0.22 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.40 & 4525.42 & 0.21 & 0.00 & 0.21 & 0.00 & 0.14 & 0.00 \\ & & 0.50 & 4495.18 & 0.20 & 0.00 & 0.20 & 0.00 & 0.14 & 0.00 \\ & & 0.60 & 4471.88 & 0.19 & 0.00 & 0.20 & 0.00 & 0.14 & 0.00 \\ & & 0.70 & 4433.12 & 0.19 & 0.00 & 0.19 & 0.00 & 0.13 & 0.00 \\ & & 0.80 & 4408.99 & 0.18 & 0.00 & 0.19 & 0.00 & 0.13 & 0.00 \\ & & 0.90 & 4408.04 & 0.17 & 0.00 & 0.17 & 0.00 & 0.12 & 0.00 \\ & & 1.00 & 4416.25 & 0.09 & 0.00 & 0.16 & 0.00 & 0.11 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 4630.73 & 0.22 & 0.00 & 0.22 & 0.00 & 0.16 & 0.00 \\ & & 0.10 & 4590.20 & 0.22 & 0.00 & 0.22 & 0.00 & 0.16 & 0.00 \\ & & 0.20 & 4560.41 & 0.22 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.30 & 4521.39 & 0.20 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.40 & 4495.84 & 0.20 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.50 & 4460.59 & 0.19 & 0.00 & 0.21 & 0.00 & 0.14 & 0.00 \\ & & 0.60 & 4446.73 & 0.18 & 0.00 & 0.20 & 0.00 & 0.13 & 0.00 \\ & & 0.70 & 4410.02 & 0.19 & 0.00 & 0.20 & 0.00 & 0.13 & 0.00 \\ & & 0.80 & 4384.93 & 0.18 & 0.00 & 0.18 & 0.00 & 0.12 & 0.00 \\ & & 0.90 & 4391.36 & 0.17 & 0.00 & 0.17 & 0.00 & 0.11 & 0.00 \\ & & 1.00 & 4397.57 & 0.10 & 0.00 & 0.15 & 0.00 & 0.10 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 4598.41 & 0.22 & 0.00 & 0.23 & 0.00 & 0.17 & 0.00 \\ & & 0.10 & 4569.01 & 0.20 & 0.00 & 0.23 & 0.00 & 0.15 & 0.00 \\ & & 0.20 & 4528.70 & 0.22 & 0.00 & 0.23 & 0.00 & 0.15 & 0.00 \\ & & 0.30 & 4492.86 & 0.21 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.40 & 4461.79 & 0.21 & 0.00 & 0.22 & 0.00 & 0.14 & 0.00 \\ & & 0.50 & 4440.46 & 0.19 & 0.00 & 0.20 & 0.00 & 0.13 & 0.00 \\ & & 0.60 & 4425.28 & 0.18 & 0.00 & 0.19 & 0.00 & 0.13 & 0.00 \\ & & 0.70 & 4393.85 & 0.19 & 0.00 & 0.19 & 0.00 & 0.12 & 0.00 \\ & & 
0.80 & 4371.21 & 0.18 & 0.00 & 0.17 & 0.00 & 0.12 & 0.00 \\ & & 0.90 & 4369.14 & 0.16 & 0.00 & 0.18 & 0.00 & 0.11 & 0.00 \\ & & 1.00 & 4397.83 & 0.09 & 0.00 & 0.15 & 0.00 & 0.10 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 4580.45 & 0.22 & 0.00 & 0.24 & 0.00 & 0.17 & 0.00 \\ & & 0.10 & 4548.03 & 0.20 & 0.00 & 0.24 & 0.00 & 0.16 & 0.00 \\ & & 0.20 & 4507.89 & 0.20 & 0.00 & 0.23 & 0.00 & 0.14 & 0.00 \\ & & 0.30 & 4468.57 & 0.21 & 0.00 & 0.22 & 0.00 & 0.15 & 0.00 \\ & & 0.40 & 4443.41 & 0.21 & 0.00 & 0.22 & 0.00 & 0.14 & 0.00 \\ & & 0.50 & 4417.94 & 0.19 & 0.00 & 0.21 & 0.00 & 0.14 & 0.00 \\ & & 0.60 & 4400.45 & 0.18 & 0.00 & 0.20 & 0.00 & 0.13 & 0.00 \\ & & 0.70 & 4378.23 & 0.19 & 0.00 & 0.18 & 0.00 & 0.13 & 0.00 \\ & & 0.80 & 4351.85 & 0.18 & 0.00 & 0.18 & 0.00 & 0.12 & 0.00 \\ & & 0.90 & 4342.49 & 0.16 & 0.00 & 0.17 & 0.00 & 0.11 & 0.00 \\ & & 1.00 & 4383.42 & 0.09 & 0.00 & 0.15 & 0.00 & 0.10 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 4570.93 & 0.21 & 0.00 & 0.24 & 0.00 & 0.17 & 0.00 \\ & & 0.10 & 4538.07 & 0.20 & 0.00 & 0.24 & 0.00 & 0.17 & 0.00 \\ & & 0.20 & 4497.06 & 0.19 & 0.00 & 0.24 & 0.00 & 0.15 & 0.00 \\ & & 0.30 & 4447.44 & 0.21 & 0.00 & 0.23 & 0.00 & 0.14 & 0.00 \\ & & 0.40 & 4422.57 & 0.20 & 0.00 & 0.21 & 0.00 & 0.15 & 0.00 \\ & & 0.50 & 4394.72 & 0.19 & 0.00 & 0.20 & 0.00 & 0.14 & 0.00 \\ & & 0.60 & 4383.50 & 0.18 & 0.00 & 0.20 & 0.00 & 0.13 & 0.00 \\ & & 0.70 & 4358.86 & 0.17 & 0.00 & 0.19 & 0.00 & 0.13 & 0.00 \\ & & 0.80 & 4339.36 & 0.17 & 0.00 & 0.17 & 0.00 & 0.12 & 0.00 \\ & & 0.90 & 4333.50 & 0.15 & 0.00 & 0.17 & 0.00 & 0.11 & 0.00 \\ & & 1.00 & 4366.65 & 0.10 & 0.00 & 0.15 & 0.00 & 0.10 & 0.00 \\ \midrule 100 & 0.00 & 0.00 & 2130.44 & 0.69 & 0.00 & 0.59 & 0.00 & 0.53 & 0.00 \\ & & 0.10 & 2089.05 & 0.68 & 0.00 & 0.60 & 0.00 & 0.53 & 0.00 \\ & & 0.20 & 2061.74 & 0.71 & 0.00 & 0.62 & 0.00 & 0.51 & 0.00 \\ & & 0.30 & 2028.53 & 0.67 & 0.00 & 0.61 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 1987.50 & 0.64 & 0.00 & 0.59 & 0.00 & 0.50 & 0.00 \\ & & 0.50 & 1932.31 & 0.64 & 0.00 & 0.64 & 0.00 & 0.50 & 0.00 \\ & & 0.60 & 1904.39 & 0.59 & 0.00 & 0.60 & 0.00 & 0.48 & 0.00 \\ & & 0.70 & 1871.03 & 0.62 & 0.00 & 0.55 & 0.00 & 0.45 & 0.00 \\ & & 0.80 & 1853.74 & 0.59 & 0.00 & 0.55 & 0.00 & 0.43 & 0.00 \\ & & 0.90 & 1854.64 & 0.49 & 0.00 & 0.47 & 0.00 & 0.37 & 0.00 \\ & & 1.00 & 1865.51 & 0.31 & 0.00 & 0.40 & 0.00 & 0.33 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 2108.30 & 0.69 & 0.00 & 0.60 & 0.00 & 0.53 & 0.00 \\ & & 0.10 & 2063.95 & 0.65 & 0.00 & 0.59 & 0.00 & 0.53 & 0.00 \\ & & 0.20 & 2034.70 & 0.69 & 0.00 & 0.59 & 0.00 & 0.52 & 0.00 \\ & & 0.30 & 1995.14 & 0.66 & 0.00 & 0.59 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 1952.22 & 0.65 & 0.00 & 0.62 & 0.00 & 0.51 & 0.00 \\ & & 0.50 & 1915.94 & 0.60 & 0.00 & 0.57 & 0.00 & 0.50 & 0.00 \\ & & 0.60 & 1878.35 & 0.61 & 0.00 & 0.60 & 0.00 & 0.48 & 0.00 \\ & & 0.70 & 1859.96 & 0.60 & 0.00 & 0.56 & 0.00 & 0.44 & 0.00 \\ & & 0.80 & 1845.99 & 0.58 & 0.00 & 0.55 & 0.00 & 0.41 & 0.00 \\ & & 0.90 & 1838.31 & 0.49 & 0.00 & 0.50 & 0.00 & 0.37 & 0.00 \\ & & 1.00 & 1847.18 & 0.31 & 0.00 & 0.40 & 0.00 & 0.32 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 2085.18 & 0.66 & 0.00 & 0.58 & 0.00 & 0.53 & 0.00 \\ & & 0.10 & 2046.20 & 0.66 & 0.00 & 0.60 & 0.00 & 0.53 & 0.00 \\ & & 0.20 & 2011.96 & 0.66 & 0.00 & 0.60 & 0.00 & 0.52 & 0.00 \\ & & 0.30 & 1969.12 & 0.66 & 0.00 & 0.56 & 0.00 & 0.52 & 0.00 \\ & & 0.40 & 1922.30 & 0.65 & 0.00 & 0.61 & 0.00 & 0.50 & 0.00 \\ & & 0.50 & 1905.01 & 0.60 & 0.00 & 0.57 & 0.00 & 0.49 & 0.00 \\ & & 0.60 & 1867.73 & 0.59 & 0.00 & 0.61 & 0.00 & 
0.47 & 0.00 \\ & & 0.70 & 1851.06 & 0.60 & 0.00 & 0.57 & 0.00 & 0.44 & 0.00 \\ & & 0.80 & 1836.84 & 0.55 & 0.00 & 0.55 & 0.00 & 0.39 & 0.00 \\ & & 0.90 & 1830.04 & 0.49 & 0.00 & 0.46 & 0.00 & 0.38 & 0.00 \\ & & 1.00 & 1839.97 & 0.27 & 0.00 & 0.40 & 0.00 & 0.35 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 2053.78 & 0.67 & 0.00 & 0.61 & 0.00 & 0.56 & 0.00 \\ & & 0.10 & 2035.00 & 0.64 & 0.00 & 0.55 & 0.00 & 0.53 & 0.00 \\ & & 0.20 & 1989.96 & 0.64 & 0.00 & 0.57 & 0.00 & 0.50 & 0.00 \\ & & 0.30 & 1949.32 & 0.67 & 0.00 & 0.57 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 1914.74 & 0.62 & 0.00 & 0.58 & 0.00 & 0.50 & 0.00 \\ & & 0.50 & 1891.02 & 0.58 & 0.00 & 0.57 & 0.00 & 0.48 & 0.00 \\ & & 0.60 & 1860.11 & 0.56 & 0.00 & 0.57 & 0.00 & 0.44 & 0.00 \\ & & 0.70 & 1837.08 & 0.59 & 0.00 & 0.56 & 0.00 & 0.44 & 0.00 \\ & & 0.80 & 1833.25 & 0.52 & 0.00 & 0.51 & 0.00 & 0.39 & 0.00 \\ & & 0.90 & 1826.99 & 0.48 & 0.00 & 0.47 & 0.00 & 0.38 & 0.00 \\ & & 1.00 & 1842.00 & 0.27 & 0.00 & 0.38 & 0.00 & 0.33 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 2054.12 & 0.67 & 0.00 & 0.61 & 0.00 & 0.54 & 0.00 \\ & & 0.10 & 2027.77 & 0.63 & 0.00 & 0.62 & 0.00 & 0.53 & 0.00 \\ & & 0.20 & 1961.82 & 0.64 & 0.00 & 0.60 & 0.00 & 0.53 & 0.00 \\ & & 0.30 & 1941.87 & 0.65 & 0.00 & 0.57 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 1909.94 & 0.61 & 0.00 & 0.57 & 0.00 & 0.49 & 0.00 \\ & & 0.50 & 1875.43 & 0.59 & 0.00 & 0.55 & 0.00 & 0.47 & 0.00 \\ & & 0.60 & 1856.21 & 0.54 & 0.00 & 0.55 & 0.00 & 0.44 & 0.00 \\ & & 0.70 & 1833.91 & 0.56 & 0.00 & 0.57 & 0.00 & 0.41 & 0.00 \\ & & 0.80 & 1832.70 & 0.52 & 0.00 & 0.51 & 0.00 & 0.39 & 0.00 \\ & & 0.90 & 1824.59 & 0.46 & 0.00 & 0.44 & 0.00 & 0.36 & 0.00 \\ & & 1.00 & 1840.39 & 0.27 & 0.00 & 0.39 & 0.00 & 0.33 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 2048.06 & 0.67 & 0.00 & 0.62 & 0.00 & 0.55 & 0.00 \\ & & 0.10 & 2012.74 & 0.64 & 0.00 & 0.62 & 0.00 & 0.52 & 0.00 \\ & & 0.20 & 1967.90 & 0.62 & 0.00 & 0.59 & 0.00 & 0.51 & 0.00 \\ & & 0.30 & 1927.27 & 0.66 & 0.00 & 0.58 & 0.00 & 0.53 & 0.00 \\ & & 0.40 & 1909.65 & 0.60 & 0.00 & 0.57 & 0.00 & 0.50 & 0.00 \\ & & 0.50 & 1877.19 & 0.58 & 0.00 & 0.54 & 0.00 & 0.47 & 0.00 \\ & & 0.60 & 1850.93 & 0.54 & 0.00 & 0.55 & 0.00 & 0.44 & 0.00 \\ & & 0.70 & 1830.99 & 0.55 & 0.00 & 0.54 & 0.00 & 0.41 & 0.00 \\ & & 0.80 & 1829.42 & 0.51 & 0.00 & 0.52 & 0.00 & 0.37 & 0.00 \\ & & 0.90 & 1836.53 & 0.45 & 0.00 & 0.45 & 0.00 & 0.34 & 0.00 \\ & & 1.00 & 1846.86 & 0.27 & 0.00 & 0.38 & 0.00 & 0.31 & 0.00 \\ \midrule 125 & 0.00 & 0.00 & 1056.72 & 1.46 & 0.00 & 1.30 & 0.00 & 1.22 & 0.00 \\ & & 0.10 & 1042.40 & 1.39 & 0.00 & 1.21 & 0.00 & 1.15 & 0.00 \\ & & 0.20 & 1027.00 & 1.34 & 0.00 & 1.22 & 0.00 & 1.07 & 0.00 \\ & & 0.30 & 1008.72 & 1.30 & 0.00 & 1.21 & 0.00 & 1.05 & 0.00 \\ & & 0.40 & 995.40 & 1.29 & 0.00 & 1.13 & 0.00 & 1.00 & 0.00 \\ & & 0.50 & 986.47 & 1.29 & 0.00 & 1.12 & 0.00 & 0.95 & 0.00 \\ & & 0.60 & 972.16 & 1.25 & 0.00 & 1.00 & 0.00 & 0.87 & 0.00 \\ & & 0.70 & 965.61 & 1.22 & 0.00 & 0.96 & 0.00 & 0.80 & 0.00 \\ & & 0.80 & 955.82 & 1.21 & 0.00 & 0.86 & 0.00 & 0.73 & 0.00 \\ & & 0.90 & 953.71 & 1.09 & 0.00 & 0.76 & 0.00 & 0.65 & 0.00 \\ & & 1.00 & 942.49 & 0.43 & 0.00 & 0.55 & 0.00 & 0.59 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 1050.53 & 1.40 & 0.00 & 1.12 & 0.00 & 1.15 & 0.00 \\ & & 0.10 & 1028.31 & 1.36 & 0.00 & 1.16 & 0.00 & 1.11 & 0.00 \\ & & 0.20 & 1018.44 & 1.29 & 0.00 & 1.23 & 0.00 & 1.06 & 0.00 \\ & & 0.30 & 1007.53 & 1.29 & 0.00 & 1.17 & 0.00 & 1.00 & 0.00 \\ & & 0.40 & 992.34 & 1.28 & 0.00 & 1.11 & 0.00 & 0.96 & 0.00 \\ & & 0.50 & 983.91 & 1.27 & 0.00 & 
1.11 & 0.00 & 0.91 & 0.00 \\ & & 0.60 & 965.93 & 1.22 & 0.00 & 0.98 & 0.00 & 0.85 & 0.00 \\ & & 0.70 & 964.11 & 1.20 & 0.00 & 0.95 & 0.00 & 0.79 & 0.00 \\ & & 0.80 & 954.73 & 1.19 & 0.00 & 0.84 & 0.00 & 0.71 & 0.00 \\ & & 0.90 & 953.16 & 1.09 & 0.00 & 0.77 & 0.00 & 0.64 & 0.00 \\ & & 1.00 & 941.36 & 0.42 & 0.00 & 0.55 & 0.00 & 0.57 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 1048.69 & 1.31 & 0.00 & 1.12 & 0.00 & 1.08 & 0.00 \\ & & 0.10 & 1019.85 & 1.34 & 0.00 & 1.21 & 0.00 & 1.08 & 0.00 \\ & & 0.20 & 1009.68 & 1.32 & 0.00 & 1.09 & 0.00 & 1.02 & 0.00 \\ & & 0.30 & 1006.79 & 1.23 & 0.00 & 1.16 & 0.00 & 0.95 & 0.00 \\ & & 0.40 & 987.36 & 1.26 & 0.00 & 1.10 & 0.00 & 0.93 & 0.00 \\ & & 0.50 & 979.17 & 1.24 & 0.00 & 1.01 & 0.00 & 0.89 & 0.00 \\ & & 0.60 & 965.66 & 1.20 & 0.00 & 0.97 & 0.00 & 0.83 & 0.00 \\ & & 0.70 & 965.59 & 1.16 & 0.00 & 0.94 & 0.00 & 0.74 & 0.00 \\ & & 0.80 & 954.21 & 1.10 & 0.00 & 0.85 & 0.00 & 0.68 & 0.00 \\ & & 0.90 & 950.84 & 1.09 & 0.00 & 0.74 & 0.00 & 0.63 & 0.00 \\ & & 1.00 & 941.36 & 0.43 & 0.00 & 0.53 & 0.00 & 0.58 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 1022.05 & 1.36 & 0.00 & 1.17 & 0.00 & 1.12 & 0.00 \\ & & 0.10 & 1013.55 & 1.37 & 0.00 & 1.23 & 0.00 & 1.08 & 0.00 \\ & & 0.20 & 1009.03 & 1.28 & 0.00 & 1.13 & 0.00 & 0.99 & 0.00 \\ & & 0.30 & 1001.93 & 1.23 & 0.00 & 1.17 & 0.00 & 0.96 & 0.00 \\ & & 0.40 & 987.09 & 1.22 & 0.00 & 1.08 & 0.00 & 0.89 & 0.00 \\ & & 0.50 & 978.89 & 1.19 & 0.00 & 1.00 & 0.00 & 0.85 & 0.00 \\ & & 0.60 & 965.38 & 1.17 & 0.00 & 1.01 & 0.00 & 0.79 & 0.00 \\ & & 0.70 & 965.31 & 1.16 & 0.00 & 0.91 & 0.00 & 0.70 & 0.00 \\ & & 0.80 & 953.70 & 1.08 & 0.00 & 0.84 & 0.00 & 0.70 & 0.00 \\ & & 0.90 & 949.95 & 1.06 & 0.00 & 0.73 & 0.00 & 0.64 & 0.00 \\ & & 1.00 & 941.36 & 0.42 & 0.00 & 0.55 & 0.00 & 0.56 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 1015.82 & 1.35 & 0.00 & 1.20 & 0.00 & 1.15 & 0.00 \\ & & 0.10 & 1012.90 & 1.35 & 0.00 & 1.21 & 0.00 & 1.06 & 0.00 \\ & & 0.20 & 1008.38 & 1.26 & 0.00 & 1.17 & 0.00 & 0.98 & 0.00 \\ & & 0.30 & 1001.65 & 1.21 & 0.00 & 1.16 & 0.00 & 0.94 & 0.00 \\ & & 0.40 & 986.81 & 1.13 & 0.00 & 1.08 & 0.00 & 0.90 & 0.00 \\ & & 0.50 & 978.62 & 1.13 & 0.00 & 1.02 & 0.00 & 0.84 & 0.00 \\ & & 0.60 & 965.11 & 1.15 & 0.00 & 0.99 & 0.00 & 0.77 & 0.00 \\ & & 0.70 & 965.03 & 1.16 & 0.00 & 0.92 & 0.00 & 0.72 & 0.00 \\ & & 0.80 & 952.19 & 1.08 & 0.00 & 0.85 & 0.00 & 0.69 & 0.00 \\ & & 0.90 & 949.95 & 1.05 & 0.00 & 0.71 & 0.00 & 0.63 & 0.00 \\ & & 1.00 & 941.36 & 0.43 & 0.00 & 0.53 & 0.00 & 0.56 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 1015.12 & 1.23 & 0.00 & 1.21 & 0.00 & 1.15 & 0.00 \\ & & 0.10 & 1012.25 & 1.24 & 0.00 & 1.12 & 0.00 & 1.05 & 0.00 \\ & & 0.20 & 1005.37 & 1.21 & 0.00 & 1.15 & 0.00 & 1.01 & 0.00 \\ & & 0.30 & 1001.38 & 1.20 & 0.00 & 1.02 & 0.00 & 0.94 & 0.00 \\ & & 0.40 & 986.53 & 1.15 & 0.00 & 1.10 & 0.00 & 0.92 & 0.00 \\ & & 0.50 & 978.34 & 1.12 & 0.00 & 1.00 & 0.00 & 0.83 & 0.00 \\ & & 0.60 & 964.89 & 1.12 & 0.00 & 0.97 & 0.00 & 0.79 & 0.00 \\ & & 0.70 & 965.03 & 1.14 & 0.00 & 0.90 & 0.00 & 0.72 & 0.00 \\ & & 0.80 & 952.19 & 1.06 & 0.00 & 0.82 & 0.00 & 0.66 & 0.00 \\ & & 0.90 & 949.95 & 1.06 & 0.00 & 0.73 & 0.00 & 0.62 & 0.00 \\ & & 1.00 & 941.36 & 0.43 & 0.00 & 0.51 & 0.00 & 0.56 & 0.00 \\ \midrule 150 & 0.00 & 0.00 & 669.61 & 1.67 & 0.00 & 1.88 & 0.00 & 1.86 & 0.00 \\ & & 0.10 & 667.92 & 1.59 & 0.00 & 1.83 & 0.00 & 1.73 & 0.00 \\ & & 0.20 & 664.61 & 1.62 & 0.00 & 1.85 & 0.00 & 1.53 & 0.00 \\ & & 0.30 & 657.06 & 1.69 & 0.00 & 1.76 & 0.00 & 1.46 & 0.00 \\ & & 0.40 & 651.03 & 1.70 & 0.00 & 1.68 & 0.00 & 1.42 
& 0.00 \\ & & 0.50 & 647.59 & 1.64 & 0.00 & 1.74 & 0.00 & 1.29 & 0.00 \\ & & 0.60 & 636.21 & 1.68 & 0.00 & 1.73 & 0.00 & 1.22 & 0.00 \\ & & 0.70 & 635.57 & 1.63 & 0.00 & 1.49 & 0.00 & 1.13 & 0.00 \\ & & 0.80 & 628.42 & 1.68 & 0.00 & 1.35 & 0.00 & 1.02 & 0.00 \\ & & 0.90 & 627.37 & 1.63 & 0.00 & 1.09 & 0.00 & 0.92 & 0.00 \\ & & 1.00 & 616.92 & 0.53 & 0.00 & 0.57 & 0.00 & 0.82 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 668.78 & 1.56 & 0.00 & 1.90 & 0.00 & 1.61 & 0.00 \\ & & 0.10 & 667.65 & 1.53 & 0.00 & 1.85 & 0.00 & 1.63 & 0.00 \\ & & 0.20 & 664.95 & 1.58 & 0.00 & 1.72 & 0.00 & 1.49 & 0.00 \\ & & 0.30 & 657.94 & 1.66 & 0.00 & 1.76 & 0.00 & 1.38 & 0.00 \\ & & 0.40 & 648.50 & 1.61 & 0.00 & 1.75 & 0.00 & 1.35 & 0.00 \\ & & 0.50 & 645.55 & 1.60 & 0.00 & 1.74 & 0.00 & 1.29 & 0.00 \\ & & 0.60 & 634.79 & 1.67 & 0.00 & 1.77 & 0.00 & 1.19 & 0.00 \\ & & 0.70 & 634.58 & 1.61 & 0.00 & 1.53 & 0.00 & 1.12 & 0.00 \\ & & 0.80 & 627.85 & 1.64 & 0.00 & 1.33 & 0.00 & 1.04 & 0.00 \\ & & 0.90 & 627.07 & 1.59 & 0.00 & 1.08 & 0.00 & 0.89 & 0.00 \\ & & 1.00 & 616.91 & 0.52 & 0.00 & 0.60 & 0.00 & 0.79 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 668.78 & 1.53 & 0.00 & 1.89 & 0.00 & 1.60 & 0.00 \\ & & 0.10 & 667.65 & 1.51 & 0.00 & 1.81 & 0.00 & 1.54 & 0.00 \\ & & 0.20 & 664.95 & 1.52 & 0.00 & 1.71 & 0.00 & 1.46 & 0.00 \\ & & 0.30 & 657.94 & 1.60 & 0.00 & 1.75 & 0.00 & 1.40 & 0.00 \\ & & 0.40 & 648.45 & 1.59 & 0.00 & 1.71 & 0.00 & 1.33 & 0.00 \\ & & 0.50 & 645.55 & 1.57 & 0.00 & 1.64 & 0.00 & 1.24 & 0.00 \\ & & 0.60 & 634.79 & 1.63 & 0.00 & 1.77 & 0.00 & 1.18 & 0.00 \\ & & 0.70 & 634.58 & 1.60 & 0.00 & 1.48 & 0.00 & 1.10 & 0.00 \\ & & 0.80 & 627.85 & 1.60 & 0.00 & 1.36 & 0.00 & 1.01 & 0.00 \\ & & 0.90 & 627.07 & 1.58 & 0.00 & 1.05 & 0.00 & 0.90 & 0.00 \\ & & 1.00 & 616.91 & 0.52 & 0.00 & 0.60 & 0.00 & 0.81 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 668.78 & 1.50 & 0.00 & 1.80 & 0.00 & 1.56 & 0.00 \\ & & 0.10 & 667.65 & 1.49 & 0.00 & 1.74 & 0.00 & 1.52 & 0.00 \\ & & 0.20 & 664.95 & 1.42 & 0.00 & 1.78 & 0.00 & 1.46 & 0.00 \\ & & 0.30 & 657.94 & 1.57 & 0.00 & 1.75 & 0.00 & 1.36 & 0.00 \\ & & 0.40 & 648.45 & 1.58 & 0.00 & 1.68 & 0.00 & 1.30 & 0.00 \\ & & 0.50 & 645.55 & 1.55 & 0.00 & 1.57 & 0.00 & 1.21 & 0.00 \\ & & 0.60 & 634.79 & 1.56 & 0.00 & 1.71 & 0.00 & 1.20 & 0.00 \\ & & 0.70 & 634.58 & 1.58 & 0.00 & 1.48 & 0.00 & 1.11 & 0.00 \\ & & 0.80 & 627.85 & 1.62 & 0.00 & 1.33 & 0.00 & 1.03 & 0.00 \\ & & 0.90 & 627.07 & 1.48 & 0.00 & 1.09 & 0.00 & 0.91 & 0.00 \\ & & 1.00 & 616.91 & 0.53 & 0.00 & 0.59 & 0.00 & 0.80 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 668.78 & 1.47 & 0.00 & 1.82 & 0.00 & 1.58 & 0.00 \\ & & 0.10 & 667.65 & 1.43 & 0.00 & 1.73 & 0.00 & 1.50 & 0.00 \\ & & 0.20 & 664.95 & 1.40 & 0.00 & 1.92 & 0.00 & 1.42 & 0.00 \\ & & 0.30 & 657.94 & 1.55 & 0.00 & 1.62 & 0.00 & 1.34 & 0.00 \\ & & 0.40 & 648.45 & 1.57 & 0.00 & 1.58 & 0.00 & 1.27 & 0.00 \\ & & 0.50 & 645.55 & 1.55 & 0.00 & 1.50 & 0.00 & 1.22 & 0.00 \\ & & 0.60 & 634.79 & 1.53 & 0.00 & 1.58 & 0.00 & 1.13 & 0.00 \\ & & 0.70 & 634.58 & 1.57 & 0.00 & 1.45 & 0.00 & 1.09 & 0.00 \\ & & 0.80 & 627.85 & 1.59 & 0.00 & 1.33 & 0.00 & 1.02 & 0.00 \\ & & 0.90 & 627.07 & 1.49 & 0.00 & 1.06 & 0.00 & 0.87 & 0.00 \\ & & 1.00 & 616.91 & 0.51 & 0.00 & 0.62 & 0.00 & 0.85 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 668.72 & 1.50 & 0.00 & 1.81 & 0.00 & 1.57 & 0.00 \\ & & 0.10 & 667.65 & 1.42 & 0.00 & 1.72 & 0.00 & 1.52 & 0.00 \\ & & 0.20 & 664.95 & 1.38 & 0.00 & 1.91 & 0.00 & 1.41 & 0.00 \\ & & 0.30 & 657.94 & 1.59 & 0.00 & 1.62 & 0.00 & 1.34 & 0.00 \\ & & 0.40 & 648.45 
& 1.54 & 0.00 & 1.62 & 0.00 & 1.25 & 0.00 \\ & & 0.50 & 645.55 & 1.54 & 0.00 & 1.56 & 0.00 & 1.23 & 0.00 \\ & & 0.60 & 634.79 & 1.53 & 0.00 & 1.58 & 0.00 & 1.15 & 0.00 \\ & & 0.70 & 634.58 & 1.56 & 0.00 & 1.52 & 0.00 & 1.09 & 0.00 \\ & & 0.80 & 627.85 & 1.58 & 0.00 & 1.30 & 0.00 & 0.99 & 0.00 \\ & & 0.90 & 627.07 & 1.45 & 0.00 & 1.03 & 0.00 & 0.89 & 0.00 \\ & & 1.00 & 616.91 & 0.51 & 0.00 & 0.63 & 0.00 & 0.83 & 0.00 \\ \bottomrule \label{table:ttestLinearDist} \end{longtable} \begin{longtable}{rrlllrrrrrrr} \kill \caption{Paired t-test results comparing the individualized compensation scheme with the benchmark schemes in terms of mean acceptance rate obtained using linear acceptance probability.} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endfirsthead \caption{(continued.)} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endhead 50 & 0.00 & 0.00 & 0.91 & -0.01 & 0.62 & -0.13 & 0.00 & -0.06 & 0.02 \\ & & 0.10 & 0.92 & -0.04 & 0.11 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.92 & -0.03 & 0.23 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.92 & -0.02 & 0.45 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.40 & 0.92 & -0.02 & 0.47 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.92 & -0.02 & 0.50 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.60 & 0.92 & -0.02 & 0.40 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 0.70 & 0.92 & -0.03 & 0.17 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.80 & 0.92 & -0.02 & 0.34 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.93 & -0.03 & 0.04 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 1.00 & 0.93 & -0.03 & 0.02 & -0.09 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.93 & -0.01 & 0.72 & -0.13 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 0.93 & -0.02 & 0.32 & -0.13 & 0.00 & -0.04 & 0.00 \\ & & 0.20 & 0.93 & -0.04 & 0.16 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.93 & -0.03 & 0.27 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.93 & -0.02 & 0.45 & -0.12 & 0.00 & -0.04 & 0.01 \\ & & 0.50 & 0.93 & -0.03 & 0.38 & -0.11 & 0.00 & -0.04 & 0.01 \\ & & 0.60 & 0.93 & -0.02 & 0.45 & -0.10 & 0.00 & -0.04 & 0.01 \\ & & 0.70 & 0.93 & -0.03 & 0.15 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.94 & -0.02 & 0.16 & -0.09 & 0.00 & -0.03 & 0.00 \\ & & 0.90 & 0.94 & -0.04 & 0.01 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.94 & -0.04 & 0.00 & -0.08 & 0.00 & -0.03 & 0.01 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.95 & -0.03 & 0.33 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 0.10 & 0.94 & -0.03 & 0.30 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 0.95 & -0.05 & 0.09 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.95 & -0.04 & 0.09 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 0.95 & -0.03 & 0.14 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.95 & -0.03 & 0.13 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.96 & -0.04 & 0.14 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.96 & -0.04 & 0.03 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.96 & -0.04 & 0.04 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.96 & -0.05 & 0.00 & -0.09 & 0.00 & 
-0.04 & 0.00 \\ & & 1.00 & 0.96 & -0.04 & 0.00 & -0.08 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.96 & -0.04 & 0.08 & -0.08 & 0.00 & -0.04 & 0.01 \\ & & 0.10 & 0.96 & -0.04 & 0.14 & -0.09 & 0.00 & -0.05 & 0.01 \\ & & 0.20 & 0.96 & -0.04 & 0.10 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.30 & 0.96 & -0.05 & 0.04 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 0.96 & -0.04 & 0.06 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.96 & -0.03 & 0.10 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.97 & -0.03 & 0.07 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.96 & -0.04 & 0.07 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.96 & -0.04 & 0.02 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.96 & -0.05 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ & & 1.00 & 0.97 & -0.05 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.98 & -0.05 & 0.04 & -0.07 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 0.98 & -0.04 & 0.07 & -0.06 & 0.00 & -0.05 & 0.01 \\ & & 0.20 & 0.98 & -0.05 & 0.06 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.30 & 0.98 & -0.05 & 0.01 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 0.97 & -0.05 & 0.02 & -0.10 & 0.01 & -0.04 & 0.01 \\ & & 0.50 & 0.97 & -0.04 & 0.04 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.60 & 0.97 & -0.04 & 0.02 & -0.09 & 0.00 & -0.03 & 0.04 \\ & & 0.70 & 0.97 & -0.04 & 0.04 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 0.97 & -0.04 & 0.01 & -0.08 & 0.00 & -0.03 & 0.01 \\ & & 0.90 & 0.97 & -0.04 & 0.02 & -0.07 & 0.00 & -0.03 & 0.01 \\ & & 1.00 & 0.97 & -0.05 & 0.00 & -0.06 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.99 & -0.05 & 0.00 & -0.05 & 0.02 & -0.04 & 0.00 \\ & & 0.10 & 0.99 & -0.05 & 0.07 & -0.06 & 0.02 & -0.04 & 0.01 \\ & & 0.20 & 0.99 & -0.05 & 0.04 & -0.07 & 0.01 & -0.04 & 0.00 \\ & & 0.30 & 0.99 & -0.06 & 0.01 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 0.99 & -0.06 & 0.02 & -0.10 & 0.01 & -0.05 & 0.00 \\ & & 0.50 & 0.98 & -0.05 & 0.03 & -0.10 & 0.01 & -0.04 & 0.00 \\ & & 0.60 & 0.98 & -0.04 & 0.04 & -0.09 & 0.00 & -0.03 & 0.00 \\ & & 0.70 & 0.98 & -0.04 & 0.03 & -0.09 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 0.98 & -0.04 & 0.01 & -0.08 & 0.00 & -0.02 & 0.00 \\ & & 0.90 & 0.98 & -0.04 & 0.01 & -0.07 & 0.00 & -0.02 & 0.01 \\ & & 1.00 & 0.98 & -0.05 & 0.00 & -0.06 & 0.00 & -0.03 & 0.00 \\ \midrule 75 & 0.00 & 0.00 & 0.93 & -0.03 & 0.08 & -0.14 & 0.00 & -0.08 & 0.00 \\ & & 0.10 & 0.93 & -0.04 & 0.01 & -0.14 & 0.00 & -0.08 & 0.00 \\ & & 0.20 & 0.93 & -0.03 & 0.05 & -0.14 & 0.00 & -0.08 & 0.00 \\ & & 0.30 & 0.94 & -0.04 & 0.09 & -0.13 & 0.00 & -0.07 & 0.00 \\ & & 0.40 & 0.94 & -0.03 & 0.13 & -0.13 & 0.00 & -0.08 & 0.00 \\ & & 0.50 & 0.94 & -0.03 & 0.17 & -0.14 & 0.00 & -0.07 & 0.00 \\ & & 0.60 & 0.95 & -0.04 & 0.04 & -0.12 & 0.00 & -0.07 & 0.00 \\ & & 0.70 & 0.95 & -0.03 & 0.17 & -0.11 & 0.00 & -0.07 & 0.00 \\ & & 0.80 & 0.95 & -0.03 & 0.04 & -0.11 & 0.00 & -0.06 & 0.00 \\ & & 0.90 & 0.95 & -0.04 & 0.00 & -0.10 & 0.00 & -0.06 & 0.00 \\ & & 1.00 & 0.95 & -0.03 & 0.00 & -0.10 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.94 & -0.03 & 0.04 & -0.13 & 0.00 & -0.07 & 0.00 \\ & & 0.10 & 0.94 & -0.03 & 0.04 & -0.14 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.95 & -0.04 & 0.07 & -0.14 & 0.00 & -0.06 & 0.00 \\ & & 0.30 & 0.95 & -0.04 & 0.02 & -0.14 & 0.00 & -0.06 & 0.00 \\ & & 0.40 & 0.95 & -0.03 & 0.04 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.50 & 0.95 & -0.03 & 0.12 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.60 & 0.96 & -0.04 & 0.09 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.70 & 0.96 & -0.04 & 0.07 & -0.11 & 0.00 & -0.06 & 0.00 \\ & & 0.80 & 0.96 & -0.04 & 
0.03 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.90 & 0.96 & -0.04 & 0.00 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 1.00 & 0.96 & -0.03 & 0.01 & -0.09 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.95 & -0.03 & 0.12 & -0.11 & 0.00 & -0.06 & 0.00 \\ & & 0.10 & 0.96 & -0.03 & 0.03 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.96 & -0.04 & 0.04 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.30 & 0.96 & -0.03 & 0.05 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.96 & -0.04 & 0.02 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.50 & 0.96 & -0.03 & 0.02 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.60 & 0.96 & -0.03 & 0.04 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.70 & 0.97 & -0.04 & 0.06 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.80 & 0.97 & -0.04 & 0.01 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.97 & -0.05 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.97 & -0.03 & 0.00 & -0.08 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.97 & -0.03 & 0.07 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 0.96 & -0.02 & 0.05 & -0.08 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.97 & -0.05 & 0.02 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.30 & 0.96 & -0.04 & 0.02 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.97 & -0.04 & 0.01 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.97 & -0.03 & 0.01 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.60 & 0.97 & -0.03 & 0.03 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.70 & 0.97 & -0.04 & 0.03 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.97 & -0.04 & 0.01 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.97 & -0.04 & 0.01 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.97 & -0.04 & 0.00 & -0.08 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.98 & -0.04 & 0.03 & -0.07 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 0.98 & -0.03 & 0.03 & -0.08 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 0.98 & -0.04 & 0.01 & -0.08 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.98 & -0.04 & 0.01 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 0.98 & -0.04 & 0.01 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.97 & -0.04 & 0.01 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.98 & -0.04 & 0.02 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.98 & -0.04 & 0.02 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.98 & -0.04 & 0.01 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.98 & -0.05 & 0.01 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.98 & -0.04 & 0.00 & -0.07 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.99 & -0.03 & 0.01 & -0.05 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 0.99 & -0.03 & 0.01 & -0.06 & 0.00 & -0.04 & 0.00 \\ & & 0.20 & 0.99 & -0.03 & 0.01 & -0.08 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.99 & -0.05 & 0.00 & -0.09 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.99 & -0.04 & 0.00 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.98 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.99 & -0.03 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.99 & -0.04 & 0.01 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.99 & -0.04 & 0.01 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.99 & -0.04 & 0.00 & -0.08 & 0.00 & -0.04 & 0.01 \\ & & 1.00 & 0.99 & -0.04 & 0.00 & -0.07 & 0.00 & -0.04 & 0.00 \\ \midrule 100 & 0.00 & 0.00 & 0.93 & -0.03 & 0.01 & -0.09 & 0.00 & -0.06 & 0.00 \\ & & 0.10 & 0.94 & -0.04 & 0.00 & -0.16 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 0.94 & -0.06 & 0.01 & -0.16 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.94 & -0.04 & 0.01 & -0.15 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.95 & -0.04 & 0.01 & -0.14 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.95 & -0.04 & 0.00 & -0.14 & 0.00 & -0.05 & 0.00 \\ & & 0.60 & 0.96 & -0.04 & 0.01 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 
0.70 & 0.96 & -0.05 & 0.00 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.80 & 0.96 & -0.04 & 0.00 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.97 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.97 & -0.03 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.95 & -0.04 & 0.01 & -0.08 & 0.00 & -0.07 & 0.00 \\ & & 0.10 & 0.95 & -0.04 & 0.01 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.95 & -0.06 & 0.01 & -0.14 & 0.00 & -0.06 & 0.00 \\ & & 0.30 & 0.96 & -0.05 & 0.00 & -0.14 & 0.00 & -0.06 & 0.00 \\ & & 0.40 & 0.96 & -0.05 & 0.00 & -0.14 & 0.00 & -0.06 & 0.00 \\ & & 0.50 & 0.96 & -0.04 & 0.00 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.60 & 0.97 & -0.04 & 0.01 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.70 & 0.97 & -0.04 & 0.00 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.80 & 0.97 & -0.05 & 0.00 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.97 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.97 & -0.03 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.97 & -0.03 & 0.01 & -0.08 & 0.00 & -0.06 & 0.00 \\ & & 0.10 & 0.96 & -0.04 & 0.01 & -0.08 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.96 & -0.04 & 0.00 & -0.11 & 0.00 & -0.06 & 0.00 \\ & & 0.30 & 0.96 & -0.05 & 0.00 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.40 & 0.97 & -0.05 & 0.00 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.50 & 0.97 & -0.04 & 0.01 & -0.12 & 0.00 & -0.06 & 0.00 \\ & & 0.60 & 0.97 & -0.04 & 0.00 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 0.70 & 0.97 & -0.04 & 0.00 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.97 & -0.04 & 0.00 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.98 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.98 & -0.03 & 0.00 & -0.07 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.98 & -0.04 & 0.01 & -0.06 & 0.00 & -0.05 & 0.00 \\ & & 0.10 & 0.98 & -0.04 & 0.01 & -0.07 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 0.97 & -0.04 & 0.01 & -0.08 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.97 & -0.06 & 0.00 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.98 & -0.05 & 0.01 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.97 & -0.04 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.98 & -0.04 & 0.00 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.98 & -0.04 & 0.00 & -0.11 & 0.00 & -0.05 & 0.00 \\ & & 0.80 & 0.98 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.98 & -0.04 & 0.00 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.98 & -0.03 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.99 & -0.04 & 0.00 & -0.06 & 0.00 & -0.05 & 0.00 \\ & & 0.10 & 0.98 & -0.04 & 0.00 & -0.05 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 0.98 & -0.04 & 0.01 & -0.06 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.98 & -0.06 & 0.00 & -0.09 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.98 & -0.05 & 0.00 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.98 & -0.04 & 0.01 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.98 & -0.04 & 0.01 & -0.11 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.98 & -0.04 & 0.00 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.80 & 0.98 & -0.03 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.98 & -0.04 & 0.00 & -0.07 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.98 & -0.03 & 0.00 & -0.06 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.99 & -0.04 & 0.00 & -0.04 & 0.00 & -0.05 & 0.00 \\ & & 0.10 & 0.99 & -0.04 & 0.00 & -0.04 & 0.00 & -0.04 & 0.00 \\ & & 0.20 & 0.99 & -0.04 & 0.00 & -0.05 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 0.99 & -0.05 & 0.00 & -0.07 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 0.99 & -0.05 & 0.00 & -0.10 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 0.99 & -0.05 & 0.00 & -0.10 & 
0.00 & -0.04 & 0.00 \\ & & 0.60 & 0.98 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 0.98 & -0.04 & 0.00 & -0.10 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 0.98 & -0.04 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.90 & 0.99 & -0.03 & 0.00 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 1.00 & 0.99 & -0.03 & 0.00 & -0.06 & 0.00 & -0.03 & 0.00 \\ \midrule 125 & 0.00 & 0.00 & 0.99 & -0.08 & 0.01 & -0.06 & 0.00 & -0.07 & 0.00 \\ & & 0.10 & 0.99 & -0.08 & 0.00 & -0.16 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 1.00 & -0.07 & 0.00 & -0.15 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 1.00 & -0.07 & 0.00 & -0.14 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 1.00 & -0.06 & 0.00 & -0.13 & 0.00 & -0.05 & 0.00 \\ & & 0.50 & 1.00 & -0.06 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 1.00 & -0.05 & 0.00 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 1.00 & -0.05 & 0.00 & -0.09 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 1.00 & -0.05 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ & & 0.90 & 1.00 & -0.05 & 0.00 & -0.06 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.02 & 0.00 & -0.03 & 0.01 & -0.02 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 1.00 & -0.07 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ & & 0.10 & 1.00 & -0.06 & 0.00 & -0.13 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 1.00 & -0.06 & 0.00 & -0.15 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 1.00 & -0.06 & 0.00 & -0.14 & 0.00 & -0.05 & 0.00 \\ & & 0.40 & 1.00 & -0.06 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.50 & 1.00 & -0.05 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 1.00 & -0.05 & 0.00 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.70 & 1.00 & -0.05 & 0.00 & -0.09 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 1.00 & -0.05 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ & & 0.90 & 1.00 & -0.05 & 0.00 & -0.06 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.02 & 0.00 & -0.03 & 0.01 & -0.02 & 0.01 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 1.00 & -0.05 & 0.00 & -0.05 & 0.00 & -0.05 & 0.00 \\ & & 0.10 & 1.00 & -0.06 & 0.00 & -0.05 & 0.01 & -0.05 & 0.00 \\ & & 0.20 & 1.00 & -0.06 & 0.00 & -0.12 & 0.00 & -0.05 & 0.00 \\ & & 0.30 & 1.00 & -0.06 & 0.00 & -0.14 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & -0.05 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.50 & 1.00 & -0.05 & 0.00 & -0.10 & 0.00 & -0.04 & 0.00 \\ & & 0.60 & 1.00 & -0.05 & 0.00 & -0.10 & 0.00 & -0.03 & 0.00 \\ & & 0.70 & 1.00 & -0.04 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 1.00 & -0.04 & 0.00 & -0.07 & 0.00 & -0.03 & 0.00 \\ & & 0.90 & 1.00 & -0.05 & 0.00 & -0.05 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.02 & 0.00 & -0.03 & 0.00 & -0.02 & 0.01 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 1.00 & -0.05 & 0.00 & -0.05 & 0.01 & -0.05 & 0.00 \\ & & 0.10 & 1.00 & -0.05 & 0.00 & -0.05 & 0.01 & -0.04 & 0.00 \\ & & 0.20 & 1.00 & -0.05 & 0.00 & -0.09 & 0.00 & -0.04 & 0.00 \\ & & 0.30 & 1.00 & -0.05 & 0.00 & -0.13 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & -0.05 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.50 & 1.00 & -0.05 & 0.00 & -0.10 & 0.00 & -0.03 & 0.00 \\ & & 0.60 & 1.00 & -0.04 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.04 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 1.00 & -0.04 & 0.00 & -0.07 & 0.00 & -0.03 & 0.01 \\ & & 0.90 & 1.00 & -0.04 & 0.00 & -0.05 & 0.00 & -0.02 & 0.01 \\ & & 1.00 & 1.00 & -0.02 & 0.00 & -0.03 & 0.00 & -0.02 & 0.01 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 1.00 & -0.04 & 0.00 & -0.04 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 1.00 & -0.05 & 0.00 & -0.04 & 0.01 & -0.04 & 0.00 \\ & & 0.20 & 1.00 & -0.05 & 0.00 & -0.04 & 0.01 & -0.04 & 0.00 \\ & & 0.30 & 1.00 & -0.04 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & 
-0.04 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.50 & 1.00 & -0.04 & 0.00 & -0.10 & 0.00 & -0.03 & 0.00 \\ & & 0.60 & 1.00 & -0.04 & 0.01 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.04 & 0.00 & -0.08 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 1.00 & -0.04 & 0.00 & -0.07 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.04 & 0.00 & -0.05 & 0.00 & -0.02 & 0.01 \\ & & 1.00 & 1.00 & -0.02 & 0.00 & -0.03 & 0.01 & -0.02 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 1.00 & -0.03 & 0.00 & -0.03 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 1.00 & -0.04 & 0.00 & -0.03 & 0.01 & -0.04 & 0.00 \\ & & 0.20 & 1.00 & -0.04 & 0.00 & -0.04 & 0.01 & -0.04 & 0.00 \\ & & 0.30 & 1.00 & -0.04 & 0.00 & -0.08 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & -0.04 & 0.00 & -0.11 & 0.00 & -0.03 & 0.00 \\ & & 0.50 & 1.00 & -0.03 & 0.00 & -0.10 & 0.00 & -0.03 & 0.00 \\ & & 0.60 & 1.00 & -0.03 & 0.01 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.04 & 0.01 & -0.08 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 1.00 & -0.03 & 0.00 & -0.07 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.04 & 0.00 & -0.05 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.02 & 0.00 & -0.03 & 0.01 & -0.02 & 0.00 \\ \midrule 150 & 0.00 & 0.00 & 1.00 & -0.05 & 0.00 & -0.03 & 0.02 & -0.06 & 0.00 \\ & & 0.10 & 1.00 & -0.05 & 0.00 & -0.15 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 1.00 & -0.05 & 0.00 & -0.15 & 0.00 & -0.04 & 0.00 \\ & & 0.30 & 1.00 & -0.04 & 0.00 & -0.13 & 0.00 & -0.04 & 0.01 \\ & & 0.40 & 1.00 & -0.04 & 0.00 & -0.12 & 0.00 & -0.04 & 0.01 \\ & & 0.50 & 1.00 & -0.04 & 0.00 & -0.12 & 0.00 & -0.03 & 0.00 \\ & & 0.60 & 1.00 & -0.04 & 0.00 & -0.11 & 0.00 & -0.03 & 0.00 \\ & & 0.70 & 1.00 & -0.03 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 1.00 & -0.04 & 0.00 & -0.07 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.04 & 0.00 & -0.05 & 0.00 & -0.02 & 0.01 \\ & & 1.00 & 1.00 & -0.01 & 0.00 & -0.01 & 0.00 & -0.02 & 0.01 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 1.00 & -0.04 & 0.00 & -0.03 & 0.01 & -0.05 & 0.00 \\ & & 0.10 & 1.00 & -0.04 & 0.00 & -0.13 & 0.01 & -0.05 & 0.00 \\ & & 0.20 & 1.00 & -0.04 & 0.00 & -0.13 & 0.00 & -0.04 & 0.00 \\ & & 0.30 & 1.00 & -0.04 & 0.00 & -0.13 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & -0.04 & 0.00 & -0.12 & 0.00 & -0.03 & 0.00 \\ & & 0.50 & 1.00 & -0.04 & 0.00 & -0.12 & 0.00 & -0.03 & 0.00 \\ & & 0.60 & 1.00 & -0.03 & 0.00 & -0.11 & 0.00 & -0.03 & 0.00 \\ & & 0.70 & 1.00 & -0.03 & 0.00 & -0.09 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 1.00 & -0.03 & 0.00 & -0.07 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.03 & 0.00 & -0.05 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.01 & 0.00 & -0.01 & 0.00 & -0.02 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 1.00 & -0.03 & 0.00 & -0.03 & 0.01 & -0.05 & 0.00 \\ & & 0.10 & 1.00 & -0.03 & 0.00 & -0.02 & 0.01 & -0.04 & 0.00 \\ & & 0.20 & 1.00 & -0.03 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.30 & 1.00 & -0.03 & 0.00 & -0.13 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & -0.03 & 0.00 & -0.12 & 0.00 & -0.03 & 0.00 \\ & & 0.50 & 1.00 & -0.03 & 0.01 & -0.11 & 0.00 & -0.03 & 0.00 \\ & & 0.60 & 1.00 & -0.03 & 0.01 & -0.11 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.03 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.80 & 1.00 & -0.03 & 0.00 & -0.07 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.03 & 0.00 & -0.05 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.01 & 0.00 & -0.01 & 0.00 & -0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 1.00 & -0.03 & 0.00 & -0.03 & 0.00 & -0.04 & 0.00 \\ & & 0.10 & 1.00 & -0.03 & 0.00 & -0.02 & 0.01 & -0.04 & 0.00 \\ & & 0.20 & 1.00 & -0.02 & 0.00 & -0.11 & 0.01 & -0.04 & 0.00 
\\ & & 0.30 & 1.00 & -0.03 & 0.00 & -0.12 & 0.00 & -0.04 & 0.00 \\ & & 0.40 & 1.00 & -0.03 & 0.00 & -0.12 & 0.00 & -0.03 & 0.01 \\ & & 0.50 & 1.00 & -0.03 & 0.01 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.60 & 1.00 & -0.02 & 0.00 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.02 & 0.01 & -0.08 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 1.00 & -0.03 & 0.00 & -0.06 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.03 & 0.00 & -0.04 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.01 & 0.00 & -0.01 & 0.00 & -0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 1.00 & -0.03 & 0.00 & -0.02 & 0.01 & -0.04 & 0.00 \\ & & 0.10 & 1.00 & -0.02 & 0.00 & -0.02 & 0.01 & -0.04 & 0.00 \\ & & 0.20 & 1.00 & -0.02 & 0.01 & -0.02 & 0.01 & -0.03 & 0.00 \\ & & 0.30 & 1.00 & -0.03 & 0.00 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.40 & 1.00 & -0.03 & 0.00 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.50 & 1.00 & -0.03 & 0.01 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.60 & 1.00 & -0.02 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.02 & 0.01 & -0.08 & 0.00 & -0.03 & 0.00 \\ & & 0.80 & 1.00 & -0.02 & 0.00 & -0.06 & 0.00 & -0.02 & 0.01 \\ & & 0.90 & 1.00 & -0.03 & 0.00 & -0.04 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.01 & 0.00 & -0.01 & 0.00 & -0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 1.00 & -0.02 & 0.00 & -0.02 & 0.01 & -0.03 & 0.00 \\ & & 0.10 & 1.00 & -0.02 & 0.00 & -0.02 & 0.01 & -0.03 & 0.00 \\ & & 0.20 & 1.00 & -0.02 & 0.00 & -0.02 & 0.02 & -0.03 & 0.00 \\ & & 0.30 & 1.00 & -0.02 & 0.00 & -0.09 & 0.01 & -0.03 & 0.01 \\ & & 0.40 & 1.00 & -0.02 & 0.00 & -0.10 & 0.00 & -0.03 & 0.01 \\ & & 0.50 & 1.00 & -0.02 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.60 & 1.00 & -0.02 & 0.00 & -0.09 & 0.00 & -0.03 & 0.01 \\ & & 0.70 & 1.00 & -0.02 & 0.01 & -0.08 & 0.00 & -0.02 & 0.01 \\ & & 0.80 & 1.00 & -0.02 & 0.00 & -0.06 & 0.00 & -0.02 & 0.00 \\ & & 0.90 & 1.00 & -0.03 & 0.00 & -0.04 & 0.00 & -0.02 & 0.00 \\ & & 1.00 & 1.00 & -0.01 & 0.00 & -0.01 & 0.00 & -0.01 & 0.01 \\ \bottomrule \label{table:ttestLinearAcceptance} \end{longtable} \begin{longtable}{rrlllrrrrrrr} \kill \caption{Paired t-test results comparing the individualized compensation scheme with the benchmark schemes in terms of total expected cost obtained using logistic acceptance probability.} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endfirsthead \caption{(continued.)} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. 
& p-Val \\ \midrule \endhead 50 & 0.00 & 0.00 & 6270.38 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.10 & 6182.13 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.20 & 6119.91 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.30 & 5980.57 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.40 & 5874.89 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.50 & 5775.77 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.60 & 5666.36 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.70 & 5545.61 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.80 & 5409.07 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.90 & 5316.13 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 1.00 & 5167.72 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 6356.08 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.10 & 6270.26 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.20 & 6207.37 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.30 & 6069.91 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.40 & 5961.49 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.50 & 5860.52 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.60 & 5749.01 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.70 & 5626.64 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.80 & 5487.28 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.90 & 5391.52 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 1.00 & 5239.93 & 0.00 & 0.00 & 0.01 & 0.00 & 0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 6420.89 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.10 & 6338.34 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.20 & 6276.30 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.30 & 6141.22 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.40 & 6031.72 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.50 & 5930.01 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.60 & 5817.12 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.70 & 5694.32 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.80 & 5552.84 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.90 & 5454.74 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 1.00 & 5299.88 & 0.00 & 0.00 & 0.01 & 0.00 & 0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 6476.43 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.10 & 6394.63 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.20 & 6333.98 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.30 & 6202.88 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.40 & 6092.49 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.50 & 5990.26 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.60 & 5876.60 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.70 & 5753.84 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.80 & 5611.09 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.90 & 5511.20 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 1.00 & 5353.98 & 0.00 & 0.00 & 0.01 & 0.00 & 0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 6526.38 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.10 & 6445.39 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.20 & 6384.76 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.30 & 6256.06 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.40 & 6147.00 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.50 & 6044.13 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.60 & 5930.37 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.70 & 5807.67 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.80 & 5664.11 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.90 & 5562.55 & 0.00 
& 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 1.00 & 5403.70 & 0.00 & 0.00 & 0.01 & 0.00 & 0.01 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 6570.40 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.10 & 6491.83 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.20 & 6431.93 & 0.02 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.30 & 6303.96 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.40 & 6196.04 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.50 & 6093.26 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.60 & 5978.72 & 0.01 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ & & 0.70 & 5857.34 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.80 & 5712.78 & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 0.90 & 5609.72 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ & & 1.00 & 5450.05 & 0.00 & 0.00 & 0.01 & 0.00 & 0.02 & 0.00 \\ \midrule 75 & 0.00 & 0.00 & 5664.75 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 5532.92 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 5439.17 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.30 & 5233.61 & 0.03 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.40 & 5074.87 & 0.03 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.50 & 4927.88 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.60 & 4766.44 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.70 & 4591.37 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.80 & 4388.97 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 4249.14 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ & & 1.00 & 4029.11 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 5794.60 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 5662.56 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 5567.64 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.30 & 5361.00 & 0.03 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.40 & 5196.88 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.50 & 5046.53 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.60 & 4881.14 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.70 & 4701.78 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.80 & 4495.28 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 4350.96 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ & & 1.00 & 4125.22 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 5894.67 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.10 & 5766.17 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 5672.39 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.30 & 5467.01 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.40 & 5301.40 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.50 & 5148.81 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.60 & 4980.66 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.70 & 4798.91 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.80 & 4588.92 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 4441.16 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 4210.70 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 5980.01 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.10 & 5853.22 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.20 & 5761.15 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.30 & 5558.76 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.40 & 5393.13 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.50 & 5239.32 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.60 & 5068.77 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.70 & 4887.48 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 
0.80 & 4674.87 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 4523.78 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 4290.02 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 6053.31 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.10 & 5930.77 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.20 & 5839.84 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 5639.54 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.40 & 5475.23 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.50 & 5320.85 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.60 & 5148.77 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.70 & 4967.29 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.80 & 4753.62 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ & & 0.90 & 4600.60 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 4363.64 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 6118.05 & 0.03 & 0.00 & 0.03 & 0.00 & 0.05 & 0.00 \\ & & 0.10 & 5999.60 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.20 & 5911.26 & 0.03 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.30 & 5712.43 & 0.03 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.40 & 5548.69 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.50 & 5394.12 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.60 & 5221.66 & 0.02 & 0.00 & 0.04 & 0.00 & 0.06 & 0.00 \\ & & 0.70 & 5040.95 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.80 & 4826.78 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.90 & 4671.18 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 4432.26 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \midrule 100 & 0.00 & 0.00 & 5234.62 & 0.05 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 5070.18 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.20 & 4950.28 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.30 & 4697.17 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.40 & 4491.97 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.50 & 4306.88 & 0.03 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 4104.79 & 0.03 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.70 & 3886.38 & 0.03 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.80 & 3633.95 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 3457.04 & 0.01 & 0.00 & 0.05 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 3185.87 & 0.01 & 0.01 & 0.04 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 5399.39 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 5233.53 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.20 & 5112.20 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.30 & 4853.13 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.40 & 4640.74 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.50 & 4449.66 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 4239.40 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.70 & 4013.43 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.80 & 3753.12 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 3570.50 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 3290.62 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 5529.35 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 5365.68 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 5246.53 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 4987.66 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.40 & 4773.60 & 0.03 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.50 & 4579.45 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 4364.24 & 0.03 & 0.00 & 0.07 & 0.00 & 
0.09 & 0.00 \\ & & 0.70 & 4134.41 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.80 & 3867.53 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 3679.58 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 3392.20 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 5637.90 & 0.04 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.10 & 5478.77 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 5361.66 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 5104.40 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.40 & 4890.79 & 0.03 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.50 & 4694.73 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 4475.93 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.70 & 4244.64 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.80 & 3974.43 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 3782.38 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 3489.21 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 5732.16 & 0.04 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 5577.36 & 0.04 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.20 & 5462.72 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 5208.75 & 0.04 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.40 & 4996.46 & 0.03 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.50 & 4799.09 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 4577.77 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.70 & 4345.33 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.80 & 4072.51 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 3877.12 & 0.01 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 1.00 & 3580.64 & 0.01 & 0.00 & 0.05 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 5814.50 & 0.04 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.10 & 5664.73 & 0.04 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.20 & 5553.29 & 0.04 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.30 & 5302.82 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 5092.70 & 0.03 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.50 & 4894.80 & 0.03 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 4671.24 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.70 & 4438.83 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.80 & 4162.91 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.90 & 3965.44 & 0.01 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 1.00 & 3665.70 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ \midrule 125 & 0.00 & 0.00 & 4847.17 & 0.05 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.10 & 4672.42 & 0.05 & 0.00 & 0.07 & 0.00 & 0.11 & 0.00 \\ & & 0.20 & 4545.41 & 0.05 & 0.00 & 0.07 & 0.00 & 0.11 & 0.00 \\ & & 0.30 & 4276.17 & 0.04 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.40 & 4061.71 & 0.04 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.50 & 3860.68 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 3645.19 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.70 & 3425.16 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.80 & 3162.66 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 2975.58 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 2708.84 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 5002.00 & 0.05 & 0.00 & 0.07 & 0.00 & 0.11 & 0.00 \\ & & 0.10 & 4821.35 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.20 & 4690.25 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.30 & 4412.02 & 0.04 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.40 & 4189.50 & 0.04 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.50 & 3981.75 & 0.03 & 
0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.60 & 3758.04 & 0.03 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.70 & 3530.97 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.80 & 3260.33 & 0.02 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 3067.01 & 0.01 & 0.00 & 0.05 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 2791.99 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 5139.49 & 0.05 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.10 & 4957.61 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.20 & 4824.94 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.30 & 4541.73 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.40 & 4312.39 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.50 & 4098.54 & 0.04 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.60 & 3866.98 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.70 & 3633.24 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.80 & 3354.91 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 3155.71 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 2872.94 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 5259.21 & 0.05 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.10 & 5079.55 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.20 & 4947.71 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.30 & 4661.94 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.40 & 4428.54 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.50 & 4210.34 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.60 & 3971.93 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.70 & 3732.20 & 0.03 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.80 & 3446.73 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.90 & 3241.91 & 0.01 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 1.00 & 2951.78 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 5366.46 & 0.05 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.10 & 5188.30 & 0.05 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.20 & 5058.68 & 0.05 & 0.00 & 0.08 & 0.00 & 0.10 & 0.00 \\ & & 0.30 & 4773.35 & 0.04 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.40 & 4537.34 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.50 & 4315.24 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.60 & 4071.88 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.70 & 3827.72 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.80 & 3535.98 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.90 & 3325.75 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 1.00 & 3028.51 & 0.01 & 0.00 & 0.05 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 5462.92 & 0.05 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 5287.65 & 0.05 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.20 & 5159.20 & 0.05 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.30 & 4876.33 & 0.04 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.40 & 4640.30 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.50 & 4414.99 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.60 & 4166.09 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.70 & 3919.45 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.80 & 3622.27 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.90 & 3407.27 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 1.00 & 3103.16 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ \midrule 150 & 0.00 & 0.00 & 4561.63 & 0.06 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.10 & 4383.34 & 0.05 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.20 & 4253.99 & 0.05 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.30 & 3978.78 & 0.04 & 0.00 & 0.07 & 0.00 & 0.11 & 0.00 \\ & & 0.40 
& 3763.65 & 0.03 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.50 & 3553.11 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.60 & 3330.54 & 0.03 & 0.00 & 0.07 & 0.00 & 0.08 & 0.00 \\ & & 0.70 & 3113.84 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.80 & 2853.20 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.90 & 2665.55 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 2413.22 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 4702.76 & 0.06 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.10 & 4518.03 & 0.06 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.20 & 4384.70 & 0.05 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.30 & 4101.15 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.40 & 3878.63 & 0.04 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.50 & 3661.52 & 0.03 & 0.00 & 0.08 & 0.00 & 0.10 & 0.00 \\ & & 0.60 & 3431.10 & 0.03 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.70 & 3207.38 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.80 & 2938.18 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 2744.41 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ & & 1.00 & 2483.65 & 0.00 & 0.00 & 0.02 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 4835.74 & 0.06 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.10 & 4646.57 & 0.06 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.20 & 4509.82 & 0.05 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.30 & 4218.82 & 0.05 & 0.00 & 0.08 & 0.00 & 0.13 & 0.00 \\ & & 0.40 & 3989.40 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.50 & 3766.25 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.60 & 3528.35 & 0.03 & 0.00 & 0.08 & 0.00 & 0.10 & 0.00 \\ & & 0.70 & 3298.21 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.80 & 3020.89 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 2821.21 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 2552.48 & 0.01 & 0.00 & 0.03 & 0.00 & 0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 4958.39 & 0.06 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.10 & 4767.35 & 0.06 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.20 & 4628.59 & 0.06 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.30 & 4331.94 & 0.05 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.40 & 4096.21 & 0.04 & 0.00 & 0.08 & 0.00 & 0.13 & 0.00 \\ & & 0.50 & 3867.51 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.60 & 3622.46 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.70 & 3386.41 & 0.03 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.80 & 3101.45 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 2895.98 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 2619.70 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 5071.59 & 0.07 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.10 & 4879.48 & 0.06 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.20 & 4740.59 & 0.06 & 0.00 & 0.09 & 0.00 & 0.12 & 0.00 \\ & & 0.30 & 4439.97 & 0.05 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.40 & 4199.20 & 0.05 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.50 & 3965.43 & 0.04 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.60 & 3713.61 & 0.03 & 0.00 & 0.09 & 0.00 & 0.12 & 0.00 \\ & & 0.70 & 3472.11 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.80 & 3179.90 & 0.02 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.90 & 2968.82 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 2685.19 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 5174.09 & 0.07 & 0.00 & 0.07 & 0.00 & 0.10 & 0.00 \\ & & 0.10 & 4984.45 & 0.06 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.20 & 4845.21 & 0.06 & 0.00 & 0.09 & 0.00 
& 0.12 & 0.00 \\ & & 0.30 & 4543.44 & 0.05 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.40 & 4297.92 & 0.05 & 0.00 & 0.09 & 0.00 & 0.14 & 0.00 \\ & & 0.50 & 4060.22 & 0.04 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.60 & 3801.95 & 0.04 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.70 & 3555.46 & 0.03 & 0.00 & 0.08 & 0.00 & 0.11 & 0.00 \\ & & 0.80 & 3256.36 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ & & 0.90 & 3039.78 & 0.02 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 1.00 & 2749.16 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \bottomrule \label{table:ttestLogisticCost} \end{longtable} \begin{longtable}{rrlllrrrrrrr} \kill \caption{Paired t-test results comparing the individualized compensation scheme with the benchmark schemes in terms of total expected distance obtained using logistic acceptance probability.} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endfirsthead \caption{(continued.)} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endhead 50 & 0.00 & 0.00 & 11268.06 & 0.06 & 0.00 & 0.05 & 0.00 & 0.08 & 0.00 \\ & & 0.10 & 11131.90 & 0.05 & 0.00 & 0.05 & 0.00 & 0.08 & 0.00 \\ & & 0.20 & 11058.70 & 0.05 & 0.00 & 0.05 & 0.00 & 0.08 & 0.00 \\ & & 0.30 & 10842.34 & 0.04 & 0.00 & 0.05 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 10700.61 & 0.04 & 0.00 & 0.05 & 0.00 & 0.08 & 0.00 \\ & & 0.50 & 10566.26 & 0.03 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.60 & 10410.92 & 0.03 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 10256.93 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.80 & 10101.72 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 0.90 & 9987.65 & 0.02 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 9814.20 & 0.02 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 11208.94 & 0.06 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.10 & 11047.88 & 0.06 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.20 & 10983.52 & 0.05 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.30 & 10766.22 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.40 & 10606.12 & 0.04 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.50 & 10497.71 & 0.03 & 0.00 & 0.05 & 0.01 & 0.08 & 0.00 \\ & & 0.60 & 10323.43 & 0.03 & 0.00 & 0.06 & 0.00 & 0.07 & 0.00 \\ & & 0.70 & 10180.06 & 0.02 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.80 & 10024.03 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 9918.59 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 9740.63 & 0.01 & 0.00 & 0.03 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 11185.96 & 0.06 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.10 & 11030.12 & 0.06 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.20 & 10956.76 & 0.05 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.30 & 10726.84 & 0.05 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.40 & 10563.34 & 0.04 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.50 & 10433.13 & 0.03 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.60 & 10281.37 & 0.03 & 0.00 & 0.06 & 0.00 & 0.08 & 0.00 \\ & & 0.70 & 10130.59 & 0.02 & 0.00 & 0.05 
& 0.00 & 0.07 & 0.00 \\ & & 0.80 & 9971.17 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 9867.17 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ & & 1.00 & 9692.31 & 0.01 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 11158.20 & 0.07 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.10 & 11033.96 & 0.06 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.20 & 10954.68 & 0.06 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.30 & 10720.54 & 0.05 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.40 & 10551.23 & 0.04 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.50 & 10433.93 & 0.03 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.60 & 10235.54 & 0.03 & 0.00 & 0.06 & 0.01 & 0.08 & 0.00 \\ & & 0.70 & 10090.81 & 0.02 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.80 & 9934.92 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 0.90 & 9832.47 & 0.01 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 9657.87 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 11176.50 & 0.08 & 0.01 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.10 & 10995.82 & 0.07 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.20 & 10935.74 & 0.06 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.30 & 10731.13 & 0.05 & 0.00 & 0.06 & 0.00 & 0.09 & 0.00 \\ & & 0.40 & 10536.75 & 0.05 & 0.00 & 0.06 & 0.02 & 0.10 & 0.00 \\ & & 0.50 & 10429.99 & 0.04 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.60 & 10263.67 & 0.03 & 0.00 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.70 & 10078.36 & 0.03 & 0.00 & 0.06 & 0.01 & 0.07 & 0.00 \\ & & 0.80 & 9913.17 & 0.02 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 9799.93 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 9624.73 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 11248.70 & 0.07 & 0.01 & 0.06 & 0.01 & 0.09 & 0.00 \\ & & 0.10 & 11029.70 & 0.07 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.20 & 10942.13 & 0.07 & 0.00 & 0.06 & 0.00 & 0.10 & 0.00 \\ & & 0.30 & 10739.93 & 0.05 & 0.00 & 0.06 & 0.00 & 0.10 & 0.01 \\ & & 0.40 & 10581.34 & 0.04 & 0.00 & 0.05 & 0.02 & 0.09 & 0.00 \\ & & 0.50 & 10432.83 & 0.04 & 0.00 & 0.06 & 0.02 & 0.09 & 0.00 \\ & & 0.60 & 10258.68 & 0.03 & 0.00 & 0.06 & 0.02 & 0.09 & 0.00 \\ & & 0.70 & 10061.63 & 0.03 & 0.00 & 0.06 & 0.01 & 0.07 & 0.00 \\ & & 0.80 & 9887.08 & 0.02 & 0.00 & 0.05 & 0.00 & 0.07 & 0.00 \\ & & 0.90 & 9779.18 & 0.02 & 0.00 & 0.05 & 0.00 & 0.06 & 0.00 \\ & & 1.00 & 9599.07 & 0.01 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 \\ \midrule 75 & 0.00 & 0.00 & 9378.39 & 0.11 & 0.00 & 0.10 & 0.00 & 0.18 & 0.00 \\ & & 0.10 & 9182.12 & 0.10 & 0.00 & 0.10 & 0.00 & 0.17 & 0.00 \\ & & 0.20 & 9064.91 & 0.10 & 0.00 & 0.10 & 0.00 & 0.17 & 0.00 \\ & & 0.30 & 8786.61 & 0.08 & 0.00 & 0.10 & 0.00 & 0.17 & 0.00 \\ & & 0.40 & 8570.18 & 0.07 & 0.00 & 0.09 & 0.00 & 0.17 & 0.00 \\ & & 0.50 & 8381.61 & 0.06 & 0.00 & 0.09 & 0.00 & 0.16 & 0.00 \\ & & 0.60 & 8144.36 & 0.06 & 0.00 & 0.09 & 0.00 & 0.16 & 0.00 \\ & & 0.70 & 7929.32 & 0.05 & 0.00 & 0.09 & 0.00 & 0.14 & 0.00 \\ & & 0.80 & 7693.18 & 0.04 & 0.00 & 0.08 & 0.00 & 0.12 & 0.00 \\ & & 0.90 & 7522.71 & 0.03 & 0.00 & 0.08 & 0.00 & 0.10 & 0.00 \\ & & 1.00 & 7264.62 & 0.03 & 0.01 & 0.07 & 0.00 & 0.08 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 9244.52 & 0.11 & 0.00 & 0.11 & 0.00 & 0.19 & 0.00 \\ & & 0.10 & 9032.91 & 0.11 & 0.00 & 0.11 & 0.00 & 0.19 & 0.00 \\ & & 0.20 & 8919.69 & 0.10 & 0.00 & 0.11 & 0.00 & 0.19 & 0.00 \\ & & 0.30 & 8651.39 & 0.09 & 0.00 & 0.10 & 0.00 & 0.18 & 0.00 \\ & & 0.40 & 8434.31 & 0.08 & 0.00 & 0.10 & 0.00 & 0.18 & 0.00 \\ & & 0.50 & 8264.46 & 0.07 & 0.00 & 0.10 & 0.00 & 0.17 & 0.00 
\\ & & 0.60 & 8027.20 & 0.06 & 0.00 & 0.09 & 0.00 & 0.18 & 0.00 \\ & & 0.70 & 7817.88 & 0.05 & 0.00 & 0.09 & 0.00 & 0.16 & 0.00 \\ & & 0.80 & 7581.46 & 0.04 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 0.90 & 7420.00 & 0.03 & 0.00 & 0.09 & 0.00 & 0.11 & 0.00 \\ & & 1.00 & 7165.19 & 0.02 & 0.00 & 0.07 & 0.00 & 0.09 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 9201.10 & 0.11 & 0.00 & 0.12 & 0.00 & 0.21 & 0.00 \\ & & 0.10 & 8977.16 & 0.11 & 0.00 & 0.12 & 0.00 & 0.20 & 0.00 \\ & & 0.20 & 8861.78 & 0.10 & 0.00 & 0.12 & 0.00 & 0.20 & 0.00 \\ & & 0.30 & 8578.91 & 0.09 & 0.00 & 0.10 & 0.00 & 0.19 & 0.00 \\ & & 0.40 & 8359.49 & 0.08 & 0.00 & 0.10 & 0.00 & 0.19 & 0.00 \\ & & 0.50 & 8176.40 & 0.07 & 0.00 & 0.10 & 0.00 & 0.18 & 0.00 \\ & & 0.60 & 7949.33 & 0.06 & 0.00 & 0.10 & 0.00 & 0.18 & 0.00 \\ & & 0.70 & 7734.00 & 0.05 & 0.00 & 0.10 & 0.00 & 0.17 & 0.00 \\ & & 0.80 & 7499.76 & 0.04 & 0.00 & 0.10 & 0.00 & 0.14 & 0.00 \\ & & 0.90 & 7340.63 & 0.03 & 0.00 & 0.09 & 0.00 & 0.12 & 0.00 \\ & & 1.00 & 7090.22 & 0.02 & 0.00 & 0.08 & 0.00 & 0.10 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 9195.90 & 0.13 & 0.00 & 0.13 & 0.00 & 0.22 & 0.00 \\ & & 0.10 & 8965.34 & 0.11 & 0.00 & 0.13 & 0.00 & 0.22 & 0.00 \\ & & 0.20 & 8851.98 & 0.10 & 0.00 & 0.13 & 0.00 & 0.21 & 0.00 \\ & & 0.30 & 8548.69 & 0.09 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.40 & 8305.99 & 0.09 & 0.00 & 0.10 & 0.00 & 0.20 & 0.00 \\ & & 0.50 & 8127.76 & 0.07 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.60 & 7894.32 & 0.06 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.70 & 7676.04 & 0.05 & 0.00 & 0.09 & 0.00 & 0.18 & 0.00 \\ & & 0.80 & 7437.57 & 0.04 & 0.00 & 0.10 & 0.00 & 0.15 & 0.00 \\ & & 0.90 & 7282.91 & 0.03 & 0.00 & 0.09 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 7031.77 & 0.02 & 0.00 & 0.08 & 0.00 & 0.10 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 9264.32 & 0.12 & 0.00 & 0.14 & 0.00 & 0.21 & 0.00 \\ & & 0.10 & 8954.04 & 0.12 & 0.00 & 0.13 & 0.00 & 0.23 & 0.00 \\ & & 0.20 & 8793.58 & 0.11 & 0.00 & 0.14 & 0.01 & 0.23 & 0.00 \\ & & 0.30 & 8552.86 & 0.09 & 0.00 & 0.11 & 0.00 & 0.21 & 0.00 \\ & & 0.40 & 8300.09 & 0.08 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.50 & 8081.91 & 0.08 & 0.00 & 0.12 & 0.00 & 0.21 & 0.00 \\ & & 0.60 & 7863.77 & 0.07 & 0.00 & 0.10 & 0.00 & 0.20 & 0.00 \\ & & 0.70 & 7627.93 & 0.06 & 0.00 & 0.11 & 0.00 & 0.19 & 0.00 \\ & & 0.80 & 7398.23 & 0.04 & 0.00 & 0.10 & 0.00 & 0.16 & 0.00 \\ & & 0.90 & 7239.53 & 0.03 & 0.00 & 0.10 & 0.00 & 0.13 & 0.00 \\ & & 1.00 & 6981.87 & 0.02 & 0.00 & 0.09 & 0.00 & 0.11 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 9318.69 & 0.14 & 0.00 & 0.16 & 0.00 & 0.21 & 0.00 \\ & & 0.10 & 9024.85 & 0.11 & 0.00 & 0.13 & 0.01 & 0.22 & 0.00 \\ & & 0.20 & 8851.12 & 0.12 & 0.00 & 0.13 & 0.01 & 0.24 & 0.00 \\ & & 0.30 & 8547.12 & 0.09 & 0.00 & 0.13 & 0.01 & 0.22 & 0.00 \\ & & 0.40 & 8297.06 & 0.08 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.50 & 8135.06 & 0.07 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.60 & 7849.69 & 0.07 & 0.00 & 0.10 & 0.00 & 0.21 & 0.00 \\ & & 0.70 & 7614.50 & 0.06 & 0.00 & 0.11 & 0.00 & 0.20 & 0.00 \\ & & 0.80 & 7361.26 & 0.04 & 0.00 & 0.10 & 0.00 & 0.17 & 0.00 \\ & & 0.90 & 7194.87 & 0.04 & 0.00 & 0.10 & 0.00 & 0.14 & 0.00 \\ & & 1.00 & 6958.61 & 0.02 & 0.00 & 0.09 & 0.00 & 0.11 & 0.00 \\ \midrule 100 & 0.00 & 0.00 & 8030.87 & 0.17 & 0.00 & 0.15 & 0.00 & 0.27 & 0.00 \\ & & 0.10 & 7778.73 & 0.15 & 0.00 & 0.13 & 0.00 & 0.27 & 0.00 \\ & & 0.20 & 7633.03 & 0.14 & 0.00 & 0.13 & 0.00 & 0.27 & 0.00 \\ & & 0.30 & 7289.48 & 0.12 & 0.00 & 0.12 & 0.00 & 0.26 & 0.00 \\ & & 0.40 & 7000.08 & 0.11 & 0.00 & 0.11 & 
0.00 & 0.26 & 0.00 \\ & & 0.50 & 6754.69 & 0.10 & 0.00 & 0.13 & 0.00 & 0.27 & 0.00 \\ & & 0.60 & 6464.00 & 0.09 & 0.00 & 0.13 & 0.00 & 0.26 & 0.00 \\ & & 0.70 & 6175.96 & 0.08 & 0.00 & 0.13 & 0.00 & 0.24 & 0.00 \\ & & 0.80 & 5866.75 & 0.07 & 0.00 & 0.13 & 0.00 & 0.20 & 0.00 \\ & & 0.90 & 5647.54 & 0.06 & 0.00 & 0.13 & 0.00 & 0.18 & 0.00 \\ & & 1.00 & 5317.25 & 0.04 & 0.00 & 0.12 & 0.00 & 0.14 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 7848.34 & 0.18 & 0.00 & 0.17 & 0.00 & 0.30 & 0.00 \\ & & 0.10 & 7590.15 & 0.17 & 0.00 & 0.15 & 0.00 & 0.30 & 0.00 \\ & & 0.20 & 7440.70 & 0.15 & 0.00 & 0.14 & 0.00 & 0.30 & 0.00 \\ & & 0.30 & 7114.12 & 0.13 & 0.00 & 0.14 & 0.00 & 0.27 & 0.00 \\ & & 0.40 & 6818.98 & 0.12 & 0.00 & 0.12 & 0.00 & 0.28 & 0.00 \\ & & 0.50 & 6601.90 & 0.10 & 0.00 & 0.13 & 0.00 & 0.29 & 0.00 \\ & & 0.60 & 6320.32 & 0.09 & 0.00 & 0.13 & 0.00 & 0.28 & 0.00 \\ & & 0.70 & 6051.35 & 0.08 & 0.00 & 0.13 & 0.00 & 0.25 & 0.00 \\ & & 0.80 & 5755.51 & 0.07 & 0.00 & 0.13 & 0.00 & 0.22 & 0.00 \\ & & 0.90 & 5541.02 & 0.06 & 0.00 & 0.13 & 0.00 & 0.20 & 0.00 \\ & & 1.00 & 5228.38 & 0.04 & 0.00 & 0.12 & 0.00 & 0.16 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 7772.97 & 0.19 & 0.00 & 0.19 & 0.00 & 0.33 & 0.00 \\ & & 0.10 & 7509.14 & 0.18 & 0.00 & 0.16 & 0.00 & 0.33 & 0.00 \\ & & 0.20 & 7366.13 & 0.16 & 0.00 & 0.16 & 0.00 & 0.33 & 0.00 \\ & & 0.30 & 7001.07 & 0.14 & 0.00 & 0.15 & 0.00 & 0.31 & 0.00 \\ & & 0.40 & 6699.39 & 0.12 & 0.00 & 0.14 & 0.00 & 0.29 & 0.00 \\ & & 0.50 & 6488.36 & 0.11 & 0.00 & 0.15 & 0.00 & 0.30 & 0.00 \\ & & 0.60 & 6211.55 & 0.10 & 0.00 & 0.14 & 0.00 & 0.29 & 0.00 \\ & & 0.70 & 5935.19 & 0.09 & 0.00 & 0.14 & 0.00 & 0.28 & 0.00 \\ & & 0.80 & 5654.16 & 0.07 & 0.00 & 0.14 & 0.00 & 0.24 & 0.00 \\ & & 0.90 & 5445.42 & 0.06 & 0.00 & 0.14 & 0.00 & 0.21 & 0.00 \\ & & 1.00 & 5145.11 & 0.04 & 0.00 & 0.12 & 0.00 & 0.17 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 7776.78 & 0.19 & 0.00 & 0.21 & 0.00 & 0.35 & 0.00 \\ & & 0.10 & 7467.87 & 0.19 & 0.00 & 0.18 & 0.00 & 0.36 & 0.00 \\ & & 0.20 & 7293.19 & 0.18 & 0.00 & 0.18 & 0.00 & 0.35 & 0.00 \\ & & 0.30 & 6928.85 & 0.16 & 0.00 & 0.16 & 0.00 & 0.32 & 0.00 \\ & & 0.40 & 6621.58 & 0.13 & 0.00 & 0.16 & 0.00 & 0.32 & 0.00 \\ & & 0.50 & 6410.99 & 0.11 & 0.00 & 0.15 & 0.00 & 0.32 & 0.00 \\ & & 0.60 & 6111.49 & 0.10 & 0.00 & 0.16 & 0.00 & 0.33 & 0.00 \\ & & 0.70 & 5861.08 & 0.09 & 0.00 & 0.14 & 0.00 & 0.29 & 0.00 \\ & & 0.80 & 5576.26 & 0.07 & 0.00 & 0.14 & 0.00 & 0.26 & 0.00 \\ & & 0.90 & 5369.04 & 0.06 & 0.00 & 0.15 & 0.00 & 0.23 & 0.00 \\ & & 1.00 & 5071.29 & 0.04 & 0.00 & 0.13 & 0.00 & 0.18 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 7817.00 & 0.20 & 0.00 & 0.25 & 0.00 & 0.36 & 0.00 \\ & & 0.10 & 7478.87 & 0.19 & 0.00 & 0.21 & 0.00 & 0.37 & 0.00 \\ & & 0.20 & 7306.27 & 0.18 & 0.00 & 0.22 & 0.00 & 0.38 & 0.00 \\ & & 0.30 & 6929.98 & 0.16 & 0.00 & 0.16 & 0.00 & 0.34 & 0.00 \\ & & 0.40 & 6563.89 & 0.14 & 0.00 & 0.17 & 0.00 & 0.34 & 0.00 \\ & & 0.50 & 6371.76 & 0.12 & 0.00 & 0.17 & 0.00 & 0.33 & 0.00 \\ & & 0.60 & 6061.91 & 0.11 & 0.00 & 0.16 & 0.00 & 0.34 & 0.00 \\ & & 0.70 & 5781.37 & 0.10 & 0.00 & 0.16 & 0.00 & 0.31 & 0.00 \\ & & 0.80 & 5527.24 & 0.07 & 0.00 & 0.14 & 0.00 & 0.27 & 0.00 \\ & & 0.90 & 5308.46 & 0.06 & 0.00 & 0.15 & 0.00 & 0.25 & 0.00 \\ & & 1.00 & 5007.00 & 0.04 & 0.00 & 0.14 & 0.00 & 0.20 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 7931.94 & 0.21 & 0.00 & 0.24 & 0.00 & 0.33 & 0.00 \\ & & 0.10 & 7488.00 & 0.19 & 0.00 & 0.23 & 0.00 & 0.37 & 0.00 \\ & & 0.20 & 7355.84 & 0.17 & 0.00 & 0.23 & 0.00 & 0.38 & 0.00 \\ & & 0.30 & 
6892.31 & 0.17 & 0.00 & 0.17 & 0.00 & 0.37 & 0.00 \\ & & 0.40 & 6567.07 & 0.15 & 0.00 & 0.17 & 0.00 & 0.34 & 0.00 \\ & & 0.50 & 6332.08 & 0.13 & 0.00 & 0.18 & 0.00 & 0.35 & 0.00 \\ & & 0.60 & 6046.43 & 0.11 & 0.00 & 0.17 & 0.00 & 0.34 & 0.00 \\ & & 0.70 & 5735.80 & 0.10 & 0.00 & 0.16 & 0.00 & 0.33 & 0.00 \\ & & 0.80 & 5464.18 & 0.07 & 0.00 & 0.15 & 0.00 & 0.29 & 0.00 \\ & & 0.90 & 5268.64 & 0.06 & 0.00 & 0.15 & 0.00 & 0.26 & 0.00 \\ & & 1.00 & 4962.14 & 0.04 & 0.00 & 0.15 & 0.00 & 0.21 & 0.00 \\ \midrule 125 & 0.00 & 0.00 & 6919.60 & 0.20 & 0.00 & 0.19 & 0.00 & 0.36 & 0.00 \\ & & 0.10 & 6662.17 & 0.18 & 0.00 & 0.17 & 0.00 & 0.36 & 0.00 \\ & & 0.20 & 6507.86 & 0.17 & 0.00 & 0.17 & 0.00 & 0.37 & 0.00 \\ & & 0.30 & 6170.82 & 0.14 & 0.00 & 0.17 & 0.00 & 0.36 & 0.00 \\ & & 0.40 & 5869.62 & 0.13 & 0.00 & 0.16 & 0.00 & 0.37 & 0.00 \\ & & 0.50 & 5608.26 & 0.11 & 0.00 & 0.17 & 0.00 & 0.35 & 0.00 \\ & & 0.60 & 5298.18 & 0.11 & 0.00 & 0.17 & 0.00 & 0.33 & 0.00 \\ & & 0.70 & 5006.56 & 0.10 & 0.00 & 0.17 & 0.00 & 0.29 & 0.00 \\ & & 0.80 & 4681.63 & 0.07 & 0.00 & 0.15 & 0.00 & 0.23 & 0.00 \\ & & 0.90 & 4435.32 & 0.05 & 0.00 & 0.14 & 0.00 & 0.19 & 0.00 \\ & & 1.00 & 4095.32 & 0.03 & 0.00 & 0.12 & 0.00 & 0.13 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 6689.77 & 0.22 & 0.00 & 0.21 & 0.00 & 0.41 & 0.00 \\ & & 0.10 & 6443.72 & 0.20 & 0.00 & 0.19 & 0.00 & 0.40 & 0.00 \\ & & 0.20 & 6291.92 & 0.19 & 0.00 & 0.18 & 0.00 & 0.40 & 0.00 \\ & & 0.30 & 5986.40 & 0.16 & 0.00 & 0.16 & 0.00 & 0.38 & 0.00 \\ & & 0.40 & 5706.76 & 0.14 & 0.00 & 0.16 & 0.00 & 0.38 & 0.00 \\ & & 0.50 & 5464.89 & 0.12 & 0.00 & 0.17 & 0.00 & 0.38 & 0.00 \\ & & 0.60 & 5170.72 & 0.10 & 0.00 & 0.17 & 0.00 & 0.36 & 0.00 \\ & & 0.70 & 4899.34 & 0.10 & 0.00 & 0.17 & 0.00 & 0.32 & 0.00 \\ & & 0.80 & 4587.49 & 0.08 & 0.00 & 0.16 & 0.00 & 0.26 & 0.00 \\ & & 0.90 & 4344.52 & 0.06 & 0.00 & 0.16 & 0.00 & 0.22 & 0.00 \\ & & 1.00 & 4020.91 & 0.03 & 0.00 & 0.13 & 0.00 & 0.15 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 6571.39 & 0.24 & 0.00 & 0.24 & 0.00 & 0.47 & 0.00 \\ & & 0.10 & 6308.30 & 0.21 & 0.00 & 0.21 & 0.00 & 0.44 & 0.00 \\ & & 0.20 & 6134.46 & 0.20 & 0.00 & 0.20 & 0.00 & 0.44 & 0.00 \\ & & 0.30 & 5826.23 & 0.16 & 0.00 & 0.17 & 0.00 & 0.41 & 0.00 \\ & & 0.40 & 5556.05 & 0.15 & 0.00 & 0.16 & 0.00 & 0.40 & 0.00 \\ & & 0.50 & 5330.71 & 0.13 & 0.00 & 0.16 & 0.00 & 0.41 & 0.00 \\ & & 0.60 & 5047.31 & 0.11 & 0.00 & 0.17 & 0.00 & 0.40 & 0.00 \\ & & 0.70 & 4791.01 & 0.10 & 0.00 & 0.17 & 0.00 & 0.35 & 0.00 \\ & & 0.80 & 4494.42 & 0.08 & 0.00 & 0.17 & 0.00 & 0.28 & 0.00 \\ & & 0.90 & 4261.36 & 0.06 & 0.00 & 0.17 & 0.00 & 0.24 & 0.00 \\ & & 1.00 & 3952.08 & 0.04 & 0.00 & 0.13 & 0.00 & 0.17 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 6507.61 & 0.25 & 0.00 & 0.30 & 0.00 & 0.52 & 0.00 \\ & & 0.10 & 6229.57 & 0.24 & 0.00 & 0.24 & 0.00 & 0.50 & 0.00 \\ & & 0.20 & 6041.62 & 0.21 & 0.00 & 0.23 & 0.00 & 0.50 & 0.00 \\ & & 0.30 & 5700.58 & 0.18 & 0.00 & 0.19 & 0.00 & 0.45 & 0.00 \\ & & 0.40 & 5439.15 & 0.16 & 0.00 & 0.17 & 0.00 & 0.44 & 0.00 \\ & & 0.50 & 5219.68 & 0.14 & 0.00 & 0.18 & 0.00 & 0.44 & 0.00 \\ & & 0.60 & 4940.30 & 0.12 & 0.00 & 0.18 & 0.00 & 0.43 & 0.00 \\ & & 0.70 & 4693.11 & 0.11 & 0.00 & 0.17 & 0.00 & 0.38 & 0.00 \\ & & 0.80 & 4408.07 & 0.09 & 0.00 & 0.17 & 0.00 & 0.31 & 0.00 \\ & & 0.90 & 4181.51 & 0.07 & 0.00 & 0.17 & 0.00 & 0.26 & 0.00 \\ & & 1.00 & 3885.41 & 0.04 & 0.00 & 0.15 & 0.00 & 0.19 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 6475.62 & 0.28 & 0.00 & 0.33 & 0.00 & 0.54 & 0.00 \\ & & 0.10 & 6194.57 & 0.25 & 0.00 & 0.31 & 0.00 & 0.55 & 
0.00 \\ & & 0.20 & 6015.12 & 0.24 & 0.00 & 0.26 & 0.00 & 0.56 & 0.00 \\ & & 0.30 & 5633.30 & 0.20 & 0.00 & 0.20 & 0.00 & 0.49 & 0.00 \\ & & 0.40 & 5327.76 & 0.18 & 0.00 & 0.19 & 0.00 & 0.47 & 0.00 \\ & & 0.50 & 5133.92 & 0.14 & 0.00 & 0.19 & 0.00 & 0.47 & 0.00 \\ & & 0.60 & 4859.74 & 0.12 & 0.00 & 0.18 & 0.00 & 0.45 & 0.00 \\ & & 0.70 & 4602.11 & 0.11 & 0.00 & 0.19 & 0.00 & 0.41 & 0.00 \\ & & 0.80 & 4326.43 & 0.09 & 0.00 & 0.18 & 0.00 & 0.34 & 0.00 \\ & & 0.90 & 4109.85 & 0.08 & 0.00 & 0.18 & 0.00 & 0.29 & 0.00 \\ & & 1.00 & 3817.66 & 0.05 & 0.00 & 0.15 & 0.00 & 0.21 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 6567.75 & 0.31 & 0.00 & 0.35 & 0.00 & 0.51 & 0.00 \\ & & 0.10 & 6133.22 & 0.26 & 0.00 & 0.32 & 0.00 & 0.58 & 0.00 \\ & & 0.20 & 5978.22 & 0.24 & 0.00 & 0.32 & 0.00 & 0.59 & 0.00 \\ & & 0.30 & 5588.40 & 0.20 & 0.00 & 0.21 & 0.00 & 0.51 & 0.00 \\ & & 0.40 & 5249.92 & 0.19 & 0.00 & 0.20 & 0.00 & 0.49 & 0.00 \\ & & 0.50 & 5047.18 & 0.16 & 0.00 & 0.21 & 0.00 & 0.49 & 0.00 \\ & & 0.60 & 4792.23 & 0.13 & 0.00 & 0.19 & 0.00 & 0.48 & 0.00 \\ & & 0.70 & 4533.64 & 0.11 & 0.00 & 0.19 & 0.00 & 0.45 & 0.00 \\ & & 0.80 & 4255.79 & 0.09 & 0.00 & 0.18 & 0.00 & 0.37 & 0.00 \\ & & 0.90 & 4044.07 & 0.08 & 0.00 & 0.19 & 0.00 & 0.31 & 0.00 \\ & & 1.00 & 3760.05 & 0.05 & 0.00 & 0.16 & 0.00 & 0.23 & 0.00 \\ \midrule 150 & 0.00 & 0.00 & 6193.50 & 0.23 & 0.00 & 0.22 & 0.00 & 0.42 & 0.00 \\ & & 0.10 & 5945.10 & 0.20 & 0.00 & 0.21 & 0.00 & 0.42 & 0.00 \\ & & 0.20 & 5796.60 & 0.19 & 0.00 & 0.21 & 0.00 & 0.43 & 0.00 \\ & & 0.30 & 5454.97 & 0.16 & 0.00 & 0.19 & 0.00 & 0.43 & 0.00 \\ & & 0.40 & 5160.97 & 0.14 & 0.00 & 0.19 & 0.00 & 0.41 & 0.00 \\ & & 0.50 & 4896.22 & 0.11 & 0.00 & 0.20 & 0.00 & 0.38 & 0.00 \\ & & 0.60 & 4579.99 & 0.10 & 0.00 & 0.20 & 0.00 & 0.34 & 0.00 \\ & & 0.70 & 4300.14 & 0.08 & 0.00 & 0.19 & 0.00 & 0.29 & 0.00 \\ & & 0.80 & 3958.47 & 0.06 & 0.00 & 0.17 & 0.00 & 0.23 & 0.00 \\ & & 0.90 & 3703.41 & 0.05 & 0.00 & 0.15 & 0.00 & 0.18 & 0.00 \\ & & 1.00 & 3362.38 & 0.03 & 0.00 & 0.11 & 0.00 & 0.12 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 5957.31 & 0.25 & 0.00 & 0.23 & 0.00 & 0.47 & 0.00 \\ & & 0.10 & 5730.92 & 0.22 & 0.00 & 0.21 & 0.00 & 0.45 & 0.00 \\ & & 0.20 & 5587.00 & 0.20 & 0.00 & 0.20 & 0.00 & 0.45 & 0.00 \\ & & 0.30 & 5283.70 & 0.17 & 0.00 & 0.19 & 0.00 & 0.43 & 0.00 \\ & & 0.40 & 5007.07 & 0.15 & 0.00 & 0.19 & 0.00 & 0.45 & 0.00 \\ & & 0.50 & 4763.57 & 0.12 & 0.00 & 0.20 & 0.00 & 0.41 & 0.00 \\ & & 0.60 & 4464.15 & 0.10 & 0.00 & 0.20 & 0.00 & 0.38 & 0.00 \\ & & 0.70 & 4203.52 & 0.09 & 0.00 & 0.20 & 0.00 & 0.32 & 0.00 \\ & & 0.80 & 3876.84 & 0.07 & 0.00 & 0.18 & 0.00 & 0.25 & 0.00 \\ & & 0.90 & 3631.03 & 0.05 & 0.00 & 0.16 & 0.00 & 0.21 & 0.00 \\ & & 1.00 & 3308.06 & 0.03 & 0.00 & 0.12 & 0.00 & 0.14 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 5755.66 & 0.27 & 0.00 & 0.26 & 0.00 & 0.55 & 0.00 \\ & & 0.10 & 5543.66 & 0.25 & 0.00 & 0.22 & 0.00 & 0.50 & 0.00 \\ & & 0.20 & 5396.58 & 0.23 & 0.00 & 0.22 & 0.00 & 0.50 & 0.00 \\ & & 0.30 & 5119.00 & 0.18 & 0.00 & 0.19 & 0.00 & 0.45 & 0.00 \\ & & 0.40 & 4863.04 & 0.16 & 0.00 & 0.19 & 0.00 & 0.45 & 0.00 \\ & & 0.50 & 4637.39 & 0.14 & 0.00 & 0.20 & 0.00 & 0.45 & 0.00 \\ & & 0.60 & 4354.12 & 0.11 & 0.00 & 0.21 & 0.00 & 0.42 & 0.00 \\ & & 0.70 & 4111.71 & 0.09 & 0.00 & 0.19 & 0.00 & 0.35 & 0.00 \\ & & 0.80 & 3799.94 & 0.07 & 0.00 & 0.18 & 0.00 & 0.28 & 0.00 \\ & & 0.90 & 3565.88 & 0.05 & 0.00 & 0.17 & 0.00 & 0.23 & 0.00 \\ & & 1.00 & 3253.09 & 0.03 & 0.00 & 0.14 & 0.00 & 0.15 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 5619.15 & 0.31 & 0.00 & 
0.34 & 0.00 & 0.63 & 0.00 \\ & & 0.10 & 5401.30 & 0.27 & 0.00 & 0.27 & 0.00 & 0.60 & 0.00 \\ & & 0.20 & 5263.64 & 0.24 & 0.00 & 0.23 & 0.00 & 0.57 & 0.00 \\ & & 0.30 & 4967.89 & 0.20 & 0.00 & 0.20 & 0.00 & 0.50 & 0.00 \\ & & 0.40 & 4729.25 & 0.17 & 0.00 & 0.19 & 0.00 & 0.47 & 0.00 \\ & & 0.50 & 4522.14 & 0.14 & 0.00 & 0.20 & 0.00 & 0.46 & 0.00 \\ & & 0.60 & 4253.75 & 0.12 & 0.00 & 0.21 & 0.00 & 0.45 & 0.00 \\ & & 0.70 & 4022.70 & 0.10 & 0.00 & 0.20 & 0.00 & 0.38 & 0.00 \\ & & 0.80 & 3727.40 & 0.08 & 0.00 & 0.19 & 0.00 & 0.30 & 0.00 \\ & & 0.90 & 3497.46 & 0.06 & 0.00 & 0.18 & 0.00 & 0.25 & 0.00 \\ & & 1.00 & 3194.93 & 0.04 & 0.00 & 0.14 & 0.00 & 0.18 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 5551.69 & 0.36 & 0.00 & 0.39 & 0.00 & 0.66 & 0.00 \\ & & 0.10 & 5290.63 & 0.30 & 0.00 & 0.34 & 0.00 & 0.68 & 0.00 \\ & & 0.20 & 5158.44 & 0.27 & 0.00 & 0.29 & 0.00 & 0.66 & 0.00 \\ & & 0.30 & 4847.25 & 0.22 & 0.00 & 0.21 & 0.00 & 0.55 & 0.00 \\ & & 0.40 & 4601.70 & 0.19 & 0.00 & 0.19 & 0.00 & 0.49 & 0.00 \\ & & 0.50 & 4407.37 & 0.15 & 0.00 & 0.21 & 0.00 & 0.51 & 0.00 \\ & & 0.60 & 4152.59 & 0.13 & 0.00 & 0.20 & 0.00 & 0.48 & 0.00 \\ & & 0.70 & 3940.74 & 0.10 & 0.00 & 0.20 & 0.00 & 0.42 & 0.00 \\ & & 0.80 & 3660.53 & 0.09 & 0.00 & 0.20 & 0.00 & 0.33 & 0.00 \\ & & 0.90 & 3440.80 & 0.06 & 0.00 & 0.19 & 0.00 & 0.27 & 0.00 \\ & & 1.00 & 3140.93 & 0.05 & 0.00 & 0.16 & 0.00 & 0.20 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 5589.06 & 0.40 & 0.00 & 0.44 & 0.00 & 0.65 & 0.00 \\ & & 0.10 & 5211.51 & 0.34 & 0.00 & 0.40 & 0.00 & 0.72 & 0.00 \\ & & 0.20 & 5060.30 & 0.30 & 0.00 & 0.37 & 0.00 & 0.73 & 0.00 \\ & & 0.30 & 4728.64 & 0.24 & 0.00 & 0.23 & 0.00 & 0.62 & 0.00 \\ & & 0.40 & 4503.95 & 0.20 & 0.00 & 0.21 & 0.00 & 0.56 & 0.00 \\ & & 0.50 & 4299.06 & 0.16 & 0.00 & 0.21 & 0.00 & 0.54 & 0.00 \\ & & 0.60 & 4057.67 & 0.14 & 0.00 & 0.21 & 0.00 & 0.51 & 0.00 \\ & & 0.70 & 3859.58 & 0.11 & 0.00 & 0.20 & 0.00 & 0.45 & 0.00 \\ & & 0.80 & 3596.83 & 0.09 & 0.00 & 0.20 & 0.00 & 0.36 & 0.00 \\ & & 0.90 & 3378.96 & 0.07 & 0.00 & 0.20 & 0.00 & 0.30 & 0.00 \\ & & 1.00 & 3093.53 & 0.05 & 0.00 & 0.17 & 0.00 & 0.21 & 0.00 \\ \bottomrule \label{table:ttestLogisticDist} \end{longtable} \begin{longtable}{rrlllrrrrrrr} \kill \caption{Paired t-test results comparing the individualized compensation scheme with the benchmark schemes in terms of mean acceptance rate obtained using logistic acceptance probability.} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. & p-Val \\ \midrule \endfirsthead \caption{(continued.)} \\ \toprule \multirow{2}{*}{$\odno$} & \multirow{2}{*}{$\pnlty$} & \multirow{2}{*}{$\utlty$} & \multirow{2}{*}{Individualized} & \multicolumn{2}{c}{Detour}& \multicolumn{2}{c}{Distance}& \multicolumn{2}{c}{Flat} \\ \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} & & & & $\%$ Diff. & p-Val & $\%$ Diff. & p- Val & $\%$ Diff. 
& p-Val \\ \midrule \endhead 50 & 0.00 & 0.00 & 0.51 & -0.07 & 0.01 & -0.12 & 0.00 & -0.18 & 0.00 \\ & & 0.10 & 0.53 & -0.06 & 0.02 & -0.11 & 0.00 & -0.18 & 0.00 \\ & & 0.20 & 0.54 & -0.05 & 0.02 & -0.11 & 0.00 & -0.17 & 0.00 \\ & & 0.30 & 0.56 & -0.05 & 0.02 & -0.10 & 0.00 & -0.15 & 0.00 \\ & & 0.40 & 0.58 & -0.04 & 0.02 & -0.10 & 0.00 & -0.14 & 0.00 \\ & & 0.50 & 0.59 & -0.03 & 0.02 & -0.09 & 0.00 & -0.12 & 0.00 \\ & & 0.60 & 0.61 & -0.01 & 0.25 & -0.09 & 0.00 & -0.12 & 0.00 \\ & & 0.70 & 0.63 & -0.01 & 0.19 & -0.10 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.64 & -0.01 & 0.04 & -0.08 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.65 & -0.02 & 0.05 & -0.07 & 0.00 & -0.09 & 0.00 \\ & & 1.00 & 0.67 & -0.01 & 0.00 & -0.06 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.56 & -0.09 & 0.00 & -0.08 & 0.02 & -0.13 & 0.00 \\ & & 0.10 & 0.57 & -0.08 & 0.00 & -0.09 & 0.01 & -0.14 & 0.00 \\ & & 0.20 & 0.57 & -0.05 & 0.00 & -0.09 & 0.00 & -0.14 & 0.00 \\ & & 0.30 & 0.58 & -0.05 & 0.04 & -0.09 & 0.00 & -0.14 & 0.00 \\ & & 0.40 & 0.59 & -0.01 & 0.60 & -0.09 & 0.00 & -0.14 & 0.00 \\ & & 0.50 & 0.61 & -0.02 & 0.08 & -0.09 & 0.00 & -0.12 & 0.00 \\ & & 0.60 & 0.62 & -0.01 & 0.47 & -0.09 & 0.00 & -0.11 & 0.00 \\ & & 0.70 & 0.64 & -0.01 & 0.42 & -0.08 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.66 & -0.01 & 0.48 & -0.08 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.67 & 0.00 & 0.92 & -0.07 & 0.00 & -0.09 & 0.00 \\ & & 1.00 & 0.69 & -0.00 & 0.42 & -0.06 & 0.00 & -0.07 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.61 & -0.12 & 0.00 & -0.08 & 0.01 & -0.10 & 0.00 \\ & & 0.10 & 0.61 & -0.10 & 0.00 & -0.07 & 0.02 & -0.11 & 0.00 \\ & & 0.20 & 0.61 & -0.08 & 0.00 & -0.06 & 0.04 & -0.11 & 0.00 \\ & & 0.30 & 0.62 & -0.06 & 0.00 & -0.07 & 0.01 & -0.12 & 0.00 \\ & & 0.40 & 0.63 & -0.05 & 0.02 & -0.07 & 0.01 & -0.12 & 0.00 \\ & & 0.50 & 0.64 & -0.02 & 0.17 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.60 & 0.65 & -0.02 & 0.19 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.70 & 0.66 & -0.01 & 0.59 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.67 & 0.00 & 0.84 & -0.08 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.68 & 0.01 & 0.49 & -0.07 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.70 & 0.01 & 0.47 & -0.06 & 0.00 & -0.07 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.63 & -0.09 & 0.02 & -0.06 & 0.01 & -0.08 & 0.00 \\ & & 0.10 & 0.64 & -0.11 & 0.00 & -0.07 & 0.01 & -0.09 & 0.00 \\ & & 0.20 & 0.65 & -0.09 & 0.00 & -0.07 & 0.01 & -0.10 & 0.00 \\ & & 0.30 & 0.65 & -0.07 & 0.00 & -0.06 & 0.02 & -0.09 & 0.00 \\ & & 0.40 & 0.65 & -0.05 & 0.00 & -0.06 & 0.01 & -0.11 & 0.00 \\ & & 0.50 & 0.66 & -0.03 & 0.01 & -0.06 & 0.01 & -0.11 & 0.00 \\ & & 0.60 & 0.67 & -0.02 & 0.01 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.70 & 0.68 & -0.01 & 0.31 & -0.07 & 0.00 & -0.09 & 0.00 \\ & & 0.80 & 0.69 & -0.01 & 0.46 & -0.06 & 0.01 & -0.08 & 0.00 \\ & & 0.90 & 0.70 & -0.01 & 0.26 & -0.05 & 0.01 & -0.07 & 0.00 \\ & & 1.00 & 0.71 & -0.00 & 0.99 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.65 & -0.07 & 0.01 & -0.05 & 0.01 & -0.06 & 0.00 \\ & & 0.10 & 0.66 & -0.11 & 0.00 & -0.05 & 0.01 & -0.07 & 0.00 \\ & & 0.20 & 0.66 & -0.09 & 0.00 & -0.07 & 0.00 & -0.09 & 0.00 \\ & & 0.30 & 0.67 & -0.07 & 0.00 & -0.07 & 0.01 & -0.09 & 0.00 \\ & & 0.40 & 0.67 & -0.05 & 0.01 & -0.05 & 0.02 & -0.08 & 0.00 \\ & & 0.50 & 0.68 & -0.04 & 0.03 & -0.05 & 0.02 & -0.09 & 0.00 \\ & & 0.60 & 0.69 & -0.02 & 0.01 & -0.05 & 0.01 & -0.09 & 0.00 \\ & & 0.70 & 0.70 & -0.02 & 0.02 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.80 & 0.71 & -0.02 & 0.02 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.71 & -0.01 & 
0.48 & -0.06 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.73 & -0.01 & 0.51 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.68 & -0.06 & 0.00 & -0.04 & 0.01 & -0.05 & 0.01 \\ & & 0.10 & 0.68 & -0.07 & 0.00 & -0.05 & 0.01 & -0.06 & 0.00 \\ & & 0.20 & 0.68 & -0.09 & 0.00 & -0.05 & 0.01 & -0.06 & 0.00 \\ & & 0.30 & 0.69 & -0.07 & 0.00 & -0.06 & 0.01 & -0.08 & 0.00 \\ & & 0.40 & 0.70 & -0.06 & 0.00 & -0.06 & 0.01 & -0.09 & 0.00 \\ & & 0.50 & 0.70 & -0.04 & 0.00 & -0.05 & 0.01 & -0.09 & 0.00 \\ & & 0.60 & 0.71 & -0.03 & 0.06 & -0.05 & 0.02 & -0.08 & 0.00 \\ & & 0.70 & 0.71 & -0.02 & 0.00 & -0.05 & 0.02 & -0.07 & 0.00 \\ & & 0.80 & 0.72 & -0.01 & 0.02 & -0.06 & 0.00 & -0.07 & 0.00 \\ & & 0.90 & 0.73 & -0.01 & 0.02 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.74 & -0.00 & 0.47 & -0.05 & 0.00 & -0.06 & 0.00 \\ \midrule 75 & 0.00 & 0.00 & 0.51 & -0.06 & 0.12 & -0.14 & 0.00 & -0.22 & 0.00 \\ & & 0.10 & 0.53 & -0.06 & 0.07 & -0.13 & 0.00 & -0.22 & 0.00 \\ & & 0.20 & 0.54 & -0.05 & 0.01 & -0.13 & 0.00 & -0.20 & 0.00 \\ & & 0.30 & 0.56 & -0.04 & 0.01 & -0.12 & 0.00 & -0.19 & 0.00 \\ & & 0.40 & 0.58 & -0.03 & 0.00 & -0.11 & 0.00 & -0.18 & 0.00 \\ & & 0.50 & 0.60 & -0.03 & 0.01 & -0.11 & 0.00 & -0.17 & 0.00 \\ & & 0.60 & 0.62 & -0.03 & 0.02 & -0.10 & 0.00 & -0.15 & 0.00 \\ & & 0.70 & 0.64 & -0.02 & 0.04 & -0.10 & 0.00 & -0.14 & 0.00 \\ & & 0.80 & 0.65 & -0.01 & 0.02 & -0.09 & 0.00 & -0.11 & 0.00 \\ & & 0.90 & 0.66 & -0.01 & 0.00 & -0.08 & 0.00 & -0.09 & 0.00 \\ & & 1.00 & 0.68 & -0.01 & 0.15 & -0.06 & 0.00 & -0.07 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.56 & -0.07 & 0.00 & -0.08 & 0.02 & -0.14 & 0.00 \\ & & 0.10 & 0.57 & -0.05 & 0.06 & -0.10 & 0.01 & -0.15 & 0.00 \\ & & 0.20 & 0.58 & -0.04 & 0.13 & -0.10 & 0.00 & -0.15 & 0.00 \\ & & 0.30 & 0.59 & -0.03 & 0.05 & -0.10 & 0.00 & -0.16 & 0.00 \\ & & 0.40 & 0.60 & -0.03 & 0.07 & -0.09 & 0.00 & -0.15 & 0.00 \\ & & 0.50 & 0.62 & -0.02 & 0.06 & -0.10 & 0.00 & -0.14 & 0.00 \\ & & 0.60 & 0.63 & -0.01 & 0.06 & -0.09 & 0.00 & -0.14 & 0.00 \\ & & 0.70 & 0.65 & -0.02 & 0.07 & -0.09 & 0.00 & -0.13 & 0.00 \\ & & 0.80 & 0.66 & -0.00 & 0.80 & -0.09 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.68 & -0.00 & 0.52 & -0.08 & 0.00 & -0.10 & 0.00 \\ & & 1.00 & 0.70 & -0.00 & 0.31 & -0.07 & 0.00 & -0.07 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.61 & -0.10 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.10 & 0.61 & -0.09 & 0.00 & -0.07 & 0.01 & -0.12 & 0.00 \\ & & 0.20 & 0.62 & -0.07 & 0.00 & -0.07 & 0.01 & -0.12 & 0.00 \\ & & 0.30 & 0.62 & -0.04 & 0.03 & -0.07 & 0.01 & -0.13 & 0.00 \\ & & 0.40 & 0.64 & -0.04 & 0.01 & -0.08 & 0.00 & -0.14 & 0.00 \\ & & 0.50 & 0.65 & -0.04 & 0.00 & -0.09 & 0.00 & -0.12 & 0.00 \\ & & 0.60 & 0.66 & -0.02 & 0.02 & -0.08 & 0.00 & -0.12 & 0.00 \\ & & 0.70 & 0.66 & -0.00 & 0.80 & -0.08 & 0.00 & -0.11 & 0.00 \\ & & 0.80 & 0.67 & 0.00 & 0.92 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.69 & -0.00 & 0.99 & -0.08 & 0.00 & -0.09 & 0.00 \\ & & 1.00 & 0.71 & 0.00 & 0.47 & -0.07 & 0.00 & -0.08 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.64 & -0.10 & 0.00 & -0.06 & 0.00 & -0.07 & 0.00 \\ & & 0.10 & 0.65 & -0.09 & 0.00 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.20 & 0.65 & -0.08 & 0.00 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.30 & 0.66 & -0.07 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.40 & 0.66 & -0.04 & 0.01 & -0.06 & 0.01 & -0.11 & 0.00 \\ & & 0.50 & 0.67 & -0.03 & 0.01 & -0.07 & 0.01 & -0.11 & 0.00 \\ & & 0.60 & 0.68 & -0.02 & 0.04 & -0.07 & 0.01 & -0.11 & 0.00 \\ & & 0.70 & 0.69 & -0.02 & 0.01 & -0.08 & 0.00 & -0.11 & 0.00 \\ & & 0.80 
& 0.69 & 0.00 & 0.92 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.70 & 0.00 & 0.86 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.72 & 0.00 & 0.85 & -0.06 & 0.00 & -0.07 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.67 & -0.10 & 0.00 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 0.10 & 0.67 & -0.08 & 0.00 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 0.20 & 0.67 & -0.07 & 0.00 & -0.05 & 0.00 & -0.08 & 0.00 \\ & & 0.30 & 0.68 & -0.07 & 0.00 & -0.05 & 0.00 & -0.10 & 0.00 \\ & & 0.40 & 0.68 & -0.05 & 0.00 & -0.06 & 0.01 & -0.10 & 0.00 \\ & & 0.50 & 0.69 & -0.03 & 0.02 & -0.06 & 0.02 & -0.10 & 0.00 \\ & & 0.60 & 0.70 & -0.02 & 0.02 & -0.06 & 0.01 & -0.10 & 0.00 \\ & & 0.70 & 0.71 & -0.03 & 0.00 & -0.06 & 0.01 & -0.10 & 0.00 \\ & & 0.80 & 0.72 & -0.02 & 0.00 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.72 & -0.01 & 0.19 & -0.06 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.73 & 0.00 & 0.88 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.69 & -0.08 & 0.00 & -0.04 & 0.02 & -0.06 & 0.00 \\ & & 0.10 & 0.69 & -0.08 & 0.00 & -0.04 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.69 & -0.08 & 0.00 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 0.30 & 0.70 & -0.06 & 0.00 & -0.05 & 0.01 & -0.09 & 0.00 \\ & & 0.40 & 0.70 & -0.05 & 0.00 & -0.05 & 0.01 & -0.09 & 0.00 \\ & & 0.50 & 0.71 & -0.05 & 0.00 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.60 & 0.72 & -0.03 & 0.00 & -0.06 & 0.01 & -0.10 & 0.00 \\ & & 0.70 & 0.72 & -0.03 & 0.02 & -0.05 & 0.01 & -0.09 & 0.01 \\ & & 0.80 & 0.73 & -0.02 & 0.01 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.73 & -0.01 & 0.00 & -0.05 & 0.01 & -0.07 & 0.00 \\ & & 1.00 & 0.75 & -0.01 & 0.00 & -0.06 & 0.00 & -0.06 & 0.00 \\ \midrule 100 & 0.00 & 0.00 & 0.49 & -0.03 & 0.06 & -0.14 & 0.00 & -0.24 & 0.00 \\ & & 0.10 & 0.51 & -0.03 & 0.05 & -0.13 & 0.00 & -0.23 & 0.00 \\ & & 0.20 & 0.52 & -0.03 & 0.08 & -0.13 & 0.00 & -0.22 & 0.00 \\ & & 0.30 & 0.55 & -0.01 & 0.53 & -0.12 & 0.00 & -0.21 & 0.00 \\ & & 0.40 & 0.57 & -0.00 & 0.87 & -0.11 & 0.00 & -0.19 & 0.00 \\ & & 0.50 & 0.59 & -0.01 & 0.80 & -0.11 & 0.00 & -0.19 & 0.00 \\ & & 0.60 & 0.61 & -0.01 & 0.58 & -0.11 & 0.00 & -0.18 & 0.00 \\ & & 0.70 & 0.63 & -0.01 & 0.34 & -0.10 & 0.00 & -0.15 & 0.00 \\ & & 0.80 & 0.65 & -0.01 & 0.09 & -0.10 & 0.00 & -0.13 & 0.00 \\ & & 0.90 & 0.66 & -0.01 & 0.12 & -0.09 & 0.00 & -0.11 & 0.00 \\ & & 1.00 & 0.69 & -0.00 & 0.48 & -0.07 & 0.00 & -0.08 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.56 & -0.05 & 0.07 & -0.07 & 0.02 & -0.12 & 0.00 \\ & & 0.10 & 0.56 & -0.04 & 0.18 & -0.07 & 0.00 & -0.12 & 0.00 \\ & & 0.20 & 0.57 & -0.05 & 0.01 & -0.07 & 0.00 & -0.12 & 0.00 \\ & & 0.30 & 0.58 & -0.02 & 0.00 & -0.07 & 0.01 & -0.13 & 0.00 \\ & & 0.40 & 0.59 & -0.00 & 0.66 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.50 & 0.61 & 0.01 & 0.45 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.60 & 0.62 & 0.01 & 0.55 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.70 & 0.64 & 0.00 & 0.91 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.66 & 0.00 & 0.66 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.67 & 0.00 & 0.85 & -0.07 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.70 & 0.00 & 0.91 & -0.06 & 0.00 & -0.07 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.61 & -0.10 & 0.00 & -0.05 & 0.00 & -0.08 & 0.00 \\ & & 0.10 & 0.61 & -0.07 & 0.00 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.20 & 0.61 & -0.06 & 0.01 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.30 & 0.62 & -0.05 & 0.01 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.40 & 0.63 & -0.04 & 0.00 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.50 & 0.64 & -0.02 & 0.00 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.60 & 0.65 & -0.01 & 0.42 & -0.07 & 0.00 & -0.11 & 0.00 \\ 
& & 0.70 & 0.66 & -0.00 & 0.98 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.80 & 0.67 & 0.01 & 0.59 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.69 & 0.01 & 0.39 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.71 & 0.01 & 0.41 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.64 & -0.09 & 0.00 & -0.04 & 0.00 & -0.06 & 0.00 \\ & & 0.10 & 0.65 & -0.08 & 0.00 & -0.04 & 0.00 & -0.07 & 0.00 \\ & & 0.20 & 0.65 & -0.07 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.30 & 0.65 & -0.05 & 0.01 & -0.05 & 0.00 & -0.10 & 0.00 \\ & & 0.40 & 0.65 & -0.04 & 0.02 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.50 & 0.66 & -0.03 & 0.02 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.60 & 0.68 & -0.02 & 0.02 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.70 & 0.69 & -0.02 & 0.15 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.70 & -0.00 & 0.54 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.71 & -0.00 & 0.68 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.72 & 0.00 & 0.32 & -0.04 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.67 & -0.10 & 0.00 & -0.03 & 0.00 & -0.06 & 0.00 \\ & & 0.10 & 0.67 & -0.08 & 0.00 & -0.04 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.67 & -0.07 & 0.00 & -0.04 & 0.00 & -0.07 & 0.00 \\ & & 0.30 & 0.68 & -0.06 & 0.00 & -0.04 & 0.00 & -0.09 & 0.00 \\ & & 0.40 & 0.68 & -0.04 & 0.01 & -0.04 & 0.01 & -0.08 & 0.00 \\ & & 0.50 & 0.69 & -0.04 & 0.01 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.60 & 0.70 & -0.03 & 0.03 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.70 & 0.70 & -0.02 & 0.06 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.80 & 0.72 & -0.01 & 0.02 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.72 & -0.01 & 0.28 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.73 & -0.00 & 0.67 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.69 & -0.07 & 0.02 & -0.03 & 0.01 & -0.05 & 0.00 \\ & & 0.10 & 0.69 & -0.09 & 0.00 & -0.03 & 0.00 & -0.05 & 0.00 \\ & & 0.20 & 0.69 & -0.07 & 0.00 & -0.03 & 0.00 & -0.06 & 0.00 \\ & & 0.30 & 0.69 & -0.06 & 0.00 & -0.04 & 0.00 & -0.07 & 0.00 \\ & & 0.40 & 0.70 & -0.05 & 0.00 & -0.04 & 0.00 & -0.08 & 0.00 \\ & & 0.50 & 0.70 & -0.04 & 0.00 & -0.04 & 0.00 & -0.08 & 0.00 \\ & & 0.60 & 0.72 & -0.03 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.70 & 0.72 & -0.02 & 0.05 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.80 & 0.73 & -0.02 & 0.01 & -0.05 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.74 & -0.01 & 0.01 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.75 & -0.00 & 0.47 & -0.05 & 0.00 & -0.07 & 0.00 \\ \midrule 125 & 0.00 & 0.00 & 0.57 & -0.08 & 0.01 & -0.13 & 0.00 & -0.25 & 0.00 \\ & & 0.10 & 0.59 & -0.07 & 0.00 & -0.12 & 0.00 & -0.23 & 0.00 \\ & & 0.20 & 0.60 & -0.07 & 0.00 & -0.12 & 0.00 & -0.23 & 0.00 \\ & & 0.30 & 0.63 & -0.06 & 0.00 & -0.10 & 0.00 & -0.21 & 0.00 \\ & & 0.40 & 0.65 & -0.05 & 0.00 & -0.10 & 0.00 & -0.20 & 0.00 \\ & & 0.50 & 0.67 & -0.05 & 0.00 & -0.10 & 0.00 & -0.18 & 0.00 \\ & & 0.60 & 0.69 & -0.04 & 0.00 & -0.09 & 0.00 & -0.16 & 0.00 \\ & & 0.70 & 0.71 & -0.03 & 0.00 & -0.08 & 0.00 & -0.13 & 0.00 \\ & & 0.80 & 0.73 & -0.02 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.75 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.77 & -0.01 & 0.01 & -0.05 & 0.00 & -0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.59 & -0.08 & 0.00 & -0.08 & 0.00 & -0.14 & 0.00 \\ & & 0.10 & 0.61 & -0.06 & 0.00 & -0.08 & 0.00 & -0.14 & 0.00 \\ & & 0.20 & 0.62 & -0.05 & 0.01 & -0.08 & 0.00 & -0.16 & 0.00 \\ & & 0.30 & 0.64 & -0.05 & 0.00 & -0.08 & 0.00 & -0.16 & 0.00 \\ & & 0.40 & 0.66 & -0.05 & 0.00 & -0.08 & 0.00 & -0.16 & 0.00 \\ & & 0.50 & 0.68 & -0.05 & 0.00 & -0.09 & 0.00 & 
-0.17 & 0.00 \\ & & 0.60 & 0.70 & -0.03 & 0.00 & -0.08 & 0.00 & -0.15 & 0.00 \\ & & 0.70 & 0.72 & -0.03 & 0.00 & -0.08 & 0.00 & -0.13 & 0.00 \\ & & 0.80 & 0.74 & -0.02 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.76 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.78 & -0.01 & 0.01 & -0.05 & 0.00 & -0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.63 & -0.08 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.10 & 0.64 & -0.08 & 0.00 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.20 & 0.64 & -0.06 & 0.00 & -0.06 & 0.00 & -0.12 & 0.00 \\ & & 0.30 & 0.66 & -0.05 & 0.00 & -0.07 & 0.00 & -0.13 & 0.00 \\ & & 0.40 & 0.67 & -0.04 & 0.00 & -0.07 & 0.00 & -0.13 & 0.00 \\ & & 0.50 & 0.69 & -0.04 & 0.00 & -0.08 & 0.00 & -0.14 & 0.00 \\ & & 0.60 & 0.71 & -0.04 & 0.00 & -0.08 & 0.00 & -0.14 & 0.00 \\ & & 0.70 & 0.73 & -0.03 & 0.00 & -0.07 & 0.00 & -0.12 & 0.00 \\ & & 0.80 & 0.75 & -0.02 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.76 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.78 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.66 & -0.10 & 0.00 & -0.03 & 0.01 & -0.05 & 0.01 \\ & & 0.10 & 0.66 & -0.07 & 0.00 & -0.05 & 0.00 & -0.08 & 0.00 \\ & & 0.20 & 0.67 & -0.07 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.30 & 0.67 & -0.05 & 0.00 & -0.05 & 0.00 & -0.10 & 0.00 \\ & & 0.40 & 0.69 & -0.04 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.50 & 0.70 & -0.03 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.60 & 0.72 & -0.04 & 0.00 & -0.06 & 0.00 & -0.12 & 0.00 \\ & & 0.70 & 0.73 & -0.03 & 0.00 & -0.07 & 0.00 & -0.12 & 0.00 \\ & & 0.80 & 0.75 & -0.02 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.90 & 0.77 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.79 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.68 & -0.10 & 0.00 & -0.02 & 0.02 & -0.05 & 0.01 \\ & & 0.10 & 0.68 & -0.09 & 0.00 & -0.03 & 0.00 & -0.06 & 0.00 \\ & & 0.20 & 0.69 & -0.06 & 0.00 & -0.04 & 0.00 & -0.07 & 0.00 \\ & & 0.30 & 0.69 & -0.05 & 0.00 & -0.04 & 0.00 & -0.09 & 0.00 \\ & & 0.40 & 0.70 & -0.04 & 0.01 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.50 & 0.71 & -0.04 & 0.00 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.60 & 0.73 & -0.03 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.70 & 0.74 & -0.03 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.80 & 0.76 & -0.02 & 0.00 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.77 & -0.02 & 0.01 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.79 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.70 & -0.08 & 0.01 & -0.02 & 0.04 & -0.05 & 0.01 \\ & & 0.10 & 0.70 & -0.09 & 0.00 & -0.03 & 0.01 & -0.05 & 0.01 \\ & & 0.20 & 0.70 & -0.07 & 0.00 & -0.03 & 0.01 & -0.06 & 0.00 \\ & & 0.30 & 0.71 & -0.06 & 0.00 & -0.04 & 0.00 & -0.07 & 0.00 \\ & & 0.40 & 0.72 & -0.04 & 0.01 & -0.04 & 0.00 & -0.08 & 0.00 \\ & & 0.50 & 0.73 & -0.03 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.60 & 0.74 & -0.03 & 0.00 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.70 & 0.75 & -0.03 & 0.00 & -0.05 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.77 & -0.02 & 0.00 & -0.06 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.78 & -0.01 & 0.02 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 1.00 & 0.80 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ \midrule 150 & 0.00 & 0.00 & 0.62 & -0.11 & 0.00 & -0.13 & 0.00 & -0.23 & 0.00 \\ & & 0.10 & 0.63 & -0.10 & 0.00 & -0.11 & 0.00 & -0.22 & 0.00 \\ & & 0.20 & 0.65 & -0.09 & 0.00 & -0.11 & 0.00 & -0.22 & 0.00 \\ & & 0.30 & 0.67 & -0.07 & 0.00 & -0.09 & 0.00 & -0.20 & 0.00 \\ & & 0.40 & 0.69 & -0.05 & 
0.00 & -0.09 & 0.00 & -0.18 & 0.00 \\ & & 0.50 & 0.71 & -0.04 & 0.00 & -0.08 & 0.00 & -0.15 & 0.00 \\ & & 0.60 & 0.73 & -0.03 & 0.00 & -0.08 & 0.00 & -0.12 & 0.00 \\ & & 0.70 & 0.75 & -0.03 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.78 & -0.02 & 0.00 & -0.06 & 0.00 & -0.07 & 0.00 \\ & & 0.90 & 0.79 & -0.01 & 0.00 & -0.04 & 0.00 & -0.05 & 0.00 \\ & & 1.00 & 0.82 & -0.01 & 0.00 & -0.03 & 0.00 & -0.03 & 0.00 \\ \cmidrule(lr){2-10} & 0.05 & 0.00 & 0.63 & -0.10 & 0.00 & -0.10 & 0.00 & -0.17 & 0.00 \\ & & 0.10 & 0.65 & -0.09 & 0.00 & -0.09 & 0.00 & -0.18 & 0.00 \\ & & 0.20 & 0.66 & -0.08 & 0.00 & -0.10 & 0.00 & -0.19 & 0.00 \\ & & 0.30 & 0.68 & -0.06 & 0.00 & -0.09 & 0.00 & -0.17 & 0.00 \\ & & 0.40 & 0.70 & -0.05 & 0.00 & -0.08 & 0.00 & -0.17 & 0.00 \\ & & 0.50 & 0.72 & -0.04 & 0.00 & -0.08 & 0.00 & -0.16 & 0.00 \\ & & 0.60 & 0.74 & -0.03 & 0.00 & -0.07 & 0.00 & -0.13 & 0.00 \\ & & 0.70 & 0.76 & -0.03 & 0.00 & -0.07 & 0.00 & -0.10 & 0.00 \\ & & 0.80 & 0.78 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.80 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ & & 1.00 & 0.82 & -0.01 & 0.00 & -0.03 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.10 & 0.00 & 0.65 & -0.09 & 0.00 & -0.07 & 0.00 & -0.09 & 0.00 \\ & & 0.10 & 0.66 & -0.09 & 0.00 & -0.08 & 0.00 & -0.13 & 0.00 \\ & & 0.20 & 0.67 & -0.08 & 0.00 & -0.08 & 0.00 & -0.14 & 0.00 \\ & & 0.30 & 0.69 & -0.06 & 0.00 & -0.08 & 0.00 & -0.15 & 0.00 \\ & & 0.40 & 0.71 & -0.05 & 0.00 & -0.07 & 0.00 & -0.15 & 0.00 \\ & & 0.50 & 0.73 & -0.04 & 0.00 & -0.07 & 0.00 & -0.15 & 0.00 \\ & & 0.60 & 0.75 & -0.03 & 0.00 & -0.07 & 0.00 & -0.13 & 0.00 \\ & & 0.70 & 0.76 & -0.03 & 0.00 & -0.07 & 0.00 & -0.11 & 0.00 \\ & & 0.80 & 0.79 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.80 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ & & 1.00 & 0.82 & -0.01 & 0.00 & -0.03 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.15 & 0.00 & 0.67 & -0.10 & 0.00 & -0.04 & 0.00 & -0.06 & 0.01 \\ & & 0.10 & 0.68 & -0.08 & 0.00 & -0.05 & 0.00 & -0.08 & 0.00 \\ & & 0.20 & 0.69 & -0.07 & 0.00 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.30 & 0.70 & -0.06 & 0.00 & -0.06 & 0.00 & -0.12 & 0.00 \\ & & 0.40 & 0.72 & -0.05 & 0.00 & -0.07 & 0.00 & -0.13 & 0.00 \\ & & 0.50 & 0.73 & -0.04 & 0.00 & -0.07 & 0.00 & -0.14 & 0.00 \\ & & 0.60 & 0.75 & -0.03 & 0.00 & -0.06 & 0.00 & -0.13 & 0.00 \\ & & 0.70 & 0.77 & -0.03 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.80 & 0.79 & -0.02 & 0.00 & -0.06 & 0.00 & -0.08 & 0.00 \\ & & 0.90 & 0.81 & -0.01 & 0.00 & -0.05 & 0.00 & -0.06 & 0.00 \\ & & 1.00 & 0.83 & -0.01 & 0.00 & -0.04 & 0.00 & -0.04 & 0.00 \\ \cmidrule(lr){2-10} & 0.20 & 0.00 & 0.69 & -0.08 & 0.00 & -0.02 & 0.07 & -0.05 & 0.01 \\ & & 0.10 & 0.69 & -0.08 & 0.00 & -0.03 & 0.00 & -0.06 & 0.01 \\ & & 0.20 & 0.70 & -0.07 & 0.00 & -0.04 & 0.00 & -0.07 & 0.00 \\ & & 0.30 & 0.71 & -0.06 & 0.00 & -0.06 & 0.00 & -0.10 & 0.00 \\ & & 0.40 & 0.73 & -0.05 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.50 & 0.74 & -0.04 & 0.00 & -0.06 & 0.00 & -0.12 & 0.00 \\ & & 0.60 & 0.76 & -0.03 & 0.00 & -0.06 & 0.00 & -0.13 & 0.00 \\ & & 0.70 & 0.77 & -0.03 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.80 & 0.79 & -0.02 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.81 & -0.02 & 0.00 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.83 & -0.01 & 0.00 & -0.04 & 0.00 & -0.05 & 0.00 \\ \cmidrule(lr){2-10} & 0.25 & 0.00 & 0.71 & -0.08 & 0.00 & -0.01 & 0.19 & -0.04 & 0.02 \\ & & 0.10 & 0.71 & -0.08 & 0.00 & -0.02 & 0.02 & -0.05 & 0.01 \\ & & 0.20 & 0.71 & -0.07 & 0.00 & -0.03 & 0.00 & -0.06 & 0.01 \\ 
& & 0.30 & 0.72 & -0.06 & 0.00 & -0.04 & 0.00 & -0.08 & 0.00 \\ & & 0.40 & 0.74 & -0.05 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.50 & 0.75 & -0.04 & 0.00 & -0.05 & 0.00 & -0.11 & 0.00 \\ & & 0.60 & 0.77 & -0.03 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.70 & 0.78 & -0.02 & 0.00 & -0.06 & 0.00 & -0.11 & 0.00 \\ & & 0.80 & 0.80 & -0.02 & 0.00 & -0.05 & 0.00 & -0.09 & 0.00 \\ & & 0.90 & 0.81 & -0.02 & 0.00 & -0.05 & 0.00 & -0.07 & 0.00 \\ & & 1.00 & 0.83 & -0.01 & 0.00 & -0.04 & 0.00 & -0.05 & 0.00 \\ \bottomrule \label{table:ttestLogisticAccept} \end{longtable} \end{document}
Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$?

I've been wondering about this one for a while; I find it a little weird how abruptly it happens. Basically, why do we need just three uniforms for $Z_n$ to smooth out like it does? And why does the smoothing-out happen so relatively quickly?

$Z_2$: (images shamelessly stolen from John D. Cook's blog: http://www.johndcook.com/blog/2009/02/12/sums-of-uniform-random-values/)

Why doesn't it take, say, four uniforms? Or five? Or...?

normal-distribution mathematical-statistics uniform central-limit-theorem (asked by tetragrammaton)

Well, to be so simple as to be facile: because the sum of 3 uniforms has quadratic segments in its PDF, and once you get two or more uniforms you have a peak at the mean. A quadratic peak is "smooth"... and the joins between quadratic pieces are at 1 and 2, so it can't kink at 1.5; there are other ways of arriving at the same conclusion. – Glen_b (Oct 30 '12)

We can take various approaches to this, any of which may seem intuitive to some people and less than intuitive to others. To accommodate such variation, this answer surveys several such approaches, covering the major divisions of mathematical thought--analysis (the infinite and the infinitesimal), geometry/topology (spatial relationships), and algebra (formal patterns of symbolic manipulation)--as well as probability itself. It culminates in an observation that unifies all four approaches, demonstrates there is a genuine question to be answered here, and shows exactly what the issue is. Each approach provides, in its own way, deeper insight into the nature of the shapes of the probability distribution functions of sums of independent uniform variables.

The Uniform $[0,1]$ distribution has several basic descriptions. When $X$ has such a distribution, the chance that $X$ lies in a measurable set $A$ is just the measure (length) of $A \cap [0,1]$, written $|A \cap [0,1]|$. From this it is immediate that the cumulative distribution function (CDF) is $$F_X(x) = \Pr(X \le x) = |(-\infty, x] \cap [0,1]| = |[0,\min(x,1)]| = \begin{cases} 0 & x\lt 0 \\ x & 0\leq x\leq 1 \\ 1 & x\gt 1. \end{cases}$$ The probability density function (PDF), which is the derivative of the CDF, is $f_X(x) = 1$ for $0 \le x \le 1$ and $f_X(x)=0$ otherwise. (It is undefined at $0$ and $1$.)

Intuition from Characteristic Functions (Analysis)

The characteristic function (CF) of any random variable $X$ is the expectation of $\exp(i t X)$ (where $i$ is the imaginary unit, $i^2=-1$). Using the PDF of a uniform distribution we can compute $$\phi_X(t) = \int_{-\infty}^\infty \exp(i t x) f_X(x) dx = \int_0^1 \exp(i t x) dx = \left. \frac{\exp(itx)}{it} \right|_{x=0}^{x=1} = \frac{\exp(it)-1}{it}.$$ The CF is a (version of the) Fourier transform of the PDF, $\phi(t) = \hat{f}(t)$. The most basic theorems about Fourier transforms are:

- The CF of a sum of independent variables $X+Y$ is the product of their CFs.
- When the original PDF $f$ is continuous and $X$ is bounded, $f$ can be recovered from the CF $\phi$ by a closely related version of the Fourier transform, $$f(x) = \check{\phi}(x) = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt.$$

- When $f$ is differentiable, its derivative can be computed under the integral sign: $$f'(x) = \frac{d}{dx} \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt = \frac{-i}{2\pi} \int_{-\infty}^\infty t \exp(-i x t) \phi(t) dt.$$

For this to be well-defined, the last integral must converge absolutely; that is, $$\int_{-\infty}^\infty |t \exp(-i x t) \phi(t)| dt = \int_{-\infty}^\infty |t| |\phi(t)| dt$$ must converge to a finite value. Conversely, when it does converge, the derivative exists everywhere by virtue of these inversion formulas.

It is now clear exactly how differentiable the PDF for a sum of $n$ uniform variables is: from the first bullet, the CF of the sum of iid variables is the CF of one of them raised to the $n^\text{th}$ power, here equal to $(\exp(i t) - 1)^n / (i t)^n$. The numerator is bounded (it consists of sine waves) while the denominator is $O(t^{n})$. We can multiply such an integrand by $t^{s}$ and it will still converge absolutely when $s \lt n-1$ and converge conditionally when $s = n-1$. Thus, repeated application of the third bullet shows that the PDF for the sum of $n$ uniform variates will be continuously $n-2$ times differentiable and, in most places, it will be $n-1$ times differentiable.

(Figure: the blue shaded curve is a log-log plot of the absolute value of the real part of the CF of the sum of $n=10$ iid uniform variates. The dashed red line is an asymptote; its slope is $-10$, showing that the PDF is $10 - 2 = 8$ times differentiable. For reference, the gray curve plots the real part of the CF for a similarly shaped Gaussian function (a normal PDF).)

Intuition from Probability

Let $Y$ and $X$ be independent random variables where $X$ has a Uniform $[0,1]$ distribution. Consider a narrow interval $(t, t+dt]$. We decompose the chance that $X+Y \in (t, t+dt]$ into the chance that $Y$ is sufficiently close to this interval times the chance that $X$ is just the right size to place $X+Y$ in this interval, given that $Y$ is close enough: $$\begin{aligned} f_{X+Y}(t)\, dt &= \Pr(X+Y\in (t,t+dt])\\ &= \Pr(X+Y\in (t,t+dt] \mid Y \in (t-1, t+dt])\, \Pr(Y \in (t-1, t+dt]) \\ &= \Pr(X \in (t-Y, t-Y+dt] \mid Y \in (t-1, t+dt]) \left(F_Y(t+dt) - F_Y(t-1)\right) \\ &= 1\cdot dt \left(F_Y(t+dt) - F_Y(t-1)\right). \end{aligned}$$ The final equality comes from the expression for the PDF of $X$. Dividing both sides by $dt$ and taking the limit as $dt\to 0$ gives $$f_{X+Y}(t) = F_Y(t) - F_Y(t-1).$$

In other words, adding a Uniform $[0,1]$ variable $X$ to any variable $Y$ changes the PDF $f_Y$ into a differenced CDF $F_Y(t) - F_Y(t-1)$. Because the PDF is the derivative of the CDF, this implies that each time we add an independent uniform variable to $Y$, the resulting PDF is one time more differentiable than before.

Let's apply this insight, starting with a uniform variable $Y$. The original PDF is not differentiable at $0$ or $1$: it is discontinuous there. The PDF of $Y+X$ is not differentiable at $0$, $1$, or $2$, but it must be continuous at those points, because it is the difference of integrals of the PDF of $Y$. Add another independent uniform variable $X_2$: the PDF of $Y+X+X_2$ is differentiable at $0$, $1$, $2$, and $3$--but it does not necessarily have second derivatives at those points. And so on.
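This recursion is easy to check numerically. The following minimal sketch (assuming only numpy; the grid step and the helper name add_uniform are arbitrary choices made here for illustration) builds the PDFs of sums of two and three uniforms by repeatedly applying $f_{Y+X}(t) = F_Y(t) - F_Y(t-1)$ on a grid, then compares one-sided slopes at $t=1$: the slope jumps by about $-2$ for $n=2$ (a visible kink) and by about $0$ for $n=3$, one order of smoothness gained per added uniform.

```python
# Numerical sketch of the differenced-CDF recursion (assumes numpy).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 4.0, dt)
f1 = np.where(t <= 1.0, 1.0, 0.0)          # PDF of a single Uniform[0,1]

def add_uniform(pdf):
    """PDF of (current sum) + Uniform[0,1] via f_new(t) = F(t) - F(t - 1)."""
    cdf = np.cumsum(pdf) * dt              # numerical antiderivative F(t)
    k = int(round(1.0 / dt))               # grid offset corresponding to t - 1
    return cdf - np.concatenate([np.zeros(k), cdf[:-k]])

f2 = add_uniform(f1)                       # triangular PDF on [0, 2]
f3 = add_uniform(f2)                       # piecewise-quadratic PDF on [0, 3]

# One-sided slopes just left and right of the former kink location t = 1.
i, a, b = int(round(1.0 / dt)), 10, 60
for name, f in [("n = 2", f2), ("n = 3", f3)]:
    left = (f[i - a] - f[i - b]) / ((b - a) * dt)
    right = (f[i + b] - f[i + a]) / ((b - a) * dt)
    print(name, "slope jump at t = 1:", round(right - left, 2))
# Roughly -2 for n = 2 (a visible kink) and roughly 0 for n = 3 (no kink).
```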
Intuition from Geometry

The CDF at $t$ of a sum of $n$ iid uniform variates equals the volume of the unit hypercube $[0,1]^n$ lying within the half-space $x_1+x_2+\cdots+x_n \le t$. The situation for $n=3$ variates is shown here, with $t$ set at $1/2$, $3/2$, and then $5/2$. As $t$ progresses from $0$ through $n$, the hyperplane $H_n(t): x_1+x_2+\cdots+x_n=t$ crosses vertices at $t=0$, $t=1, \ldots, t=n$. At each time the shape of the cross section changes: in the figure it first is a triangle (a $2$-simplex), then a hexagon, then a triangle again. Why doesn't the PDF have sharp bends at these values of $t$?

To understand this, first consider small values of $t$. Here, the hyperplane $H_n(t)$ cuts off an $(n-1)$-simplex. All $n-1$ dimensions of the simplex are directly proportional to $t$, whence its "area" is proportional to $t^{n-1}$. Some notation for this will come in handy later. Let $\theta$ be the "unit step function," $$\theta(x) = \begin{cases} 0 & x \lt 0 \\ 1 & x\ge 0. \end{cases}$$

If it were not for the presence of the other corners of the hypercube, this scaling would continue indefinitely. A plot of the area of the $(n-1)$-simplex would look like the solid blue curve below: it is zero at negative values and equals $t^{n-1}/(n-1)!$ at positive values, conveniently written $\theta(t) t^{n-1}/(n-1)!$. It has a "kink" of order $n-2$ at the origin, in the sense that all derivatives through order $n-3$ exist and are continuous, but that left and right derivatives of order $n-2$ exist but do not agree at the origin. (The other curves shown in this figure are $-3\theta(t-1) (t-1)^{2}/2!$ (red), $3\theta(t-2) (t-2)^{2}/2!$ (gold), and $-\theta(t-3) (t-3)^{2}/2!$ (black). Their roles in the case $n=3$ are discussed further below.)

To understand what happens when $t$ crosses $1$, let's examine in detail the case $n=2$, where all the geometry happens in a plane. We may view the unit "cube" (now just a square) as a linear combination of quadrants, as shown here. The first quadrant appears in the lower left panel, in gray. The value of $t$ is $1.5$, determining the diagonal line shown in all five panels. The CDF equals the yellow area shown at right. This yellow area comprises:

- the triangular gray area in the lower left panel,
- minus the triangular green area in the upper left panel,
- minus the triangular red area in the lower middle panel,
- plus any blue area in the upper middle panel (but there isn't any such area, nor will there be until $t$ exceeds $2$).

Every one of these $2^n=4$ areas is the area of a triangle. The first one scales like $t^n=t^2$, the next two are zero for $t\lt 1$ and otherwise scale like $(t-1)^n = (t-1)^2$, and the last is zero for $t\lt 2$ and otherwise scales like $(t-2)^n$. This geometric analysis has established that the CDF is proportional to $\theta(t)t^2 - \theta(t-1)(t-1)^2 - \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2 = \theta(t)t^2 - 2 \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$; equivalently, the PDF is proportional to the sum of the three functions $\theta(t)t$, $-2\theta(t-1)(t-1)$, and $\theta(t-2)(t-2)$ (each of them scaling linearly when $n=2$). The left panel of this figure shows their graphs: evidently, they are all versions of the original graph $\theta(t)t$, but (a) shifted by $0$, $1$, and $2$ units to the right and (b) rescaled by $1$, $-2$, and $1$, respectively.
The right panel shows the sum of these graphs (the solid black curve, normalized to have unit area): this is precisely the angular-looking PDF shown in the original question.

Now we can understand the nature of the "kinks" in the PDF of any sum of iid uniform variables. They are all exactly like the "kink" that occurs at $0$ in the function $\theta(t)t^{n-1}$, possibly rescaled, and shifted to the integers $1,2,\ldots, n$ corresponding to where the hyperplane $H_n(t)$ crosses the vertices of the hypercube. For $n=2$, this is a visible change in direction: the left derivative of $\theta(t)t$ at $0$ is $0$ while its right derivative is $1$. For $n=3$, this is a continuous change in direction, but a sudden (discontinuous) change in second derivative. For general $n$, there will be continuous derivatives through order $n-2$ but a discontinuity in the $(n-1)^\text{st}$ derivative.

Intuition from Algebraic Manipulation

The integration to compute the CF, the form of the conditional probability in the probabilistic analysis, and the synthesis of a hypercube as a linear combination of quadrants all suggest returning to the original uniform distribution and re-expressing it as a linear combination of simpler things. Indeed, its PDF can be written $$f_X(x) = \theta(x) - \theta(x-1).$$ Let us introduce the shift operator $\Delta$: it acts on any function $f$ by shifting its graph one unit to the right: $$(\Delta f)(x) = f(x-1).$$ Formally, then, for the PDF of a uniform variable $X$ we may write $$f_X = (1 - \Delta)\theta.$$

The PDF of a sum of $n$ iid uniforms is the convolution of $f_X$ with itself $n$ times. This follows from the definition of a sum of random variables: the convolution of two functions $f$ and $g$ is the function $$(f \star g)(x) = \int_{-\infty}^{\infty} f(x-y)g(y) dy.$$ It is easy to verify that convolution commutes with $\Delta$. Just change the variable of integration from $y$ to $y+1$: $$\begin{aligned} (f \star (\Delta g))(x) &= \int_{-\infty}^{\infty} f(x-y)(\Delta g)(y) dy \\ &= \int_{-\infty}^{\infty} f(x-y)g(y-1) dy \\ &= \int_{-\infty}^{\infty} f((x-1)-y)g(y) dy \\ &= (\Delta (f \star g))(x). \end{aligned}$$

For the PDF of the sum of $n$ iid uniforms, we may now proceed algebraically to write $$f = f_X^{\star n} = ((1 - \Delta)\theta)^{\star n} = (1-\Delta)^n \theta^{\star n}$$ (where the $\star n$ "power" denotes repeated convolution, not pointwise multiplication!). Now $\theta^{\star n}$ is a direct, elementary integration, giving $$\theta^{\star n}(x) = \theta(x) \frac{x^{n-1}}{(n-1)!}.$$ The rest is algebra, because the Binomial Theorem applies (as it does in any commutative algebra over the reals): $$f = (1-\Delta)^n \theta^{\star n} = \sum_{i=0}^{n} (-1)^i \binom{n}{i} \Delta^i \theta^{\star n}.$$ Because $\Delta^i$ merely shifts its argument by $i$, this exhibits the PDF $f$ as a linear combination of shifted versions of $\theta(x) x^{n-1}$, exactly as we deduced geometrically: $$f(x) = \frac{1}{(n-1)!}\sum_{i=0}^{n} (-1)^i \binom{n}{i} (x-i)^{n-1}\theta(x-i).$$ (John Cook quotes this formula later in his blog post, using the notation $(x-i)^{n-1}_+$ for $(x-i)^{n-1}\theta(x-i)$.)

Accordingly, because $x^{n-1}$ is a smooth function everywhere, any singular behavior of the PDF will occur only at places where $\theta(x)$ is singular (obviously just $0$) and at those places shifted to the right by $1, 2, \ldots, n$. The nature of that singular behavior--the degree of smoothness--will therefore be the same at all $n+1$ locations.
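This closed form is easy to evaluate directly. Here is a minimal sketch (assuming numpy; the helper name sum_uniform_pdf is just an illustrative label) that implements the formula and checks it against a histogram of simulated sums of three uniforms:

```python
# Sketch: evaluate f(x) = (1/(n-1)!) * sum_i (-1)^i C(n,i) (x-i)^{n-1} theta(x-i)
# and compare with an empirical density of simulated sums (assumes numpy).
import numpy as np
from math import comb, factorial

def sum_uniform_pdf(x, n):
    """Closed-form PDF of the sum of n iid Uniform[0,1] variables."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for i in range(n + 1):
        total += (-1) ** i * comb(n, i) * np.where(x >= i, (x - i) ** (n - 1), 0.0)
    return total / factorial(n - 1)

n = 3
rng = np.random.default_rng(0)
sums = rng.random((200_000, n)).sum(axis=1)
hist, edges = np.histogram(sums, bins=60, range=(0, n), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(round(np.max(np.abs(sum_uniform_pdf(centers, n) - hist)), 3))
# Typically around 0.01-0.02 with this sample size: formula and simulation agree.
```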
Illustrating this is the picture for $n=8$, showing (in the left panel) the individual terms in the sum and (in the right panel) the partial sums, culminating in the sum itself (solid black curve): It is useful to note that this last approach has finally yielded a compact, practical expression for computing the PDF of a sum of $n$ iid uniform variables. (A formula for the CDF is similarly obtained.) The Central Limit Theorem has little to say here. After all, a sum of iid Binomial variables converges to a Normal distribution, but that sum is always discrete: it never even has a PDF at all! We should not hope for any intuition about "kinks" or other measures of differentiability of a PDF to come from the CLT. (+1) Fantastic! Now, how long did it take for you to put all of this together?! – cardinal Nov 7 '12 at 19:20 @Cardinal This was the last question I read before losing power last Monday. During the ensuing week, the long dark evenings provided opportunities to think it through :-) and, for amusement, to develop multiple answers. After the power was restored last weekend, it was just a matter of finding some time to make the illustrations and write it all up (which took longer than expected, I confess). I hope that perhaps some of this thread might serve as a reference for related future questions about sums of random variables. – whuber♦ Nov 7 '12 at 20:18 Wow. I wish I could 'favourite' this answer. – Rhubbarb Nov 8 '12 at 14:45 whuber, this is absolutely amazing. I never realized how deep such a simple question could be. It's gonna take me a while to grok your answer, but for now, thank you so much! – tetragrammaton Nov 8 '12 at 22:59 I will violate SE policy on comments, by saying that we (all of the crossvalidate.com) should bribe your power company to cut off the power more often :) – mpiktas Sep 26 '13 at 17:48 You could argue that the probability density function of a uniform random variable is finite, so its integral the cumulative density function of a uniform random variable is continuous, so the probability density function of the sum of two uniform random variables is continuous, so its integral the cumulative density function of the sum of two uniform random variables is smooth (continuously differentiable), so the probability density function of the sum of three uniform random variables is smooth. – Henry I think the more surprising thing is that you get the sharp peak for $n=2$. The Central Limit Theorem says that for large enough sample sizes the distribution of the mean (and the sum is just the mean times $n$, a fixed constant for each graph) will be approximately normal. It turns out that the uniform distribution is really well behaved with respect to the CLT (symmetric, no heavy tails (well not much of any tails), no possibility of outliers), so for the uniform the sample size needed to be "large enough" is not very big (around 5 or 6 for a good approximation), you are already seeing the OK approximation at $n=3$. – Greg Snow
CommonCrawl
Rough number A k-rough number, as defined by Finch in 2001 and 2003, is a positive integer whose prime factors are all greater than or equal to k. k-roughness has alternately been defined as requiring all prime factors to strictly exceed k.[1] Examples (after Finch) 1. Every odd positive integer is 3-rough. 2. Every positive integer that is congruent to 1 or 5 mod 6 is 5-rough. 3. Every positive integer is 2-rough, since all its prime factors, being prime numbers, exceed 1. See also • Buchstab function, used to count rough numbers • Smooth number Notes 1. p. 130, Naccache and Shparlinski 2009. References • Weisstein, Eric W. "Rough Number". MathWorld. • Finch's definition from Number Theory Archives • "Divisibility, Smoothness and Cryptographic Applications", D. Naccache and I. E. Shparlinski, pp. 115–173 in Algebraic Aspects of Digital Communications, eds. Tanush Shaska and Engjell Hasimaj, IOS Press, 2009, ISBN 9781607500193. The On-Line Encyclopedia of Integer Sequences (OEIS) lists p-rough numbers for small p: • 2-rough numbers: A000027 • 3-rough numbers: A005408 • 5-rough numbers: A007310 • 7-rough numbers: A007775 • 11-rough numbers: A008364 • 13-rough numbers: A008365 • 17-rough numbers: A008366 • 19-rough numbers: A166061 • 23-rough numbers: A166063
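The definition lends itself to a direct test by trial division. The helper below is an illustrative addition (it is not part of the article) and uses Finch's convention that every prime factor must be greater than or equal to k; running it for k = 5 reproduces the start of OEIS A007310, consistent with example 2 above.

def is_k_rough(n: int, k: int) -> bool:
    # True if every prime factor of n is >= k (Finch's convention)
    if n < 1:
        raise ValueError("n must be a positive integer")
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            if p < k:            # found a prime factor smaller than k
                return False
            while m % p == 0:    # strip this (acceptable) prime factor
                m //= p
        p += 1
    return m == 1 or m >= k      # any remaining factor is prime

print([n for n in range(1, 30) if is_k_rough(n, 5)])   # [1, 5, 7, 11, 13, 17, 19, 23, 25, 29]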
Wikipedia
The Perspective and Orthographic Projection Matrix What Are Projection Matrices and Where/Why Are They Used? Projection Matrices: What You Need to Know First Building a Basic Perspective Projection Matrix The OpenGL Perspective Projection Matrix About the Projection Matrix, the GPU Rendering Pipeline and Clipping The OpenGL Orthographic Projection Matrix What Will We Study in this Chapter? In the first chapter of this lesson, we said that projection matrices were used in the GPU rendering pipeline. We mentioned that there were two GPU rendering pipelines: the old one, called the fixed-function pipeline and the new one which is generally referred to as the programmable rendering pipeline. We also talked about how clipping, the process that consists of discarding or trimming primitives that are either outside or straddling across the boundaries of the frustum, happens somehow while the points are being transformed by the projection matrix. Finally, we also explained that in fact, projection matrices don't convert points from camera space to NDC space but to homogeneous clip space. It is time to provide more information about these different topics. Let's explain what it means when we say that clipping happens while the points are being transformed. Let's explain what clip space is. And finally let's review how the projection matrices are used in the old and the new GPU rendering pipeline. Clipping and Clip Space Figure 1: example of clipping in 2D. At the clipping stage, new triangles may be generated wherever the original geometry overlaps the boundaries of the viewing frustum. Figure 2: example of clipping in 3D. Let's recall quickly that the main purpose of clipping is to essentially "reject" geometric primitives which are behind the eye or located exactly at the eye position (this would mean a division by 0 which we don't want) and more generally trim off part of the geometric primitives which are outside the viewing area (more information on this topic can be found in chapter 2). This viewing area is defined by the truncated pyramid of the perspective or viewing frustum. Any professional rendering system actually somehow needs to implement this step. Note that the process can result into creating more triangles as shown in figure 1 than the scenes initially contained. The most common clipping algorithms are the Cohen-Sutherland algorithm for lines and the Sutherland-Hodgman algorithm for polygons. It happens that clipping is more easily done in clip space than in camera space (before vertices are transformed by the projection matrix) or screen space (after the perspective divide). Remember that when the points are transformed by the projection matrix, they are first transformed as you would with any other 4x4 matrix, and the transformed coordinates are then normalized: that is, the x- y- and z-coordinates of the transformed points are divided by the transformed point z-coordinate. Clip space is the space points are in just before they get normalized. In summary, what happens on a GPU is this. Points are transformed from camera space to clip space in the vertex shader. The input vertex is converted from Cartesian coordinates to homogeneous coordinates and its w-coordinate is set to 1. The predefined gl_Position variable, in which the transformed point is stored, is also a point with homogeneous coordinates. Though when the input vertex is multiplied by the projection matrix, the normalized step is not yet performed. gl_Position is in homogeneous clip space. 
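To make the preceding description concrete, here is a small sketch in Python with NumPy (an illustration added here; the lesson's own code uses GLSL and C++, and the matrix built below is just the usual OpenGL-style perspective matrix written in the column-vector convention). It multiplies a camera-space point by the projection matrix, keeps the result in homogeneous clip space, tests it against the clip-space bounds derived just below, and only then performs the perspective divide.

import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # OpenGL-style perspective projection matrix (column-vector convention)
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

proj = perspective(90.0, 1.0, 0.1, 100.0)
p_camera = np.array([0.2, 0.1, -1.0, 1.0])       # camera-space point, w = 1
p_clip = proj @ p_camera                          # homogeneous clip space: no divide yet
x, y, z, w = p_clip
inside = w > 0 and all(-w <= c <= w for c in (x, y, z))
p_ndc = p_clip[:3] / w if inside else None        # perspective divide only for points kept by clipping
print(p_clip, inside, p_ndc)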
When all the vertices have been processed by the vertex shader, triangles whose vertices are now in clip space are clipped. Once clipping is done, all vertices are normalized. The x-, y- and z-coordinates of each vertex are divided by its w-coordinate. This is where and when the perspective divide occurs. Let's recall that after the normalization step, points which are visible to the camera are all contained in the range [-1,1] both in x and y. This happens in the last part of the point-matrix multiplication process, when the coordinates are normalized as we just said: $$\begin{array}{l} -1 \leq \dfrac{x'}{w'} \leq 1 \\ -1 \leq \dfrac{y'}{w'} \leq 1 \\ -1 \leq \dfrac{z'}{w'} \leq 1 \\ \end{array}$$ Or: \(0 \leq \dfrac{z'}{w'} \leq 1\) depending on the convention you are using. Therefore we can also write: $$\begin{array}{l} -w' \leq x' \leq w' \\ -w' \leq y' \leq w' \\ -w' \leq z' \leq w' \\ \end{array}$$ This is the state x', y' and z' are in before they get normalized by w', or to say it differently, this is what the coordinates look like in clip space. We can add a fourth equation: \(0 \lt w'\). The purpose of this equation is to guarantee that we will never divide any of the coordinates by 0 (which would be a degenerate case). These equations work mathematically. You don't really need, though, to try to picture what vertices look like or what it means to work with a four-dimensional space. All they say is that the clip space of a given vertex whose coordinates are {x, y, z} is defined by the extents [-w,w] (the w value indicates what the dimensions of the clip space are). Note that this clip space is the same for each coordinate of the point and the clip space of any given vertex is a cube. Though note also that each point is likely to have its own clip space (each set of x, y and z-coordinates is likely to have a different w value). In other words, every vertex has its own clip space in which it exists (and basically needs to "fit" in). This lesson is only about projection matrices. All we need to know in the context of this lesson is where clipping occurs in the vertex transformation pipeline and what clip space means, which we just explained. Everything else will be explained in the lessons on the Sutherland-Hodgman and the Cohen-Sutherland algorithms which you can find in the Advanced Rasterization Techniques section. The "Old" Point (or Vertex) Transformation Pipeline The fixed-function pipeline is now deprecated in OpenGL and other graphics APIs. Do not use it anymore. Use the "new" programmable GPU rendering pipeline instead. We only kept this section for reference and because you might still come across some articles on the Web referencing methods from the old pipeline. Vertex is a better term when it comes to describing how points (vertices) are transformed in OpenGL (or Direct3D, Metal or any other graphics API you can think of). OpenGL (and other graphics APIs) had (in the old fixed-function pipeline) two possible modes for modifying the state of the camera: GL_PROJECTION and GL_MODELVIEW. GL_PROJECTION allowed you to set the projection matrix itself. As we know by now (see the previous chapter) this matrix is built from the left, right, bottom and top screen coordinates (which are computed from the camera's field of view and near clipping plane), as well as the near and far clipping planes (which are parameters of the camera). These parameters define the shape of the camera's frustum and all the vertices or points from the scene contained within this frustum are visible.
In OpenGL, these parameters were passed to the API through a call to glFrustum (which we show an implementation of in the previous chapter): glFrustum(float left, float right, float bottom, float top, float near, float far); The GL_MODELVIEW mode allowed you to set the world-to-camera matrix. A typical OpenGL program set the perspective projection matrix and the model-view matrix using the following sequence of calls: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glFrustum(l, r, b, t, n, f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0, 0, 10); ... First we would make the GL_PROJECTION mode active (line 1). Next, to set up the projection matrix, we would make a call to glFrustum, passing as arguments to the function the left, right, bottom and top screen coordinates as well as the near and far clipping planes. Once the projection matrix was set, we would switch to the GL_MODELVIEW mode (line 4). Actually, the GL_MODELVIEW matrix could be seen as the combination of the "VIEW" transformation matrix (the world-to-camera matrix) with the "MODEL" matrix which is the transformation applied to the object (the object-to-world matrix). There was no concept of a world-to-camera transform separate from the object-to-world transform. The two transforms were combined in the GL_MODELVIEW matrix. $$GL\_MODELVIEW = M_{\text{object-to-world}} * M_{\text{world-to-camera}}$$ First a point \(P_w\) expressed in world space was transformed to camera space (or eye space) using the GL_MODELVIEW matrix. The resulting point \(P_c\) was then projected onto the image plane using the GL_PROJECTION matrix. We ended up with a point expressed in homogeneous coordinates in which the coordinate w contained the point \(P_c\)'s z coordinate. The Vertex Transformation Pipeline in the New Programmable GPU Rendering Pipeline The pipeline in the new programmable GPU rendering pipeline is more or less the same as the old pipeline, but what is really different in this new pipeline is the way you set things up. In the new pipeline, there is no more concept of GL_MODELVIEW or GL_PROJECTION mode. This step can now be freely programmed in a vertex shader. As mentioned in the first chapter of this lesson, the vertex shader is like a small program. You can program this vertex shader to tell the GPU how vertices making up the geometry of the scene should be processed. In other words, this is where you should be doing all your vertex transformations: the world-to-camera transformation if necessary but more importantly the projection transformation. A program using the OpenGL API doesn't produce an image if the vertex shader and its associated fragment shader are not defined. The simplest form of vertex shader looks like this: in vec3 vert; void main() { // does not alter the vertices at all gl_Position = vec4(vert, 1); } This program doesn't even transform the input vertex with a perspective projection matrix, which in some cases can still produce a visible result depending on the size and the position of the geometry as well as how the viewport is set. But this is not relevant in this lesson. What we can see by looking at this code is that the input vertex is set to be a vec4 which is nothing other than a point with homogeneous coordinates. Note that gl_Position too is a point with homogeneous coordinates. As expected, the vertex shader outputs the position of the vertex in clip space (see the diagram of the vertex transformation pipeline above).
In reality you are more likely to use a vertex shader like this one: uniform mat4 worldToCamMatrix, projMatrix; in vec3 vert; void main() { gl_Position = projMatrix * worldToCamMatrix * vec4(vert, 1); } It uses both a world-to-camera and a projection matrix to transform the vertex to camera space and then to clip space. Both matrices are set externally in the program, using calls provided to you by the OpenGL API (glGetUniformLocation to find the location of the variable in the shader and glUniformMatrix4fv to set the matrix variable using the previously found location): Matrix44f worldToCamera = ... // See note below to learn about whether you need to transpose the matrix or not before using it in glUniformMatrix4fv //worldToCamera.transposeMe(); //projMatrix.transposeMe(); GLuint projMatrixLoc = glGetUniformLocation(p, "projMatrix"); GLuint worldToCamLoc = glGetUniformLocation(p, "worldToCamMatrix"); glUniformMatrix4fv(projMatrixLoc, 1, GL_FALSE, projMatrix); glUniformMatrix4fv(worldToCamLoc, 1, GL_FALSE, worldToCamera); Edit - January 2017: do I need to transpose the matrix in an OpenGL program or not? Despite our effort to make things as clear as possible, it is easy to still get confused by things such as "should I transpose my matrix before passing it to the graphics pipeline, etc.". In the OpenGL specifications, matrices were/are written using the column-major order convention. Though the confusing part is that API calls such as glUniformMatrix4fv() accept coefficients mapped in memory in the row-major form. In conclusion, if in your code the coefficients of the matrices are laid out in memory in a row-major order, then you don't need to transpose the matrix. Otherwise you may have to. You "may" because in fact this is something you can control via a flag in the glUniformMatrix4fv() function itself. The third parameter of the function, which is set to GL_FALSE in the example above, indicates to the graphics API whether you wish the API to transpose the coefficients of the matrix for you. So even if your coefficients are mapped in memory in a column-major order, you don't necessarily need to transpose matrices specifically before using them with glUniformMatrix4fv(). What you can do instead is to set the transpose flag of glUniformMatrix4fv() to GL_TRUE. In fact things get even more confusing if you look at the order in which the matrices are used in the OpenGL vertex shader. You will notice we write \(Proj * View * vtx\) instead of \(vtx * View * Proj\). The former form is used when you deal with column-major matrices (because it implies that you multiply the matrix by the point rather than the point by the matrix, as explained in our lesson on Geometry). Conclusion? OpenGL assumes matrices are column-major (so this is how you need to use them in shaders) yet coefficients are mapped in memory using a row-major order form. Confusing? Remember that matrices in OpenGL (and vectors) use column-major order. Thus if you use row vectors like we do on Scratchapixel, you will need to transpose the matrix before setting up the matrix of the vertex shader (line 2). There are other ways of doing this in modern OpenGL but we will skip them in this lesson which is not devoted to that topic. This information can easily be found on the Web anyway.
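As a closing aside (an addition for illustration, not part of the original lesson), the entire transpose discussion rests on one linear-algebra identity: multiplying a column vector by a matrix M gives the same numbers as multiplying the corresponding row vector by the transpose of M, so switching between the two conventions amounts to a transpose somewhere. A tiny NumPy check:

import numpy as np

rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4))     # stand-in for a projection or model-view matrix
v = rng.standard_normal(4)          # stand-in for a homogeneous point

column_convention = M @ v           # M * v, column vector on the right
row_convention = v @ M.T            # v * M^T, row vector on the left
print(np.allclose(column_convention, row_convention))   # True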
CommonCrawl
How many ways are there to put 5 balls in 2 boxes if the balls are distinguishable but the boxes are not? Since the boxes are indistinguishable, there are 3 possibilities for arrangements of the number of balls in each box. Case 1: 5 balls in one box, 0 in the other box. We must choose 5 balls to go in one box, which can be done in $\binom{5}{5} = 1$ way. Case 2: 4 balls in one box, 1 in the other box. We must choose 4 balls to go in one box, which can be done in $\binom{5}{4} = 5$ ways. Case 3: 3 balls in one box, 2 in the other box. We must choose 3 balls to go in one box, which can be done in $\binom{5}{3} = 10$ ways. This gives us a total of $1 + 5 + 10 = \boxed{16}$ arrangements.
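As a sanity check (added here for illustration), the same count follows from the $2^5 = 32$ ways of placing the distinguishable balls into two labelled boxes: swapping the boxes pairs these assignments up with no fixed points, giving $32/2 = 16$. A brute-force enumeration confirms it:

from itertools import product

partitions = set()
for assignment in product((0, 1), repeat=5):          # which box each labelled ball goes into
    box0 = frozenset(i for i, b in enumerate(assignment) if b == 0)
    box1 = frozenset(i for i, b in enumerate(assignment) if b == 1)
    partitions.add(frozenset({box0, box1}))           # forget which box is which
print(len(partitions))                                # 16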
Math Dataset
\begin{document} \title{Infinite not contact isotopic embeddings in $(S^{2n-1},\xi_{\rm{std}})$} \begin{abstract} For $n\ge 4$, we show that there are infinitely many formally contact isotopic embeddings of $(ST^*S^{n-1},\xi_{\rm{std}})$ into $(S^{2n-1},\xi_{\rm{std}})$ that are not contact isotopic. This resolves a conjecture of Casals and Etnyre \cite{Nonsimple} except for the $n=3$ case. The argument does not appeal to the surgery formula of critical handle attachment for Floer theory/SFT. \end{abstract} \section{Introduction} Isocontact embeddings in dimension $3$, a.k.a.\ transverse links, are a fundamental object in the study of contact $3$-folds \cite{contact}. Their higher dimensional analogues were less studied until recently. A recent breakthrough in this direction is due to Casals and Etnyre \cite{Nonsimple}, who proved that isocontact embeddings of codimension $2$ submanifolds are not simple, i.e.\ being contact isotopic is finer than the underlying topological information (being formally contact isotopic). Such a rigidity result emphasizes the fundamental difference between isocontact embeddings of codimension $2$, where non-topological obstructions appear, and isocontact embeddings of codimension at least $4$, which are governed by an $h$-principle \cite{h-principle}, hence completely determined by their formal topological data. More precisely, Casals and Etnyre \cite[Theorem 1.1]{Nonsimple} showed that there are two contact embeddings of $(ST^*S^{n-1},\xi_{\rm{std}})$, i.e.\ the unit sphere bundle of the cotangent bundle of $S^{n-1}$ equipped with the canonical Liouville structure, into $(S^{2n-1},\xi_{\rm{std}})$ for $n\ge 3$ that are formally contact isotopic but not contact isotopic. Then they conjectured \cite[Conjecture 1.5]{Nonsimple} that there are infinitely many formally contact isotopic embeddings of the standard $ST^*S^{n-1}$ into $(S^{2n-1},\xi_{\rm{std}})$ that are not contact isotopic to each other. In this note, we give a proof of their conjecture for $n\ge 4$. \begin{theorem}\label{thm:main} For each $n\ge 4$, there exist infinitely many contact embeddings of the standard $ST^*S^{n-1}$ into $(S^{2n-1},\xi_{\rm{std}})$ that are formally isotopic but not contact isotopic. \end{theorem} Let us briefly recall the geometric constructions in \cite{Nonsimple}. Starting from a Legendrian sphere $\Lambda$ in $(S^{2n-1},\xi_{\rm{std}})$, using the Weinstein neighborhood theorem, we get a contact push-off, i.e.\ a contact embedding of the standard $ST^*S^{n-1}$ into $(S^{2n-1},\xi_{\rm{std}})$. When $\Lambda,\Lambda'$ are formally Legendrian isotopic, the contact embeddings are also formally isotopic. Then Casals and Etnyre considered the branched double cover of $S^{2n-1}$ with branching locus $ST^*\Lambda$, where contact isotopic embeddings will induce contactomorphic branched covers. The key fact in \cite{Nonsimple} is that the branched cover is precisely the contact manifold obtained by attaching a critical handle along the Legendrian sum $\Lambda \# \Lambda$. Therefore the problem of finding formally isotopic but not contact isotopic embeddings is reduced to finding formally isotopic Legendrians with different contact boundaries after the surgery along $\Lambda\#\Lambda$. Finally, Casals and Etnyre found a pair of such Legendrians $\Lambda,\Lambda'$ with $\Lambda'$ loose, such that the resulting contact manifolds are $ST^*S^n$ and $\partial (\mathrm{Flex}(T^*S^n))$, which are different by \cite{vanishing}.
\begin{remark} There is another method to distinguish contact submanifolds via studying contact homology coupled with the intersection data with the holomorphic hypersurface given by the symplectization of the contact submanifold. Such invariants were introduced in \cite{cote2020homological} by C\^ot\'e and Fauteux-Chapleau, who used them to provide an alternative proof that some of different contact submanifolds built in \cite{Nonsimple} via contact push-off are indeed not contact isotopic, reproving \cite[Theorem 1.1]{Nonsimple} for $(S^{4n-1},\xi_{\rm{std}})$ and $n>1$. \end{remark} With the geometric constructions above, to prove \cite[Conjecture 1.5]{Nonsimple}, Casals and Etnyre suggested to find infinitely many distinct but formally isotopic Legendrian spheres, then appeal to the surgery formula \cite{surgery} to show that they result in different contact boundaries after the surgery along $\Lambda \# \Lambda$. This is certainly plausible given the richness of formally isotopic Legendrian spheres on $(S^{2n-1},\xi_{\rm{std}})$. However, it is still a nontrivial task to compute their holomorphic curve invariants even using \cite{surgery}. Moreover, computing augmented invariants like symplectic cohomology as in \cite{surgery} is not sufficient to tell the differences of the contact boundaries. In fact, the latter is the main reason why we have to restrict to the case of $n\ge 4$ in this note. More precisely, we solve the $n\ge 4$ case by applying their geometric construction to the Legendrian spheres arising from the construction of exotic $T^*S^n$ by Eliashberg, Ganatra and Lazarev \cite{flexible}. More precisely, those exotic $T^*S^n$ are constructed from attaching a critical handle to formally isotopic Legendrian spheres that are not Legendrian isotopic. Those Legendrian spheres are boundaries of flexible Lagrangians of different diffeomorphism types in ${\mathbb C}^n$, whose existence is confirmed by $h$-principles in \cite{flexible}. The resulting contact manifolds are distinguished by their positive symplectic cohomology, which is a contact invariant for those asymptotically dynamically convex manifolds admitting Weinstein fillings \cite{ADC}. We note here that our proof does not rely on the surgery formula for critical handle attachments. In the $n=3$ case, even though we do not find infinitely many not contact isotopic embeddings of $ST^*S^2$, we do have at least one more that is different from the two examples from \cite{Nonsimple}. \begin{proposition}\label{prop:three} There are three formally contact isotopic embeddings of $ST^*S^2$ into $(S^5,\xi_{\rm{std}})$ that are pairwisely not contact isotopic. \end{proposition} \subsection*{Acknowledgments} The author would like to thank Roger Casals for his comments and interests, and Ruizhi Huang for helpful conversations. \section{Proof of the $n\ge 4$ case} The starting point is the flexible Lagrangian (with Legendrian boundary) $\hat{L}$ in the unit ball $\mathbb{D}^n\subset \mathbb{C}^n$ in the construction of the exotic $T^*S^n$ \cite[Theorem 4.7]{flexible}. Here we list the properties of them. \begin{enumerate} \item $\hat{L}_k$ is a closed manifold $L_k$ minus a disk. The Legendrian boundary of $\partial \hat{L}_k$ is formally Legendrian isotopic to the standard unknot in $(S^{2n-1},\xi_{\rm{std}})$. \item $\hat{L}_k$ is a flexible Lagrangian. In particular, when we attach a critical handle along $\partial \hat{L}_k$, we get an exotic $T^*S^n$, which is obtained from $T^*L_k$ and a flexible Weinstein cobordism. 
\item $\pi_1(L_k)=0$ and $H^2(L_k)={\mathbb Z}^{2k}$. In view of \cite[Theorem 4.7]{flexible}, to have such $\widehat{L}_k$, in particular, to have $\partial \hat{L}_k$ is formally Legendrian isotopic to the standard unknot in $(S^{2n-1},\xi_{\rm{std}})$, we require $L_k$ to have the following properties. \begin{enumerate} \item $L_k$ is stably trivializable, which will imply that $TL_k\otimes {\mathbb C}$ is a trivial complex bundle. \item $\chi(L_k)=2$ when $n$ is even. \item $\chi_{\frac{1}{2}}(L_k):=\sum_{i=0}^m \rank H_i(L_k)=1\mod 2$ when $n=2m+1>3$. \end{enumerate} We can build such $L_k$ as the boundary of a regular neighborhood of an embedding of a CW complex $W_k$ in $\mathbb{R}^{n+1}$ with $2k$ $2$-cells and $2k$ $3$-cells with trivial attaching maps. Then $L_k$ is stably trivializable. When $n$ is even, we have $\chi(L_k)=2\chi(W_k)=2$. Moreover, we have $\pi_i(L_k)\to \pi_i(W_k)$ is an isomorphism when $i<\frac{n+1}{2}$. In particular, we have $\pi_1(L_k)=0$ and $H^2(L_k)={\mathbb Z}^{2k}$. When $n>5$ odd, we have $\chi_{\frac{1}{2}}(L_k)=1+2k+2k=1\mod 2$ and when $n=5$, we have $\chi_{\frac{1}{2}}(L_k)=1+2k=1\mod 2$. In other words, all the conditions can be arranged. \end{enumerate} \begin{proof}[Proof of Theorem \ref{thm:main}] Let $\Lambda_k$ denote the Legendrian boundary of $\hat{L}_k$. We claim the contact manifold $Y_k$ obtained as the boundary of the Weinstein domain given by attaching a handle along $\Lambda_k\#\Lambda_k$ is different for different $k$. Then the theorem follows from the same proof of \cite[Theorem 1.1]{Nonsimple}. First note that $\hat{L}_k\sqcup \hat{L}_k$ is a flexible Lagrangian in $D^{2n}\natural D^{2n}$, where $\natural$ is the boundary connected sum. By \cite{ambient}, there is a Lagrangian cobordism from $\Lambda_k\sqcup\Lambda_k$ to $\Lambda_k\#\Lambda_k$ in the symplectization of $(S^{2n-1},\xi_{\rm{std}})$, such that the Lagrangian cobordism is also flexible. More precisely, the Legendrian sum, i.e.\ the ambient $0$-surgery in \cite{ambient}, can be understood as a two-steps procedure: we first attach a $1$-handle to $(S^{2n-1},\xi_{\rm{std}})$, where the attaching isotropic sphere $S^0$ is two points from each component of $\Lambda_k\sqcup\Lambda_k$. The resulting Weinstein cobordism contains a Lagrangian cobordism from $\Lambda_k\sqcup\Lambda_k$ to the connected sum. Then we attach another $2$-handle, where the attaching circle is the union of the core of the $1$-handle and the isotropic arc in the construction of the Legendrian sum, without changing the Lagrangian cobordism. This handle will cancel the previously attached $1$-handle symplectically, hence we get a flexible Lagrangian cobordism from $\Lambda_k\sqcup\Lambda_k$ to $\Lambda_k\#\Lambda_k$ in the symplectization of $(S^{2n-1},\xi_{\rm{std}})$. As a consequence, $\Lambda_k\#\Lambda_k$ has a flexible Lagrangian filling by $\widehat{L_k\#L_k}$, i.e.\ $L_k\# L_k$ minus a disk. Therefore the resulted contact manifold $Y_k$, which is almost contactomorphic to $(S^*TS^n,\xi_{\rm{std}})$, has a Weinstein filling $W_k$ from attaching subcritical/flexible handles to $T^*(L_k\#L_k)$. \begin{claim} The flexible cobordism between $ST^*(L_k\#L_k)$ and $Y_k$ has no $1$-handles nor $n$-handles. \end{claim} \begin{proof} Let $C_k$ denote the cobordism. By van Kampen theorem, we have $\pi_1(C_k)=\pi_1(ST^*(L_k\#L_k))=0$. 
Now since $L_k\#L_k\to W_k$ induces an isomorphism on $n$-th homology, the long exact sequences of $(W_k,T^*(L_k\#L_k))$ and excision implies that $H_n(C_k,ST^*(L_k\#L_k))=H_{n-1}(C_k,ST^*(L_k\#L_k))=0$. By Smale's simplification of the handle representation, we know that $C_k$ has a handle decomposition without $1$-handles nor $n$-handles since $\dim C_k=2n\ge 8$ \cite[\S 7, 8]{MR0190942}. Finally, since $C_k$ is flexible, such topological handle decomposition can be presented in a symplectic way \cite[Chapter 14]{MR3012475}. \end{proof} \begin{claim} $Y_k$ is asymptotically dynamically convex. \end{claim} \begin{proof} Since $\dim n \ge 4$, $ST^*(L_k\#L_k)$ is tautologically asymptotically dynamically convex. $ST^*(L_k\#L_k)$ is simply connected, the attachment of $2,\ldots, n-1$-subcritical handles does not change the asymptotically dynamical convexity by \cite[Theorem 3.14]{ADC}. In view of the first claim, we have $Y_k$ is also asymptotically dynamically convex. \end{proof} \begin{claim} $SH^{n-1}_+(W_k;{\mathbb Q})$ are different for different $k$. \end{claim} \begin{proof} Since $L_k$ is stably trivializable, $L_k$ is spin. Hence $L_k\#L_k$ is also spin, as $w_2(L_k\# L_k)=w_2(L_k)\oplus w_2(L_k) = 0 \in H^2(L_k;{\mathbb Z}/2)\oplus H^2(L_k;{\mathbb Z}/2)=H^2(L_k\#L_k;{\mathbb Z}/2)$. Since $W_k$ is obtained from $T^*(L_k\#L_k)$ by attaching subcritical handles, by \cite{subcritical}, we have $SH^*(W_k;{\mathbb Q})=SH^*(T^*(L_k\#L_k);{\mathbb Q})$. Then by the Viterbo isomorphism \cite{MR2190223,MR3444367,MR2276534,MR1726235}, we have $SH^*(W_k;{\mathbb Q})=SH^*(T^*(L_k\#L_k);{\mathbb Q})=H_{n-*}(\Lambda (L_k\# L_k);{\mathbb Q})$ and $SH^*_+(W_k;{\mathbb Q})=H_{n-*}(\Lambda (L_k\# L_k), L_k\#L_k;{\mathbb Q})$. Since $L_k\#L_k$ is simply connected, Sullivan's minimal model $V_k$ of $L_k\# L_k$ has exactly $4k=\rank H^2(L_k\#L_k;{\mathbb Q})$ generators $x_1,\ldots,x_{4k}$ in degree $2$, which are also closed. Then by \cite{MR455028}, $H_{1}(\Lambda(L_k\# L_k), L_k\#L_k;{\mathbb Q})=H^{1}(\Lambda(L_k\# L_k), L_k\#L_k;{\mathbb Q})=H^1(\bigwedge(V_k\oplus sV_k)/\bigwedge V_k)={\mathbb Q}^{4k}$, generated by $sx_1,\ldots,sx_{4k}\in sV$, where $sV=V[1]$. \end{proof} Now since $SH^*_+(W_k;{\mathbb Q})$ is a contact invariant for those asymptotically dynamically convex manifolds with Weinstein fillings \cite[Proposition 3.8]{ADC}, we know that $Y_k$ are different contact manifolds, and the theorem follows. \end{proof} \section{Discussion of the $n=3$ case} The fundamental difficulty to apply the above argument to the $n=3$ case is that we never have asymptotically dynamical convexity unless $L=S^3$. \begin{proposition}\label{prop:not} Assume the Liouville domain $W$ has vanishing first Chern class and $\partial W$ is simply connected. If $SH^{m}_{+,S^1}(W)\ne 0$ for some $m\ge 2n-3$, then $\partial W$ is not asymptotically dynamically convex. \end{proposition} \begin{proof} Assume $\partial W$ is asymptotically dynamically convex, i.e.\ there are nesting exact subdomains $\ldots \subset W_n\subset \ldots W_1=W$ with $W_i$ Liouville homotopic to $W$, such that there exist $D_1<\ldots < D_n<\ldots \to \infty$ with the Reeb orbits on $\partial W_k$ of period up to $D_k$ are non-degenerate and $\mu_{CZ}+n-3>0$. By \cite{MR3734608}, there is a spectral sequence converging to $SH^*_{+,S^1}(W_k)$ with the first page spanned by Reeb orbits of $\partial W_k$ with grading $n-\mu_{CZ}$. 
The same spectral sequence holds for filtered $S^1$ equivariant positive symplectic cohomology $SH^{*,<D_k}_{+,S^1}(W_k)$ generated by Reeb orbits of period up to $D_k$ and is compatible with continuation maps and Viterbo transfer maps. As a consequence, $SH^{*,<D_k}_{+,S^1}(W_k)$ is supported in grading $*<2n-3$, and the same holds for $SH^*_{+,S^1}(W)=\varinjlim SH^{*,<D_k}_{+,S^1}(W_k)$, which contradicts the condition. \end{proof} \begin{proposition}\label{prop:ADC} Let $W(L)$ denote the exotic $T^*S^3$ obtained from $T^*L$ for any oriented $3$-fold $L$ as in \cite[Theorem 4.7]{flexible}, i.e.\ $L\ne S^3$. Then $\partial W(L)$ is not asymptotically dynamically convex if $L\ne S^3$. \end{proposition} \begin{proof} Since $L\ne S^3$, $L$ is not simply connected. If the contact form on $ST^*L$ is induced from a Riemannian metric, then there is a non-trivial geodesic loop in each non-trivial conjugacy class of $\pi_1(L)$ minimizing the length (or the energy functional). Under non-degeneracy assumptions, the Morse-index of such geodesic loop, i.e.\ the Conley-Zehnder index of the corresponding Reeb orbit, is $0$, which is a borderline failure for asymptotically dynamical convexity. Moreover, this loop contributes non-trivially in $H^{S^1}_0(\Lambda L)$, the $S^1$ equivariant homology of the free loop space. However, it is important to note that such Reeb orbit is not contractible, hence is not considered in the definition of asymptotically dynamical convexity for $T^*L$. On the other hand, those homotopy classes will be trivialized after attaching the flexible cobordism to obtain $W(L)$. We claim the unique trivialization $\eta$ of $\det_{{\mathbb C}}W(L)$ restricted to $L$ is the natural trivialization $\det_{{\mathbb C}} T^*L = \det_{\mathbb{R}}L \otimes {\mathbb C}$ used to obtain the ${\mathbb Z}$-graded isomorphism $SH^*_{S^1}(T^*L)=H^{S^1}_{n-*}(\Lambda L)$. This can be seen from the construction of $W(L)$ as follows. It follows from \cite[Corollary 4.5]{flexible} that $\eta$ restricted $\widehat{L}\subset {\mathbb C}^n$ is the restriction of the natural one. Since $H^1(\partial \widehat{L})=0$, there is a unique way glue the trivialization on $\widehat{L}$ and the core of the critical handle. Hence the claim follows. As a consequence, we have $SH^*_{+,S^1}(W(L))=H^{S^1}_{3-*}(\Lambda L, L)$ as ${\mathbb Z}$-graded spaces by \cite{surgery}, where the positive $S^1$-equivariant symplectic cohomology is graded by $3$ minus the Conley-Zehnder index. In particular, we have $SH^{3}_{+,S^1}(W(L))\ne 0$, hence $\partial W(L)$ is not asymptotically dynamically convex by Proposition \ref{prop:not}. \end{proof} \begin{remark} It is also not clear if we actually need critical handles to build $W(L)$ from $T^*L$, as $ST^*L$ and the flexible cobordism are no longer simply connected. Presumably, there should be a definitive answer to this question if one takes a closer look at the topological type of the flexible cobordism. \end{remark} The absence of asymptotically dynamical convexity makes it hard to tell the contact boundaries apart. However, in some special cases, we can still argue the contact boundaries are different. \begin{proposition}\label{prop:lens} Let $W_k$ denote the exotic $T^*S^3$ obtained from $T^*L(k,1)$ for the lens space $L(k,1)$ in \cite[Theorem 4.7]{flexible}. Then $\partial W_k$ is different from $\partial W_{k'}$ if $k\ne k'$. 
\end{proposition} \begin{proof} Let $g$ be the round metric on $L(k,1)$, then for each nontrivial homotopy class of loops, the closed geodesics in this homotopy class is parameterized by a family of $S^2$ indexed by ${\mathbb N}$, with only one of them (i.e.\ the simple loops) realizing the local minimum of the energy functional. Then after a small perturbation of the contact form induced from the round metric, there is a unique Reeb orbit $\gamma_i$, for $i\in {\mathbb Z}/k \backslash \{0\}$, with Conley-Zehnder index $0$ in each nontrivial homotopy class of loops, and all others have Conley-Zehnder indices at least $2$. Moreover, $\gamma_i$ contributes to the nontrivial class in $SH^{3}_{+,S^1}(T^*L(k,1))=H^{S^1}_{0}(\Lambda L(k,1),L(k,1))$ and $\{\gamma_1,\ldots,\gamma_{k-1}\}$ span a basis of the space. Now $W_k$ is obtained from $T^*L(k,1)$ by attaching flexible handles. Following the argument in Proposition \ref{prop:ADC}, even though we have $2$-handles in the flexible cobordism, (the proof of) \cite[Theorem 3.14]{ADC} can still be applied, as the trivialization of complex determinant bundle can be extended naturally. As a consequence, after attaching all the subcritical handles, we have a contact form on the boundary such that the all Reeb orbits have Conley-Zehnder indices at least $2$ except for $\gamma_i$\footnote{Strictly speaking, we need to work with a nesting family of Liouville domains with an increasing sequence of period threshold as in \cite{ADC}, we omit this for simplicity. Also by a bit abuse of language, $\gamma_i$ here are the corresponding orbits after the surgery, as we can assume they are away from the surgery region.}. Finally, we attach the critical flexible handles, by \cite[Theorem 3.15]{ADC}, the new contact boundary has new Reeb orbits with Conley-Zehnder index at least $1$. However by the proof of \cite[Theorem 3.15]{ADC}, those orbits with Conley-Zehnder index $1$ are from single Reeb chords with arbitrarily small period from the small zig-zags on loose Legendrians. In particular, we can assume the corresponding Reeb orbits have period smaller that of $\gamma_i$. As a consequence, $\gamma_i$ still contributes non-trivially to $SH^{3}_{+,S^1}(W)$ and spans the space ($\simeq {{\mathbb Q}}^{k-1}$ if we use ${\mathbb Q}$ as the coefficient), for any Weinstein filling $W$ of $\partial W_k$, as they can not be eliminated by those orbits with Conley-Zehnder index $1$. As a consequence, $\partial W_k\ne \partial W_{k'}$ for $k\ne k'$ \end{proof} \begin{corollary} For any $n\ge 3$, there exist infinitely many exotic $T^*S^n$ with the standard almost Weinstein structure, such that the contact boundaries are also pairwisely different. \end{corollary} \begin{remark} For $n\ge 9$, Zhao \cite{zhao} proved that for any $N$, there exist $N$ different $2n$-dimensional Weinstein domains that have the same almost Weinstein structure, and the contact boundaries are pairwisely different. \end{remark} Another difficulty is that $\pi_1(L\#L))$ has infinitely many conjuacy classes as long as $L\ne S^3$, hence we can not apply the argument in Proposition \ref{prop:lens} to $L\#L$\footnote{Moreover, there may not be any nice Reeb flow on $ST^*(L\#L)$. In view of Meyer's theorem \cite[Theorem 19.4]{MR0163331}, a nice geodesic flow requires, for example, positive Ricci curvature, which in dimension 3, only happens on quotients of $S^3$.}. Moreover, we do not know that the exotic $T^*S^3$ given by $W(L\#L)$ are different from each other, let alone the contact boundary. 
The former question, in principle, can be answered by understanding $H_*(\Lambda (L\#L))$ along with the rich structures on it. \begin{proof}[Proof of Proposition \ref{prop:three}] Let $L\ne S^3$, it suffices to prove that $\partial W(L\# L)$ is different from $ST^*S^3$ and $\partial (\mathrm{Flex}(T^*S^3))$. This is clear from Proposition \ref{prop:ADC}, as the latter two are both asymptotically dynamically convex. \end{proof} \Addresses \end{document}
arXiv
Analytical regularization In physics and applied mathematics, analytical regularization is a technique used to convert boundary value problems which can be written as Fredholm integral equations of the first kind involving singular operators into equivalent Fredholm integral equations of the second kind. The latter may be easier to solve analytically and can be studied with discretization schemes like the finite element method or the finite difference method because they are pointwise convergent. In computational electromagnetics, it is known as the method of analytical regularization. It was first used in mathematics during the development of operator theory before acquiring a name.[1] Method Analytical regularization proceeds as follows. First, the boundary value problem is formulated as an integral equation. Written as an operator equation, this will take the form $GX=Y$ with $Y$ representing boundary conditions and inhomogeneities, $X$ representing the field of interest, and $G$ the integral operator describing how Y is given from X based on the physics of the problem. Next, $G$ is split into $G_{1}+G_{2}$, where $G_{1}$ is invertible and contains all the singularities of $G$ and $G_{2}$ is regular. After splitting the operator and multiplying by the inverse of $G_{1}$, the equation becomes $X+G_{1}^{-1}G_{2}X=G_{1}^{-1}Y$ or $X+AX=B$ which is now a Fredholm equation of the second type because by construction $A$ is compact on the Hilbert space of which $B$ is a member. In general, several choices for $\mathbf {G} _{1}$ will be possible for each problem.[1] References 1. Nosich, A.I. (1999). "The method of analytical regularization in wave-scattering and eigenvalue problems: foundations and review of solutions". IEEE Antennas and Propagation Magazine. Institute of Electrical and Electronics Engineers (IEEE). 41 (3): 34–49. Bibcode:1999IAPM...41...34N. doi:10.1109/74.775246. ISSN 1045-9243. • Santos, F C; Tort, A C; Elizalde, E (10 May 2006). "Analytical regularization for confined quantum fields between parallel surfaces". Journal of Physics A: Mathematical and General. IOP Publishing. 39 (21): 6725–6732. arXiv:quant-ph/0511230. Bibcode:2006JPhA...39.6725S. doi:10.1088/0305-4470/39/21/s73. ISSN 0305-4470. S2CID 18855340. • Panin, Sergey B.; Smith, Paul D.; Vinogradova, Elena D.; Tuchkin, Yury A.; Vinogradov, Sergey S. (5 January 2009). "Regularization of the Dirichlet Problem for Laplace's Equation: Surfaces of Revolution". Electromagnetics. Informa UK Limited. 29 (1): 53–76. doi:10.1080/02726340802529775. ISSN 0272-6343. S2CID 121978722. • Kleinert, H.; Schulte-Frohlinde, V. (2001), Critical Properties of φ4-Theories, pp. 1–474, ISBN 978-981-02-4659-4, archived from the original on 2008-02-26, retrieved 2011-02-24, Paperpack ISBN 978-981-02-4659-4 (also available online). Read Chapter 8 for Analytic Regularization. External links • E-Polarized Wave Scattering from Infinitely Thin and Finitely Width Strip Systems • Tuchkin, Yu. A. (2002). "Analytical Regularization Method for Wave Diffraction by Bowl-Shaped Screen of Revolution". Ultra-Wideband, Short-Pulse Electromagnetics 5. Boston: Kluwer Academic Publishers. pp. 153–157. doi:10.1007/0-306-47948-6_18. ISBN 0-306-47338-0.
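A purely finite-dimensional toy analogue may help fix the algebra (an illustrative sketch added here; the method itself concerns singular integral operators, not small matrices). Choosing an invertible $G_{1}$ and a remainder $G_{2}$, one can form $A=G_{1}^{-1}G_{2}$ and $B=G_{1}^{-1}Y$ and solve the second-kind equation $X+AX=B$, recovering the solution of the original equation:

import numpy as np

rng = np.random.default_rng(7)
n = 5
G1 = 3.0 * np.eye(n)                      # stands in for the invertible part carrying the singularities
G2 = 0.1 * rng.standard_normal((n, n))    # stands in for the regular remainder
G = G1 + G2
Y = rng.standard_normal(n)

A = np.linalg.solve(G1, G2)               # A = G1^{-1} G2
B = np.linalg.solve(G1, Y)                # B = G1^{-1} Y
X = np.linalg.solve(np.eye(n) + A, B)     # second-kind form: (I + A) X = B

print(np.allclose(G @ X, Y))              # True: same solution as G X = Y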
Wikipedia
Abstract: We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean ($\ell_2$) and $\ell_\infty$ spaces. Our work provides a more general approach for proving lower bounds in the setting of parallel convex optimization.
CommonCrawl
\begin{definition}[Definition:Golden Mean/Definition 1] Let a line segment $AB$ be divided at $C$ such that: :$AB : AC = AC : BC$ Then the '''golden mean''' $\phi$ is defined as: :$\phi := \dfrac {AB} {AC}$ \end{definition}
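A short consequence of this definition (a standard derivation, added here for illustration): since $AB = AC + BC$, the defining proportion determines the numerical value of $\phi$:
:$\phi = \dfrac {AB} {AC} = \dfrac {AC + BC} {AC} = 1 + \dfrac {BC} {AC} = 1 + \dfrac 1 \phi$
so that:
:$\phi^2 = \phi + 1$
and hence:
:$\phi = \dfrac {1 + \sqrt 5} 2 \approx 1.618$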
ProofWiki
\begin{document} \setlength{\baselineskip}{15pt} \title{A characterization of finite $p$-groups by their Schur multiplier} \author{Sumana Hatui} \address{School of Mathematics, Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019, INDIA} \address{\& Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400085, India} \email{[email protected], [email protected]} \subjclass[2010]{20D15, 20E34} \keywords{Schur Multiplier, Finite $p$-groups} \begin{abstract} Let $G$ be a finite $p$-group of order $p^n$ and $M(G)$ be its Schur multiplier. It is a well-known result of Green that $|M(G)|= p^{\frac{1}{2}n(n-1)-t(G)}$ for some $t(G) \geq 0$. In this article we classify non-abelian $p$-groups $G$ of order $p^n$ for $t(G)=\log_p(|G|)+1$. \end{abstract} \maketitle \section{Introduction} The Schur multiplier $M(G)$ of a group $G$ was introduced by Schur \cite{IS1} in 1904 while studying projective representations of groups. In 1956, Green \cite{JG} gave an upper bound $p^{\frac{1}{2}n(n-1)}$ on the order of the Schur multiplier $M(G)$ for $p$-groups $G$ of order $p^n$. So there is an integer $t(G) \geq 0$ such that $|M(G)|=p^{\frac{1}{2}n(n-1)-t(G)}$. This integer $t(G)$ is called the corank of $G$, as defined in \cite{EW}. It is an interesting problem to classify the structure of all non-abelian $p$-groups $G$ by the order of the Schur multiplier $M(G)$, i.e., when $t(G)$ is known. Several authors studied this problem for various values of $t(G)$. First Berkovich \cite{BY} and Zhou \cite{ZH} classified all groups $G$ for $t(G)=0,1,2$. Ellis \cite{EG} also classified groups $G$ for $t(G)=0,1,2,3$ by a different method. After that several authors classified the groups of order $p^n$ for $t(G)=4,5,6$ in \cite{PN3,PN1, SHJ}. Peyman Niroomand \cite{PN} improved Green's bound and showed that for non-abelian $p$-groups of order $p^n$, $|M(G)|=p^{\frac{1}{2}(n-1)(n-2)+1-s(G)}$, for some $s(G) \geq 0$. This integer $s(G)$ is called the generalized corank of $G$, defined in \cite{PN5}. The structure of non-abelian $p$-groups for $s(G) = 0,1,2$ has been determined in \cite{PN2,PN4}, which is the same as classifying groups $G$ for $t(G)=\log_p(|G|)-2, \log_p(|G|)-1,\log_p(|G|)$ respectively. In this paper, we take this line of investigation and classify all non-abelian finite $p$-groups $G$ for which $t(G) = \log_p(|G|)+1$, which is the same as classifying $G$ for $s(G)=3$, i.e., $|M(G)|=p^{\frac{1}{2}n(n-3)-1}$. Before stating our main result we set some notation. By $ES_p(p^3)$ we denote the extra-special $p$-group of order $p^3$ having exponent $p$. By $\mathbb{Z}_p^{(k)}$ we denote $\mathbb{Z}_p \times \mathbb{Z}_p \times \cdots \times \mathbb{Z}_p$ ($k$ times). For a group $G$, $\gamma_i(G)$ denotes the $i$-th term of the lower central series of the group $G$ and $G^{ab}$ denotes the quotient group $G/\gamma_2(G)$. We denote $\gamma_2(G)$ by $G'$. A group $G$ is called a capable group if there exists a group $H$ such that $G \cong H/Z(H)$, where $Z(H)$ denotes the center of $H$. We denote the epicenter of a group $G$ by $Z^*(G)$, which is the smallest central subgroup of $G$ such that $G/Z^*(G)$ is capable. James \cite{RJ} classified all $p$-groups of order $p^n$ for $n \leq 6$ up to isoclinism; these classes are denoted by $\Phi_k$. We use his notation throughout this paper. Our main theorem is the following: \begin{thm}{\bf(Main Theorem)} Let $G$ be a finite non-abelian $p$-group of order $p^n$ with $t(G)=\log_p(|G|)+1$.
Then for odd prime $p$, $G$ is isomorphic to one of the following groups: \begin{enumerate} \item $\Phi_2(22)= \langle{\alpha,\alpha_1,\alpha_2 \mid [\alpha_1,\alpha]=\alpha^{p}=\alpha_2, \alpha_1^{p^2}=\alpha_2^p=1\rangle}$, \item $ \Phi_3(211)a = \langle{\alpha,\alpha_1,\alpha_2, \alpha_3 \mid [\alpha_1,\alpha]=\alpha_2, [\alpha_2,\alpha]=\alpha^p=\alpha_3 \alpha_1^{(p)}=\alpha_2^p=\alpha_3^p=1\rangle},$ \item $\Phi_3(211)b_r = \langle \alpha,\alpha_1,\alpha_2, \alpha_3 \mid [\alpha_1,\alpha]=\alpha_2, [\alpha_2,\alpha]=\alpha^p=\alpha_3,\alpha_1^{(p)}=\alpha_2^p=\alpha_3^p=1 \rangle $ \item $\Phi_2(2111)c =\Phi_2(211)c \times \mathbb{Z}_p$, where $\Phi_2(211)c = \langle \alpha,\alpha_1,\alpha_2 \mid [\alpha_1,\alpha]=\alpha_2, \alpha^{p^2}=\alpha_1^p=\alpha_2^p=1 \rangle$, \item $\Phi_2(2111)d = ES_p(p^3) \times \mathbb{Z}_{p^2}$, \item $\Phi_3(1^5)=\Phi_3(1^4) \times \mathbb{Z}_p$, where $\Phi_3(1^4) = \langle \alpha,\alpha_1,\alpha_2,\alpha_3 \mid [\alpha_i,\alpha]=\alpha_{i+1},\alpha^p=\alpha_i^{(p)}=\alpha_3^p=1(i=1,2)\rangle$, \item $\Phi_7(1^5)=\langle \alpha,\alpha_1,\alpha_2,\alpha_3,\beta \mid [\alpha_i,\alpha]=\alpha_{i+1},[\alpha_1,\beta]=\alpha_3, \alpha^p=\alpha_1^{(p)}=\alpha_{i+1}^p=\beta^p=1 (i=1,2)\rangle$, \item $\Phi_{11}(1^6)=\langle \alpha_1,\beta_1,\alpha_2,\beta_2,\alpha_3, \beta_3 \mid [\alpha_1,\alpha_2]=\beta_3, [\alpha_2,\alpha_3]=\beta_1, [\alpha_3,\alpha_1]=\beta_2,\alpha_i^{(p)}=\beta_i^p=1(i=1,2,3)\rangle$, \item $\Phi_{12}(1^6)=ES_p(p^3) \times ES_p(p^3)$, \item $\Phi_{13}(1^6)=\langle \alpha_1,\alpha_2,\alpha_3,\alpha_4,\beta_1,\beta_2 \mid [\alpha_i, \alpha_{i+1}]=\beta_i, [\alpha_2,\alpha_4]=\beta_2, \alpha_i^p=\alpha_3^p=\alpha_4^p=\beta_i^p=1(i=1,2)\rangle$, \item $\Phi_{15}(1^6)=\langle \alpha_1,\alpha_2,\alpha_3,\alpha_4,\beta_1,\beta_2 \mid [\alpha_i, \alpha_{i+1}]=\beta_i, [\alpha_3,\alpha_4]=\beta_1,[\alpha_2,\alpha_4]=\beta_2^g, \alpha_i^p =\alpha_3^p = \alpha_4^p =\beta_i^p=1(i=1,2)\rangle$, where $g$ is non-quadratic residue modulo $p$, \item $(\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p) \times \mathbb{Z}_p^{(2)}$.\\ Moreover for $p=2$, $G$ is isomorphic to one of the following groups: \item $\mathbb{Z}_2^{(4)} \rtimes \mathbb{Z}_2$, \item $\mathbb{Z}_2 \times((\mathbb{Z}_4 \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2)$, \item $\mathbb{Z}_4 \rtimes \mathbb{Z}_4$, \item $ D_{16}$, the Dihedral group of order $16$.\\ \end{enumerate} \end{thm} \section{Preliminaries} In this section we list following results which will be used in the proof of our main theorem. \begin{thm}$($\cite[Theorem 4.1]{MRRR}$)$\label{J} Let $G$ be a finite group and $K$ a central subgroup of $G$. Set $A = G/K$. Then $|M(G)||G'\cap K|$ divides $|M(A)| |M(K)| |A^{ab} \otimes K|$. \end{thm} The following result gives $M(G)$ of non-abelian $p$-groups $G$ of order $p^4$ for $|G'|=p$, follows from \cite{KO} and for $|G'|=p^2$, follows from \cite[page. 4177]{EG} . \begin{thm}\label{SHHH} Let $G$ be a non-abelian $p$-group of order $p^4$, $p$ odd.\\ (i) For $|G'|=p$, $G \cong \Phi_2(211)a,$ $\Phi_2(1^4),$ $\Phi_2(31), \Phi_2(22),$ $\Phi_2(211)b$ or $\Phi_2(211)c$. $M(G) \cong \mathbb{Z}_p \times \mathbb{Z}_p,$ $\mathbb{Z}_p^{(4)}$, ${1}$, $\mathbb{Z}_p,$ $\mathbb{Z}_p \times \mathbb{Z}_p,$ or $\mathbb{Z}_p \times \mathbb{Z}_p$ respectively. \\ (ii) For $|G'|=p^2$, $G \cong \Phi_3(211)a,$ $\Phi_3(211)b_r$ or $\Phi_3(1^4)$. $M(G) \cong \mathbb{Z}_p$, $\mathbb{Z}_p$ or $\mathbb{Z}_p \times \mathbb{Z}_p$ respectively. 
\end{thm} Now we explain a method of Blackburn and Evens \cite{BE} for computing the Schur multiplier of $p$-groups $G$ of class $2$ with $G/G'$ elementary abelian. We can view $G/G'$ and $G'$ as vector spaces over $GF(p)$, which we denote by $V, W$ respectively. Let $v_1,v_2 \in V$ be such that $v_i=g_iG', i \in \{1,2\}$, and consider a bilinear mapping $(,)$ of $V$ into $W$ defined by $(v_1,v_2)=[g_1,g_2]$. Let $X_1$ be the subspace of $V \otimes W$ spanned by all elements of the type\\ \centerline{$v_1 \otimes (v_2,v_3) + v_2 \otimes (v_3,v_1) + v_3 \otimes (v_1,v_2), $ for $v_1,v_2,v_3 \in V$.} Consider a map $f:V \rightarrow W$ given by $f(gG')=g^p$, $g \in G$. We denote by $X_2$ the subspace spanned by all $v \otimes f(v)$ for $v \in V$. Let $X_1+X_2$ be denoted by $X$, which will be used throughout this paper without further reference. Now the following result follows from \cite{BE}. \begin{thm}\label{BEE} Let $G$ be a $p$-group of class $2$ with $G/G'$ elementary abelian. Then $|M(G)|/|N|=|V \wedge V|/|W|$, where $N \cong (V \otimes W)/X$. \end{thm} Let $G$ be a finite $p$-group of nilpotency class $3$ with centre $Z(G)$. Set $\bar{G}=G/Z(G)$. Define a homomorphism $\psi_2 :\bar{G}^{ab} \otimes \bar{G}^{ab} \otimes \bar{G}^{ab} \rightarrow \frac{\gamma_2(G)}{\gamma_3(G)} \otimes \bar{G}^{ab}$ such that \\ $\psi_2(\bar{x}_1 \otimes \bar{x}_2 \otimes \bar{x}_3)=[x_1,x_2]_{\gamma} \otimes \bar{x}_3 + [x_2,x_3]_{\gamma} \otimes \bar{x}_1 + [x_3,x_1]_{\gamma} \otimes \bar{x}_2$, where $\bar{x}$ denotes the image in $\bar{G}$ of the element $x \in G$ and $[x,y]_{\gamma}$ denotes the image in $\frac{\gamma_2(G)}{\gamma_3(G)}$ of the commutator $[x,y] \in G$. Define another homomorphism $\psi_3 : \bar{G}^{ab} \otimes \bar{G}^{ab} \otimes \bar{G}^{ab} \otimes \bar{G}^{ab} \rightarrow \gamma_3(G) \otimes \bar{G}^{ab}$ such that \[\psi_3(\bar{x}_1 \otimes \bar{x}_2 \otimes \bar{x}_3 \otimes \bar{x}_4)=[[x_1,x_2],x_3] \otimes \bar{x}_4 + [x_4,[x_1,x_2]] \otimes \bar{x}_3 + [[x_3,x_4],x_1]\otimes \bar{x}_2 +[x_2,[x_3,x_4]] \otimes \bar{x}_1.\] \begin{thm}$($\cite[Proposition 1]{EW} and \cite{E}$)$\label{RM} Let $G$ be a finite $p$-group of nilpotency class $3$. With the notation above, we have \\ \centerline{$|M(G)||\gamma_2 (G)||Image(\psi_2)||Image(\psi_3)| \leq |M(G^{ab})||\frac{\gamma_2(G)}{\gamma_3(G)} \otimes \bar{G}^{ab}||\gamma_3(G)\otimes \bar{G}^{ab}|$.} \end{thm} \section{Proof of Main Theorem} In this section we prove our main theorem. The proof is divided into several parts depending on the structure of the group. We start with the following lemma, which establishes the result for groups of order $p^n$ for $n \leq 5$. \begin{lemma} Let $G$ be a non-abelian $p$-group of order $p^n$ $(n \leq 5)$ with $t(G)=\log_p(|G|)+1$. Then for odd prime $p$, $G \cong \Phi_2(22), \Phi_3(211)a, \Phi_3(211)b_r, \Phi_2(2111)c,$ $\Phi_2(2111)d, \Phi_3(1^5)$ or $\Phi_7(1^5)$, and for $p=2$, $G \cong \mathbb{Z}_2^{(4)} \rtimes \mathbb{Z}_2, \mathbb{Z}_2 \times((\mathbb{Z}_4 \times \mathbb{Z}_2) \rtimes \mathbb{Z}_2), D_{16}$ or $\mathbb{Z}_4 \rtimes \mathbb{Z}_4$. \end{lemma} \begin{proof} For groups of order $p^4$ the result follows from Theorem \ref{SHHH}. For groups of order $p^5$ the result follows from \cite{SHJ} and \cite[Main Theorem]{SIX}. For $p=2$, the result follows from computations with the HAP \cite{HAP} package of GAP \cite{GAP}. $\Box$ \end{proof} The following lemma easily follows from \cite[Main Theorem]{PN}. \begin{lemma} There is no non-abelian $p$-group $G$ with $|G'|\geq p^4$ and $t(G) =\log_p(|G|)+1$. 
\end{lemma} \begin{lemma}\label{EA} Let $G$ be a non-abelian $p$-group of order $p^n$ $(n \geq 6)$ with $t(G) \leq \log_p(|G|)+1$. Then $G^{ab}$ is an elementary abelian $p$-group. \end{lemma} \begin{proof} Let $|G'|=p^k$. Suppose that $G^{ab}$ is not elementary abelian, and let $\bar{G}:=G/Z(G)$ be a $\delta$-generator group. Then $\delta \leq (n-k-1)$ and $|M(G^{ab})| \leq p^{\frac{1}{2}(n-k-1)(n-k-2)}$ by \cite[Lemma 2.2]{PN}. Note that $|\frac{\gamma_2(G)}{\gamma_3 (G)} \otimes \bar{G}^{ab}||\frac{\gamma_3 (G)}{\gamma_4 (G)} \otimes \bar{G}^{ab}| \cdots |\gamma_c (G)\otimes \bar{G}^{ab}|=|(\frac{\gamma_2 (G)}{\gamma_3 (G)} \oplus \frac{\gamma_3 (G)}{\gamma_4 (G)} \oplus \cdots \gamma_c (G)) \otimes\bar{G}^{ab} | \leq p^{k\delta}$ and $|$Image$(\psi_2)| \geq p^{\delta-2}$. Now it follows from \cite[Proposition 1]{EW} that \\ \centerline{$|M(G)| \leq p^{\frac{1}{2}(n-k-1)(n-k-2)+(k-1)\delta-(k-2)}$,} which gives $|M(G)| \leq p^{\frac{1}{2}n(n-3)-\frac{1}{2}(k^2-k)-n+4}$, a contradiction for $n \geq 6$. $\Box$ \end{proof} \begin{lemma}\label{EXP} Let $G$ be a non-abelian $p$-group of order $p^n$ $(n \geq 6)$ and $|G'|=p, p^2$ or $p^3$ with $t(G)\leq \log_p(|G|)+1$. Then $Z(G)$ is of exponent at most $p^2,p$ or $p$ respectively. \end{lemma} \begin{proof} Suppose that $|G'|=p$. Let the exponent of $Z(G)$ be $p^k$ $(k \geq 3)$ and let $K$ be a cyclic central subgroup of order $p^k$. Then using Theorem \ref{J}, we have\\ \centerline{$|M(G)| \leq p^{-1}|M(G/K)||{(G/K)}^{ab} \otimes K| \leq p^{-1}p^{\frac{1}{2}(n-k)(n-k-1)}p^{(n-k)} \leq p^{\frac{1}{2}(n-1)(n-4)}$,} \\ which gives a contradiction. Similarly we can prove the result for $|G'|=p^2, p^3$. $\Box$ \end{proof} First we consider the groups $G$ such that $|G'|=p$. \begin{lemma} There is no non-abelian $p$-group $G$ of order $p^n$ with $|G'|=p$ and $t(G) = \log_p(|G|)+1$. \end{lemma} \begin{proof} Note that $G'$ is a central subgroup of $G$. By \cite[Theorem 3.1]{MRRR}, we have $|M(G)||G'| \geq |M(G/G')|$ and by Lemma \ref{EA} $|M(G/G')|=p^{\frac{1}{2}(n-1)(n-2)}$. Therefore $|M(G)|\geq p^{\frac{1}{2}(n-1)(n-2)-1}$, which is a contradiction. $\Box$ \end{proof} Now we consider groups such that $|G'|=p^2$. \begin{lemma}\label{BH} Let $G$ be a $p$-group of order $p^n$ $(n \geq 6)$ with $|G'|=p^2$ and $t(G) \leq \log_p(|G|)+1$. If $K$ is a cyclic central subgroup of order $p$ in $G' \cap Z(G)$, then $G/K$ is isomorphic to one of the following groups: $ES(p^3) \times \mathbb{Z}_p^{(n-4)}, E(2) \times \mathbb{Z}_p^{(n-2m-3)},$ $ ES(p^{2m+1}) \times \mathbb{Z}_p^{(n-2m-2)}, D_8 \times \mathbb{Z}_2^{(n-4)}, Q_8 \times \mathbb{Z}_2^{(n-4)}$, where $ES(p^3)$ and $ES(p^{2m+1})$ denote the extra-special $p$-groups of order $p^3$ and $p^{2m+1}$ $(m \geq 2)$ respectively, and $E(2)$ denotes the central product of an extra-special $p$-group of order $p^{2m+1}$ and a cyclic group of order $p^2$. \end{lemma} \begin{proof} Suppose that $G$ is a $p$-group with $|G'|=p^2$. Now consider a cyclic central subgroup $K$ of order $p$ in $G' \cap Z(G)$. Then by Theorem \ref{J} and \cite[Main Theorem]{PN} we have \centerline{$|M(G)| \leq |M(G/K)|p^{(n-3)} \leq p^{\frac{1}{2}(n-2)(n-3)+1}p^{n-3}=p^{\frac{1}{2}n(n-3)+1}$.} Now using \cite[Theorem 21 and Corollary 23]{PN2} and \cite[Theorem 11]{PN4} for the group $G/K$, we get our result. $\Box$ \end{proof} \begin{prop} There is no non-abelian $p$-group $G$ of order $p^n$ $(n \geq 6)$ with $|G'|=p^2, |Z(G)|=p$ and $t(G) \leq \log_p(|G|)+1$. \end{prop} \begin{proof} By Lemma \ref{EA}, $G/G'$ is an elementary abelian group of order $p^{n-2}$. 
Thus $G$ is an $(n-2)$-generator group. We can choose generators $x,y,\beta_1, \beta_2, \cdots ,\beta_{n-4}$ of $G$ such that $[x,y]=z \notin Z(G)$. Now we claim that $|$Image$(\psi_3)| \geq p^{n-4}$. If $[z,x]$ is non-trivial in $Z(G)$, then $\psi_3(x \otimes y \otimes x \otimes \beta_i)$, $i=1, \cdots ,n-4$, give $n-4$ linearly independent elements of $\gamma_3(G) \otimes \bar{G}^{ab}$. By symmetry, if $[z,y]$ is non-trivial in $Z(G)$, then we have a similar conclusion. On the other hand, if both are trivial, i.e., $[z,x]=[z,y]=1$, then $[z,\beta_k]$ is non-trivial in $Z(G)$ for some $\beta_k$ and $\psi_3(x \otimes y \otimes \beta_k \otimes \beta_i)$ $(i \neq k)$ give $n-5$ linearly independent elements of $\gamma_3(G) \otimes \bar{G}^{ab}$. Hence $|$Image$(\psi_3)| \geq p^{n-5}$. Note that $|$Image$(\psi_2)| \geq p^{n-4}$. So by Theorem \ref{RM} we have \centerline{$p^2|M(G)||$Image$(\psi_2)||$Image$(\psi_3)| \leq p^{\frac{1}{2}(n-2)(n-3)}p^{2(n-2)}$.} It follows that $|M(G)| \leq p^{\frac{1}{2}n(n-3)-n+6}$, which is a contradiction for $n \geq 8$. Now if either $\psi_3(x \otimes y \otimes \beta_k \otimes x)$ or $\psi_3(x \otimes y \otimes y \otimes \beta_k)$ is non-trivial, then $|$Image$(\psi_3)| \geq p^{n-4}$ and \centerline{$p^2|M(G)||$Image$(\psi_2)||$Image$(\psi_3)| \leq p^{\frac{1}{2}(n-2)(n-3)}p^{2(n-2)}$.} It follows that $|M(G)| \leq p^{\frac{1}{2}n(n-3)-n+5}$, which is a contradiction for $n \geq 7$. Otherwise, suppose $\psi_3(x \otimes y \otimes \beta_k \otimes x)=\psi_3(x \otimes y \otimes y \otimes \beta_k)=1$; then $[x,y,\beta_k]=[\beta_k,x,y]=[y,\beta_k,x]$ and $p=3$. By HAP \cite{HAP} of GAP \cite{GAP} there is no group $G$ of order $3^7$ with $|G'|=3^2, |Z(G)|=3$ and $|M(G)|=3^{13}$. For $|G|=p^6$ $(p \neq 2)$, by \cite{RJ} it follows that $G$ belongs to the isoclinism class $\Phi_{22}$. In this case $|$Image$(\psi_2)| \geq p^2$ and $|$Image$(\psi_3)| \geq p^3$. Hence it follows from Theorem \ref{RM} that $|M(G)| \leq p^7$, which is not our case. For $p=2$, there is no group $G$ of order $2^6$ which satisfies the given hypothesis, as follows from computation with HAP \cite{HAP} of GAP \cite{GAP}. $\Box$ \end{proof} \begin{lemma}\label{T} Let $G$ be a non-abelian $p$-group of order $p^n$ $(n \geq 6)$ with $t(G)=\log_p(|G|)+1$ and $|G'|=p^2$. If there exists a central subgroup $K$ of order $p$ such that $K \cap G'=1$, then $G/K$ is isomorphic to either $\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p$ or $(\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p) \times \mathbb{Z}_p$, and $p$ is odd. \end{lemma} \begin{proof} By Theorem \ref{J} and \cite[Main Theorem]{PN}, we have \centerline{$|M(G)| \leq |M(G/K)|p^{(n-3)} \leq p^{\frac{1}{2}(n-1)(n-4)+1+n-3}=p^{\frac{1}{2}n(n-3)}$.} Note that $|G/K| \geq p^5$ with $|(G/K)'|=p^2$. Now we have $|M(G/K)|=p^{\frac{1}{2}(n-1)(n-4)+1}$ if and only if $G/K \cong \mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p$ $(p \neq 2)$ by \cite[Theorem 21]{PN2}, and $|M(G/K)|=p^{\frac{1}{2}(n-1)(n-4)}$ if and only if $G/K \cong (\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p) \times \mathbb{Z}_p$ $(p \neq 2)$ by \cite[Theorem 11]{PN4}. $\Box$ \end{proof} \begin{prop}\label{K} There is no non-abelian $p$-group of order $p^7$ with $G' =Z(G) \cong \mathbb{Z}_p \times \mathbb{Z}_p$ and $t(G) = \log_p(|G|)+1$. \end{prop} \begin{proof} Note that if $Z^*(G)$ contains a non-trivial central subgroup of $G$, then $|M(G)| < p^{13}$ by \cite[Theorem 2.5.10]{GK}, which is not our case. So we have $Z^*(G)=1$, i.e., $G$ is capable. First we consider groups $G$ of order $p^7$ of exponent $p^2$, for odd $p$. 
Note that $G/G'$ is elementary abelian of order $p^5$ by Lemma \ref{EA}. So we can take a generating set $\{ \beta_1,\beta_2,\beta_3,\beta_4,\beta_5 \}$ of $G$ and $\{\eta, \gamma\}$ of $G'$. Here either $|G^p|=p$ or $G^p=G'$. We claim that $|X| \geq p^8$. Let $|G^p|=p$. Without loss of generality assume that $\eta$ is the $p$-th power of some $\beta_k$, say $\beta_1$, that $[\beta_i,\beta_j] \notin \gen{\eta}$, and that all $\beta_k$'s $(k > 1)$ are of order $p$. Then $\langle \beta_i \otimes \beta_1^p, i \in \{1,2,3,4,5\}\rangle$ is a subspace of $X_2$ and $\langle \psi_2(\beta_k \otimes \beta_i \otimes \beta_j), k \in \{1,2,3,4,5\}, k \neq i,j \rangle$ is a subspace of $X_1$. For $G^p=G'$, without loss of generality, assume that both $\eta$ and $\gamma$ are $p$-th powers of some $\beta_{k_1},\beta_{k_2}$, say of $\beta_1$ and $\beta_2$ respectively, and that all other $\beta_i$'s are of order $p$; then $\langle \beta_i \otimes \beta_1^p, \beta_j \otimes \beta_2^p, i \in \{1,3,4,5\}, j \in \{2,3,4,5\}\rangle$ is a subspace of $X_2$. Hence we observe that for a non-abelian group $G$ of order $p^7$ of exponent $p^2$, $|X| \geq p^8$ and by Theorem \ref{BEE}, $|M(G)| < p^{13}$, a contradiction. Now consider groups of order $p^7$ of exponent $p$. By \cite{HKM} it follows that there is only one capable group \[ G =\gen{x_1,\cdots , x_5, c_1, c_2 \mid [x_2, x_1] =[x_5, x_3]= c_1, [x_3, x_1] = [x_5, x_4] = c_2}\] up to isomorphism. By Theorem \ref{BEE} we have $|M(G)|=p^9$, as $|X|=|X_1|=p^9$, which is not our case. $\Box$ \end{proof} The following result weaves the next thread in the proof of the Main Theorem. \begin{thm} Let $G$ be a non-abelian $p$-group of order $p^n$ $(n \geq 6)$ with $|G'|=p^2$, $|Z(G)| \geq p^2$ and $t(G)=\log_p(|G|)+1$. Then $G$ is isomorphic to $\Phi_{12}(1^6),\Phi_{13}(1^6),$ $\Phi_{15}(1^6)$ or $(\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p) \times \mathbb{Z}_p^{(2)}$. Moreover, $p$ is always odd. \end{thm} \begin{proof} By Lemma \ref{EXP}, $Z(G)$ is of exponent $p$. We consider two cases here.\\ {\bf Case 1}: Let $G'=Z(G) \cong \mathbb{Z}_p \times \mathbb{Z}_p \cong \gen{z} \times \gen{w}$. The isomorphism type of $G/K$ is as in Lemma \ref{BH}. It follows from these structures that there are $n-2$ generators $\{x,y,\alpha_i, i \in \{1,2,\cdots, n-4\}\}$ of $G$ such that $[x,y]\in \gen{z}$ and $[\alpha_k,x] \in \gen{w}$, for some $k$. Hence $\psi_2(x \otimes y \otimes \alpha_i, i \in \{1,2,\cdots, n-4\})$ and $\psi_2(\alpha_k \otimes x \otimes \alpha_i, i \in \{1,2,\cdots, n-4\}, i \neq k)$ give $(2n-9)$ linearly independent elements in $G' \otimes G/G'$. Now by Theorem \ref{BEE}, we have $|M(G)| \leq p^{\frac{1}{2}n(n-3)-n+6}$, which is possible only when $n \leq 7$. Now it only remains to consider groups of order $p^6$ and $p^7$. For groups of order $p^7$ the result follows from Proposition \ref{K}. Now consider groups of order $p^6$ for odd $p$. Then $G$ belongs to the isoclinism classes $\Phi_{12}, \Phi_{13}$ or $\Phi_{15}$ of \cite{RJ}. If $G$ is of exponent $p^2$, then it is easy to see that $|X| \geq p^5$ and $|M(G)| \leq p^7$ by Theorem \ref{BEE}. Now for $\Phi_{12}(1^6),\Phi_{13}(1^6),\Phi_{15}(1^6)$ we have $|X|=|X_1|=p^4$, and using Theorem \ref{BEE} we see that all of these groups have Schur multiplier of order $p^8$. By HAP \cite{HAP} of GAP \cite{GAP} we see that there is no such group for $p=2$. \\ {\bf Case 2}: Consider the complement of Case 1. In these cases the hypothesis of Lemma \ref{T} holds. 
We can choose a central subgroup $K$ of order $p$ such that $K \cap G'=1$ and $G/K \cong \mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p$ or $(\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p) \times \mathbb{Z}_p$. Here $|Z(G)/K|=|Z(G/K)|$. Also note that $G$ is of exponent $p$. Then it follows easily that $G \cong (\mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p) \times \mathbb{Z}_p^{(2)}$ $(p \neq 2)$. $\Box$ \end{proof} Finally we consider groups $G$ such that $|G'|=p^3$. \begin{lemma}\label{RM1} Let $G$ be a non-abelian $p$-group of order $p^n$ with $t(G)=\log_p(|G|)+1$ and $|G'|=p^3$. Then for any subgroup $K \subseteq Z(G) \cap G'$ of order $p$, $G/K \cong \mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p$ $(p \neq 2)$. In particular, $|G|=p^6$.\\ \end{lemma} \begin{proof} For $p$ odd, by Theorem \ref{J} we have $|M(G)| p \leq |M(G/K)||G/G' \otimes K|$. Since $G/G'$ is elementary abelian by Lemma \ref{EA}, we have $|M(G)| \leq |M(G/K)|p^{(n-4)}$ and by \cite[Main Theorem]{PN}, $|M(G/K)| \leq p^{\frac{1}{2}(n-1)(n-4)+1}$. Hence $|M(G)| \leq p^{\frac{1}{2}n(n-3)-1}$. Using \cite[Theorem 21]{PN2}, we get $G/K \cong \mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p$. For $p=2$, $|M(G)| < p^{\frac{1}{2}n(n-3)-1}$, which is not our case. $\Box$ \end{proof} \begin{lemma} There is no non-abelian $p$-group $G$ with $|G'|=p^3, |Z(G)|=p$ and $t(G)=\log_p(|G|)+1$. \end{lemma} \begin{proof} By the preceding lemma we have $G/Z(G) \cong \mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p \cong \Phi_4(1^5)$ and $|G|=p^6$. From \cite{RJ} we can see that $G$ belongs to one of the isoclinism classes $\Phi_{31}, \Phi_{32}, \Phi_{33}$. Observe that for these groups $|$Image$(\psi_2)|\geq p$ and $|$Image$(\psi_3)|\geq p$. Now using Theorem \ref{RM} we get $|M(G)| \leq p^7$, which is not our case. $\Box$ \end{proof} The following theorem now completes the proof of the Main Theorem. \begin{thm} Let $G$ be a non-abelian $p$-group of order $p^n$ with $|G'|=p^3$ and $t(G)=\log_p(|G|)+1$. Then $G \cong \Phi_{11}(1^6)$. \end{thm} \begin{proof} We claim that $Z(G) \subseteq G'$. Assume that $Z(G) \nsubseteq G'$. Then there is a central subgroup $K$ of order $p$ such that $G' \cap K=1$. Now by Theorem \ref{J} and \cite[Main Theorem]{PN} we have \centerline{$|M(G)| \leq |M(G/K)||G/G'K| \leq p^{\frac{1}{2}n(n-5)+1+(n-4)}=p^{\frac{1}{2}n(n-3)-3}$,} which is a contradiction. Hence $Z(G) \subseteq G'$. Note that $|Z(G)| \geq p^2$ by the preceding lemma. We can now choose two distinct central subgroups $K_i$ $(i=1,2)$ of order $p$. Then by Lemma \ref{RM1} we have $G/K_i \cong \mathbb{Z}_p^{(4)} \rtimes \mathbb{Z}_p$ $(i=1,2)$, which are of exponent $p$. So $G$ is of exponent $p$. Hence from \cite{RJ} it follows that $G \cong \Phi_6(1^6), \Phi_9(1^6), \Phi_{10}(1^6), \Phi_{11}(1^6), \Phi_{16}(1^6), \Phi_{17}(1^6), \Phi_{18}(1^6),$ $ \Phi_{19}(1^6),\Phi_{20}(1^6)$ or $\Phi_{21}(1^6)$, these being the groups of order $p^6$ of exponent $p$ with $|Z(G)| \geq p^2, |G'|=p^3$. First consider groups $G \cong \Phi_6(1^6), \Phi_9(1^6), \Phi_{10}(1^6)$. Then by a routine check we can show that $|M(G)| \leq p^6$ using \cite[Theorem 2.2.10]{GK}. Now consider the group $G =\Phi_{11}(1^6)$. Then $G$ is of class two with $G/G'$ elementary abelian. Hence by Theorem \ref{BEE} it follows that $\Phi_{11}(1^6)$ has Schur multiplier of order $p^8$, as $|X|=|X_1|=p$. For the other groups $G$, observe that $|$Image$(\psi_2)|\geq p$ and $|$Image$(\psi_3)|\geq p$, and hence it follows from Theorem \ref{RM} that $|M(G)| \leq p^7$. $\Box$ \end{proof} {\bf Acknowledgement}: I am grateful to my supervisor Manoj K. 
Yadav for his guidance, motivation and discussions. I wish to thank the Harish-Chandra Research Institute, the Dept. of Atomic Energy, Govt. of India, for providing excellent research facilities. \end{document}
arXiv
The Reversible Residual Network: Backpropagation Without Storing Activations. Aidan N. Gomez and Mengye Ren and Raquel Urtasun and Roger B. Grosse 2017 Paper summary ameroyer Residual Networks (ResNets) have greatly advanced the state-of-the-art in Deep Learning by making it possible to train much deeper networks via the addition of skip connections. However, in order to compute gradients during the backpropagation pass, all the units' activations have to be stored during the feed-forward pass, leading to high memory requirements for these very deep networks. Instead, the authors propose a **reversible architecture** based on ResNets, in which activations at one layer can be computed from those of the next. Leveraging this invertibility property, they design a more efficient implementation of backpropagation, effectively trading compute power for memory storage. * **Pros (+): ** The change does not negatively impact model accuracy (for an equivalent number of model parameters) and it only requires a small change in the backpropagation algorithm. * **Cons (-): ** Increased number of parameters, so the unit depth needs to be changed to match the "equivalent" ResNet --- # Proposed Architecture ## RevNet This paper proposes to incorporate ideas from previous reversible architectures, such as NICE [1], into a standard ResNet. The resulting model is called **RevNet** and is composed of reversible blocks, inspired by *additive coupling* [1, 2]: $ \begin{array}{r|r} \texttt{RevNet Block} & \texttt{Inverse Transformation}\\ \hline \mathbf{input }\ x & \mathbf{input }\ y \\ x_1, x_2 = \mbox{split}(x) & y_1, y_2 = \mbox{split}(y)\\ y_1 = x_1 + \mathcal{F}(x_2) & x_2 = y_2 - \mathcal{G}(y_1) \\ y_2 = x_2 + \mathcal{G}(y_1) & x_1 = y_1 - \mathcal{F}(x_2)\\ \mathbf{output}\ y = (y_1, y_2) & \mathbf{output}\ x = (x_1, x_2) \end{array} $ where $\mathcal F$ and $\mathcal G$ are residual functions, composed of sequences of convolutions, ReLU and Batch Normalization layers, analogous to the ones in a standard ResNet block, although operations in the reversible blocks need to have a stride of 1 to avoid information loss and preserve invertibility. Finally, for the `split` operation, the authors consider splitting the input Tensor across the channel dimension as in [1, 2]. Similarly to ResNet, the final RevNet architecture is composed of these invertible residual blocks, as well as non-reversible subsampling operations (e.g., pooling) for which activations have to be stored. However, the number of such operations is much smaller than the number of residual blocks in a typical ResNet architecture. ## Backpropagation ### Standard The backpropagation algorithm is derived from the chain rule and is used to compute the total gradients of the loss with respect to the parameters in a neural network: given a loss function $L$, we want to compute the gradients of $L$ with respect to the parameters of each layer, indexed by $n \in [1, N]$, i.e., the quantities $\overline{\theta_{n}} = \partial L / \partial \theta_n$ (where $\forall x, \bar{x} = \partial L / \partial x$). 
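Before summarizing the two backward passes, here is a minimal NumPy sketch of the additive-coupling block and its exact inverse described above. The residual functions below are toy stand-ins (a single matrix multiply plus ReLU), not the convolution/BN/ReLU stacks used in the paper, and all names are illustrative only.

```python
import numpy as np

def f_res(x, w):
    # Toy residual branch standing in for the paper's F (conv/BN/ReLU stack).
    return np.maximum(0.0, x @ w)

def g_res(x, w):
    # Toy residual branch standing in for the paper's G.
    return np.maximum(0.0, x @ w)

def rev_block_forward(x1, x2, w_f, w_g):
    """Forward pass of one reversible block: y1 = x1 + F(x2), y2 = x2 + G(y1)."""
    y1 = x1 + f_res(x2, w_f)
    y2 = x2 + g_res(y1, w_g)
    return y1, y2

def rev_block_inverse(y1, y2, w_f, w_g):
    """Recover the inputs from the outputs, so activations need not be stored."""
    x2 = y2 - g_res(y1, w_g)
    x1 = y1 - f_res(x2, w_f)
    return x1, x2

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
x1, x2 = np.split(x, 2, axis=1)          # split along the channel dimension
w_f = rng.normal(size=(4, 4))
w_g = rng.normal(size=(4, 4))

y1, y2 = rev_block_forward(x1, x2, w_f, w_g)
x1_rec, x2_rec = rev_block_inverse(y1, y2, w_f, w_g)
assert np.allclose(x1, x1_rec) and np.allclose(x2, x2_rec)
```

This reconstruction step is exactly what the RevNet backward pass (right column of Table 1 below) performs before computing any gradients.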
We roughly summarize the algorithm in the left column of **Table 1**: In order to compute the gradients for the $n$-th block, backpropagation requires the input and output activations of this block, $y_{n - 1}$ and $y_{n}$, which have been stored, and the derivative of the loss with respect to the output, $\overline{y_{n}}$, which has been computed in the backpropagation iteration of the upper layer; hence the name backpropagation. ### RevNet Since activations are not stored in RevNet, the algorithm needs to be slightly modified, which we describe in the right column of **Table 1**. In summary, we first need to recover the input activations of the RevNet block using its invertibility. These activations will be propagated to the earlier layers for further backpropagation. Secondly, we need to compute the gradients of the loss with respect to the inputs, i.e. $\overline{y_{n - 1}} = (\overline{y_{n -1, 1}}, \overline{y_{n - 1, 2}})$, using the fact that: $ \begin{align} \overline{y_{n - 1, i}} = \overline{y_{n, 1}}\ \frac{\partial y_{n, 1}}{\partial y_{n - 1, i}} + \overline{y_{n, 2}}\ \frac{\partial y_{n, 2}}{\partial y_{n - 1, i}} \end{align} $ Once again, this result will be propagated further down the network. Finally, once we have computed both these quantities, we can obtain the gradients with respect to the parameters of this block, $\theta_n$. $ \begin{array}{|c|l|l|} \hline & \mathbf{ResNet} & \mathbf{RevNet} \\ \hline \mathbf{Block} & y_{n} = y_{n - 1} + \mathcal F(y_{n - 1}) & y_{n - 1, 1}, y_{n - 1, 2} = \mbox{split}(y_{n - 1})\\ && y_{n, 1} = y_{n - 1, 1} + \mathcal{F}(y_{n - 1, 2})\\ && y_{n, 2} = y_{n - 1, 2} + \mathcal{G}(y_{n, 1})\\ && y_{n} = (y_{n, 1}, y_{n, 2})\\ \hline \mathbf{Params} & \theta = \theta_{\mathcal F} & \theta = (\theta_{\mathcal F}, \theta_{\mathcal G})\\ \hline \mathbf{Backprop} & \mathbf{in:}\ y_{n - 1}, y_{n}, \overline{ y_{n}} & \mathbf{in:}\ y_{n}, \overline{y_{n }}\\ & \overline{\theta_n} =\overline{y_n} \frac{\partial y_n}{\partial \theta_n} &\texttt{# recover activations} \\ &\overline{y_{n - 1}} = \overline{y_{n}}\ \frac{\partial y_{n}}{\partial y_{n-1}} &y_{n, 1}, y_{n, 2} = \mbox{split}(y_{n}) \\ &\mathbf{out:}\ \overline{\theta_n}, \overline{y_{n -1}} & y_{n - 1, 2} = y_{n, 2} - \mathcal{G}(y_{n, 1})\\ &&y_{n - 1, 1} = y_{n, 1} - \mathcal{F}(y_{n - 1, 2})\\ &&\texttt{# gradients wrt. inputs} \\ &&\overline{y_{n -1, 1}} = \overline{y_{n, 1}} + \overline{y_{n,2}} \frac{\partial \mathcal G}{\partial y_{n,1}} \\ &&\overline{y_{n -1, 2}} = \overline{y_{n, 1}} \frac{\partial \mathcal F}{\partial y_{n,2}} + \overline{y_{n,2}} \left(1 + \frac{\partial \mathcal F}{\partial y_{n,2}} \frac{\partial \mathcal G}{\partial y_{n,1}} \right) \\ &&\texttt{# gradients wrt. parameters} \\ &&\overline{\theta_{n, \mathcal G}} = \overline{y_{n, 2}} \frac{\partial \mathcal G}{\partial \theta_{n, \mathcal G}}\\ &&\overline{\theta_{n, \mathcal F}} = \overline{y_{n,1}} \frac{\partial \mathcal F}{\partial \theta_{n, \mathcal F}} + \overline{y_{n, 2}} \frac{\partial \mathcal F}{\partial \theta_{n, \mathcal F}} \frac{\partial \mathcal G}{\partial y_{n,1}}\\ &&\mathbf{out:}\ \overline{\theta_{n}}, \overline{y_{n -1}}, y_{n - 1}\\ \hline \end{array} $ **Table 1:** Backpropagation in the standard case and for reversible blocks --- ## Experiments **Computational Efficiency.** RevNets trade off memory requirements, by avoiding storing activations, against computations. 
Compared to other methods that focus on improving memory requirements in deep networks, RevNet provides the best trade-off: no activations have to be stored, so the spatial complexity is $O(1)$. The computational complexity is linear in the number of layers, i.e. $O(L)$. One small disadvantage is that RevNets introduce additional parameters, as each block is composed of two residuals, $\mathcal F$ and $\mathcal G$, and their number of channels is also halved as the input is first split into two. **Results.** In the experiments section, the authors compare ResNet architectures to their RevNet "counterparts": they build a RevNet with roughly the same number of parameters by halving the number of residual units and doubling the number of channels. Interestingly, RevNets achieve **similar performance** to their ResNet counterparts, both in terms of final accuracy and in terms of training dynamics. The authors also analyze the impact of floating-point errors that might occur when reconstructing activations rather than storing them; however, it appears these errors are of small magnitude and do not seem to negatively impact the model. To summarize, reversible networks seem like a very promising direction for efficiently training very deep networks under memory budget constraints. --- ## References * [1] NICE: Non-linear Independent Components Estimation, Dinh et al., ICLR 2015 * [2] Density estimation using Real NVP, Dinh et al., ICLR 2017
CommonCrawl
Stata Weighted Mean A nationwide audit of RA collected information on treatment choices, DAS and sociodemographic factors at baseline. Stata does this automatically, and R's weighted mean function has a useful na. A simple average calculates the center of a data set. Class Interval Arithmetic Mean is the distribution represented by relative frequency counts or proportions of observations within different class intervals. agree or disagree simply by chance. The weighted arithmetic mean is similar to an ordinary arithmetic mean (the most common type of average), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. This video shows the easiest way to perform mean and standard deviation analysis in Stata. To use the UDF the formula is. Actually, what total estimates is the mean of the variable(s) under investigation, weighted by the number of (weighted) cases. that perform an overall mean effect size, analog-to-the ANOVA moderator analysis, and meta- analytic regression moderator analysis, respectively. muhat is the obvious weighted sample mean, and V_db is pretty complicated; see [SVY] variance estimation for details. reg Y1 Y2 X1 X2 X3 Æ obtain the coefficient(C1) and the s. Suppose that the group variable is called group and I want to take the average of val1 by Group, excluding myself. How to use weighted in a sentence. -outreg-, written by J. Suitable speed limit is important for providing safety for road users. A weighted average is an average of factors when certain factors count more than others or are of varying degrees of importance. Sample 25114: Creating a weighted frequency in PROC TABULATE. Abstract Missing observations caused by dropouts or skipped visits present a problem in studies of longitudinal data. There is a user-written egen function for this purpose, but doing it from first principles should be easy and instructive. The final score is calculated by. The Weighted Average Item Price Report (WAIPR) and the Regional and Statewide Average Awarded Price Report (RSWAAPR) are reports produced using information from NYSDOT's Trns•Port BAMS⁄DSS. 28, is the standard deviation. The Pearson product-moment correlation coefficient, often shortened to Pearson correlation or Pearson's correlation, is a measure of the strength and direction of association that exists between two continuous variables. using sampling weights. Dec 30, 2010 · WACC or weighted average cost of capital is calculated using the cost of equity and cost of debt weighing them by respective proportions within the optimal or target capital structure of the company, i. Dec 04, 2017 · Ask a high school student who is applying to competitive colleges about their grades, and you'll likely hear about a grade point average well above 4. Two of particular importance are (1) confidence intervals on regression slopes and (2) confidence intervals on predictions for specific observations. Pearson's Correlation using Stata Introduction. 1 The validity of the findings from a meta-analysis depends on several factors, such as the completeness of the systematic review,. The sample code on the Full Code tab illustrates how to create a weighted frequency in PROC TABULATE. [ Date Prev ][ Date Next ][ Thread Prev ][ Thread Next ][ Date Index ][ Thread Index ]. Computes the weighted mean for each cell of a number or raster layers. I took the class dataset from sashelp and created two fake weights. 
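To make the weighted-mean idea above concrete (some data points contributing more to the final average than others), here is a small NumPy sketch; the grades and weights are made up for illustration, and the formula is simply the sum of weight times value divided by the sum of the weights.

```python
import numpy as np

grades  = np.array([70, 70, 80, 80, 80, 90], dtype=float)  # toy class grades
weights = np.array([ 1,  1,  1,  1,  1,  1], dtype=float)  # equal weights

weighted_mean = np.sum(weights * grades) / np.sum(weights)  # sum(w_i * x_i) / sum(w_i)
print(grades.mean(), weighted_mean)   # with equal weights this matches the simple average

# Unequal weights pull the mean toward the heavily weighted values,
# e.g. if the last grade (a final exam, say) counts three times as much:
exam_weights = np.array([1, 1, 1, 1, 1, 3], dtype=float)
print(np.average(grades, weights=exam_weights))  # NumPy's built-in weighted mean
```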
– This document briefly summarizes Stata commands useful in ECON-4570 Econometrics and ECON-6570 Advanced Econometrics. This module should be installed from within Stata by typing "ssc install cibar". Social Science Goes R: Weighted Survey Data. Oct 02, 2012 · This feature is not available right now. I use gmm to obtain consistent standard errors by stacking the ordered-probit moment conditions and the weighted mean moment conditions. After weighting each young person does not count for 1 person any more but just for 0. The weight somehow reflects the importance of the observation and any command that supports such weights will define exactly how such weights are treated. "_GWTMEAN: Stata module containing extensions to generate to implement weighted mean," Statistical Software Components S418804, Boston College Department of Economics. 0 Stata/SE (Special Edition of Stata) or higher is required to run two macros (who2007_standard. Find the weighted average of class grades (with equal weight) 70,70,80,80,80,90:. The weighted t-test adjusts means and standard deviations to generate p-values based on the correct representation. Country weights to account for population size in statistical analysis? I am doing a longitudinal data analysis with aggregated national level data from world value survey. Clearly, a variable with a regression coefficient of zero would explain no variance. (standard) media nf. (male) stats(n mean sum sd var max min. frate: Marginal federal tax rate wrt a weighted average of the rates on the primary and secondary earners, or equal weights if both are non-workers. It is also easy to do a t-test using the svy: regress command. In contrast, a raw or arithmetic mean is a simple average of your values, using no model. Stata data analysis under the different assumptions For comparison purposes, you will first run the analysis as if this data were SRS, that is, a simple random sample with no weight adjustments for sampling design or. 0) By Ken Eng, Yin-Yu Chen, and Julie E. For the purposes of the GMAT, the weighted average situations occurs when we combine groups of different sizes and different group averages. Exploring Regression Results using Margins. Average Treatment Effect (ATE): the mean of the difference (Y 1 - Y 0). Sampling weights are needed to correct for imperfections in the sample that might lead to bias and other departures between the sample and the reference population. Personally, I like the simple non-parametric plot overlaid with the OLS regression since its clean and helps us see whether a linear approximation is a reasonable fit or not:. Stata Press 4905 Lakeway Drive College Station, TX 77845, USA 979. Data for Researchers. To finish the example, you would divide five by 36 to find the probability to be 0. oversampling of the elderly and ethnic minorities, the weighted estimates are different from the unweighted estimates for mean age and percentage of Hispanics. 96 standard errors. Calculator Use. Scores of exams may carry more weight that homework completion. A weighted mean (or weighted average) is like an ordinary mean, but the observations don't contribute equally - more emphasis is placed on some data values than others; they are weighted by a bigger or smaller amount than 1/n. The new column I wish to create is avg. To use the UDF the formula is. docx Page 8of 29 Note. Statistics Canada is the national statistical office. They are organized by module and then task. 
The meta-analysis was conducted using a random-effects model because of the a priori heterogeneity. 1 into a text file which is available in the course website. Geometric mean. Stata's ttest. The harmonic mean is one of the three Pythagorean means, involving in many situations where rates, ratios, geometry, trigonometry etc considered, the harmonic mean. Simple average formula. 96) to provide bars extending to +/- 1. apcfit has options that allow the user to specify the centering of each variable. Weighted average. In everyday life, you might need to calculate an average to estimate. Use the WEIGHT statement to specify a weight variable (w), and use the VAR statement as usual to specify the measurement variable (x). The command is named vwls, for variance-weighted least squares. Moment conditions define the ordered probit estimator and the subsequent weighted average used to estimate the POMs. Type help hettest or see the Stata reference manual for details. Dec 04, 2017 · Ask a high school student who is applying to competitive colleges about their grades, and you'll likely hear about a grade point average well above 4. Finite element methods are a special type of weighted average method. The fact that survey data are obtained from units selected with complex sample designs needs to be taken into account in the survey analysis: weights need to be used in analyzing survey data and variances of survey estimates need to be computed in a manner that reflects the complex sample design. Specifically they show that regression provides variance based weighted average of covariate specific differences in outcomes between treatment and control groups. The clustering and stratification do not affect the point estimate of the mean, and thus if you are interested only in the point estimate, you could use summarize with aweights since it gives the same weighted mean as svy: mean. 80 with a min of 6. Munich Personal RePEc Archive Is the mean outcome for the control group in the kernel performs the balancing t-test with the weighted covariates. For the weighted case there is no commonly accepted weighted Spearman correlation coefficient. Task 4c: How to Generate Proportions using Stata. Apr 21, 2017 · Mean and standard deviation are the part of descriptive analysis. , sections 2. Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. It will be updated periodically during the semester, and will be available on the course website. Find descriptive statistics of a data set. "_GWTMEAN: Stata module containing extensions to generate to implement weighted mean," Statistical Software Components S418804, Boston College Department of Economics. This video shows the easiest way to perform mean and standard deviation analysis in Stata. As explained there, the distinction between the weighted means ANOVA and the unweighted means ANOVA becomes much more important in factorial ANOVA than it is in one-way ANOVA. MedCalc uses the Hedges-Olkin (1985) method for calculating the weighted summary Correlation coefficient under the fixed effects model, using a Fisher Z transformation of the correlation coefficients. It's usually better to ask 1 question at a time, but anyway. The other weighting options are a bit more complicated. 
Stata software can be used to calculate proportions and standard errors for NHANES data because the software takes into account the complex survey design of NHANES data when determining variance estimates. kwstat stands for kernel weighted statistics and is an ad-hoc method to visualize the behavior a variable yin function of another variable x. And a 40-day simple moving average would correspond roughly to an exponentially weighted moving average with a smoothing constant equal to 0. 7 in the manual, in example 4, an example of a weighted mean in a similar setting. Sung Il Jung, Olivio F. In order to do so, cibar uses Stata's twoway bar and twoway rcap. In the same folder as the Excel file, copy/paste/save the code below as a. Please enable it to continue. After weighting each young person does not count for 1 person any more but just for 0. When p = 2, the method is known as the inverse distance squared weighted interpolation. If a module or task is not listed it is because it did not have a related program. A linearly weighted moving average is a type of moving average where more recent prices are given greater weight in the calculation, and prior prices are given less weight. Average This is the arithmetic mean, and is calculated by adding a group of numbers and then dividing by the count of those numbers. (6)The index is rebased to average 100 in the base year. 1 in weighted DVOA but New England remains No. For example, the simple average of 1,2,3,4,5 is 2. See Lambert (2001) for a textbook exposi-tion. It's annoying to have to create a persistent column for each weighted numeric variable rather than do it on the fly (as we did in SQL and dplyr) during the grouping and aggregation, but the gain comes with all the automated filtering interactivity of working in Power BI. In this example, you will use Stata to combine age subgroups and generate population estimates for high blood pressure (HBP) by sex and race/ethnicity for persons 20 years and older. Some may already. Weighted Mean Calculator is an online statistics tool for data analysis programmed to calculate the Weighted Mean by giving different weights to some of the individual values. Nov 06, 2019 · A weighted average, otherwise known as a weighted mean, is a little more complicated to figure out than a regular arithmetic mean. Interval] -----+-----. For example, suppose in some parameter, suppose the male employees in a company have one average score, and the females have another average score. In order to perform meta-analyses in Stata, these routines need to be installed on your computer by downloading the relevant files from the Stata web site (www. An equally weighted index weights each stock equally regardless of its market capitalization or economic size (sales, earnings, book value). A simple average is meaningful when each data point counts equally in the average. Weighted regression can be used to correct for heteroscedasticity. Normally calculated using closing prices, the moving average can also be used with median, typical , weighted closing, and high, low or open prices as well as other indicators. we can see more clearly that the sample mean is a linear combination of the random variables X 1, X 2, , X n. The option "pweight" is described in STATA documentation: "pweights, or sampling weights, are weights that denote the inverse of the probability that the observation is included due to the sampling design. 
Generalized Jackknife Estimators of Weighted Average Derivatives (with Discussion and Rejoinder) with Matias Cattaneo & Richard Crump Published in Journal of the American Statistical Association , 108, 1243-1268, 2013. Mar 08, 2017 · Metrics Maven: Calculating an Exponentially Weighted Moving Average in PostgreSQL metrics maven postgresql Free 30 Day Trial In our Metrics Maven series, Compose's data scientist shares database features, tips, tricks, and code you can use to get the metrics you need from your data. Calculations done in Stata 15. Sampling weights are needed to correct for imperfections in the sample that might lead to bias and other departures between the sample and the reference population. Area-Weighted Mean Shape Index listed as AWMSI. What may help is that r(sum) , e(N) and e(df_r) are saved scalar results accessible after running commands. likely to be censored) are weighted most heavily. (statistical mean) (statistica) media nf : My golf score is an average of all my game scores. In general, a \(2\times m\)-MA is equivalent to a weighted moving average of order \(m+1\) where all observations take the weight \(1/m\), except for the first and last terms which take weights \(1/(2m)\). From within Stata, use the commands ssc install tab_chi and ssc install ipf to get the most current versions of these programs. wRC+ takes the statistic Runs Created and adjusts that number to account for important external factors -- like ballpark or era. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. The weighted average (x) is equal to the sum of the product of the weight (w i) times the data number (x i) divided by the sum of the weights: Example. Tf-idf weighting. civilian non-institutionalized population. For this calculation, we will assume that the variances in each of the two populations are equal. Using Stata for Confidence Intervals All of the confidence interval problems we have discussed so far can be solved in Stata via either (a) statistical calculator functions, where you provide Stata with the necessary summary statistics for means, standard deviations, and sample sizes; these commands end with an i, where the i. do file which specifies the survey design of your dataset. 04 (df = 3, p <. 2 The standardized mean difference. You will get what you ask for, but it is more like getting an array every time. Which one: Fixed Effects or Random Effects? : The generally accepted way of choosing between FE and RE is running a Hausman test. It will be updated periodically during the semester, and will be available on the course website. Sung Il Jung, Olivio F. Task 1c: How to Set Up a t-test in NHANES Using Stata. The Stata Journal (2003) 3, Number 4, pp. x_missing, weights(df. • We are interested in using Stata for survey data analysis • Survey data are collected from a sample of the population of interest • Each observation in the dataset represents multiple observations in the total population • Sample can be drawn in multiple ways: simple random, stratified, etc. It's usually better to ask 1 question at a time, but anyway. (GSoC Week 6) Efficient Calculation of Weighted Medians July 05, 2016 gsoc, scikit-learn, algorithm, median. It is possible to derive. The Stata Journal (2004) 4, Number 3, pp. Stata reminder: to run descriptive statistics, use summarize A simple way to get to get both the median and the mean at once is by using the detail. Weighted mean of rasters. 
Is this standard deviation something that shouldn't be used and is inaccurrate or could it really be correct?. Area-Weighted Mean Shape Index listed as AWMSI. Originating in the late 1960s, Jensen's Alpha (often abbreviated to Alpha) was developed to evaluate the skill of active fund managers in stock picking. Jan 25, 2011 · Mean Absolute Deviation (MAD) For n time periods where we have actual demand and forecast values: While MFE is a measure of forecast model bias, MAD indicates the absolute size of the errors. A graph is an entire image, including axes, titles, legends, etc. Synonyms for weighted. Specifically they show that regression provides variance based weighted average of covariate specific differences in outcomes between treatment and control groups. Propensity scores for the estimation of average treatment e ects in observational studies Leonardo Grilli and Carla Rampichini Dipartimento di Statistica "Giuseppe Parenti" Universit di Firenze Training Sessions on Causal Inference Bristol - June 28-29, 2011 Grilli and Rampichini (UNIFI) Propensity scores BRISTOL JUNE 2011 1 / 77. using sampling weights. Harmonic Average: The mean of a set of positive variables. The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance. Also assume that our dataset contains pop, the population of each state. Remember that each data point is multiplied by a given weight, and then divided by the total weight. It is also easy to do a t-test using the svy: regress command. Now, that the svyset has been defined you can use the Stata command, svy: mean, to generate means and standard errors. WEIGHTED STANDARD DEVIATION PURPOSE Compute the weighted standard deviation of a variable. Weighted average. For the latest version, open it from the course disk space. When we do a simple mean (or average), we give equal weight to each number. Quantile regression is a type of regression analysis used in statistics and econometrics. Weighted kappa coefficients are less accessible to intuitive understanding than is the simple unweighted coefficient, and they are accordingly more difficult to interpret. When computing a running moving average, placing the average in the middle time period makes sense: In the previous example we computed the average of the first 3 time periods and placed it next to period 3. See Lambert (2001) for a textbook exposi-tion. When analyzing survey data, it is common to want to look only a certain respondents, perhaps only women, or only respondents over age 50. This video covers how to find the weighted mean for a set of data. Instead of each data point contributing equally to the final mean, some data points contribute more "weight" than others. unfortunately i cant really calculate the weighted standard deviation and median for the following problem. MedCalc uses the Hedges-Olkin (1985) method for calculating the weighted summary Correlation coefficient under the fixed effects model, using a Fisher Z transformation of the correlation coefficients. you could test for heteroskedasticity involving one variable in the model, several or all the variables, or even variables that are not in the current model. Dec 04, 2017 · Ask a high school student who is applying to competitive colleges about their grades, and you'll likely hear about a grade point average well above 4. 
088) There was no difference between unweighted SPSS p-values and unweighted Stata p-values, but weighted SPSS p-values fell under conventional levels of statistical significance that probability weighted Stata p-values did not (0. If we assume that the fundamental frequency of cardiac oscillation is the mean heart rate, then these frequencies could be converted to cycles per second (Hz). Vargas, Debra Goldman, Hedvig Hricak, Oguz Akin; Sung Il Jung 1, Olivio F. However, this time we see that the sample sizes are different, but we are still interested in. Treatment response was assessed at 3 months. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. If a module or task is not listed it is because it did not have a related program. Useful Commands in Stata z Two-Stage Least Squares The structural form: Y1 = Y2 X1 X2 X3 The reduced form: Y2 = X1 X3 X4. Activity recording is turned off. For example, a set of mean values may be combined using a weighted average in which the weights are the sample sizes on which each mean is based. Weighted mean in numpy/python. So, what is a weighted score? A weighted score or weighted grade is merely the average of a set of grades, where each set carries a different amount of importance. Close Excel and close Stata then find the. The weight variable goes on the WEIGHT statement. Example 7: Weighted Least Squares. 2 (Stata Corp, College Station, TX) by using the svy command to account for the complex sample design of the national YRBS. Stata's collapse command computes aggregate statistics such as mean, sum, and standard deviation and saves them into a data set. For example, the mean of our data is 29. The per-author age-weighted citation rate is similar to the plain AWCR, but is normalized to the number of authors for each paper. In addition, percentages are displayed. Barth, Modern Methods of Particle Size Analysis , →ISBN , page 113: When proper measurements are made, the parameters of the size distribution most often obtained -- the inverse z-average moments -- are not the usually. Stata automatic report (Sar) is an easy-to-use macro for Microsoft Word for Windows that allows a powerful integration between Stata and Word. I am working on a question that asks me to solve for the weighted average of my dependent variable (hourly wage) by using the weight of my independent variable (which is a discrete variable that has 16 categories and more than 300,000 observations). The margins command is a powerful tool for understanding a model, and this article will show you how to use it. After weighting each young person does not count for 1 person any more but just for 0. These weights are used in multivariate statistics and in a meta-analyses where each "observation" is actually the mean of a sample. Stata software can be used to calculate proportions and standard errors for NHANES data because the software takes into account the complex survey design of NHANES data when determining variance estimates. The weighted average is more complex. 80 with a min of 6. The parameterk speci-fies the number of days included in the moving average (the "observation period"),x s, the change in portfolio value on days, and , the mean change in portfolio value. If the code won't work, you probably have Excel open. Weighted variances are often used for frequency data. I show how to estimate the POMs when the weights come from an ordered probit model. wt in RCore does this and returns a covariance matrix or the correlation matrix. 
The textbook Microeconometrics Using Stata (Revised Edition) is a standard reference for many of these methods. Note that in physics, weight means the mass of a body multiplied by the acceleration of free fall; in statistics it is the importance attached to an observation. (My golf score is an average of the scores of all my games - a simple, unweighted mean.) Weighted summaries appear in many applied settings: the Weighted Average Item Price Report (WAIPR) and the Regional and Statewide Average Awarded Price Report (RSWAAPR) are produced from NYSDOT's Trns•Port BAMS/DSS data, and a price index is typically rebased by multiplying its history by 100 and dividing by the average over the twelve months of the base year. In Stata terminology, a plot is some specific data visualized in a specific way (for example, a scatter plot of mpg on weight), while a graph is the entire image, including axes, titles and legends; a typical exercise with Stata's auto data asks how much of an effect vehicle weight has on mileage. The limitations of traditional mean-VaR all stem from its reliance on a symmetric distribution. The pooled variance is the weighted average used to combine the variances of two independent samples whose means may differ but whose true variance is assumed equal, and the weighted variance more generally replaces equal weights by w_i in the sum of squared deviations from the weighted mean (the exact denominator depends on the convention; N' denotes the number of non-zero weights). For a fitted line, the intercept is the mean of Y minus the slope times the mean of X. Moving averages can be taken over windows of different lengths - with twenty years of sales data one can form five-, four- or three-year moving averages - and with exponential weighting the smoothing parameter determines an effective window length (one common choice corresponds roughly to a 19-day moving average). Computer scientists similarly think of boosting as an ensemble method, a weighted average of the predictions of many models, and units of measure should be included when reporting a mean. Finally, scores of exams may carry more weight than homework completion: a weighted grade is the average of a set of grades where each grade g carries a different weight w of importance.
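A minimal sketch of such a weighted grade (the component names, scores and weights are hypothetical):

    # Hypothetical course components: score out of 100 and weight of importance.
    components = {
        "homework": (92.0, 0.20),
        "midterm": (78.0, 0.30),
        "final": (85.0, 0.50),
    }

    total_weight = sum(w for _, w in components.values())
    weighted_grade = sum(score * w for score, w in components.values()) / total_weight
    print(round(weighted_grade, 1))   # 0.2*92 + 0.3*78 + 0.5*85 = 84.3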
In risk measurement, simulations, resampling or Pareto-tail assumptions improve on a purely variance-based estimate but remain imperfect for assets whose returns are far from symmetric. For managers, the weighted mean gives an accurate average for a data set while the weighted variance approximates the spread among the data points; estimation with weights is more complicated than simple means or ratios, however, and standard errors are tricky even for a simple weighted mean. Inverse probability weighting is one common use of weights in causal inference, and weighted average costing is used in inventory accounting when items are so intermingled that a specific cost cannot be assigned to an individual unit (for example, a shipment of 10 cases of pencils at 20 cents per case is simply averaged in with the rest of the stock). The weighted standard deviation formula replaces the equal weights of the ordinary formula with weights w_i and measures deviations from the weighted mean. On the Stata side, the total command effectively estimates the mean of the variables under investigation weighted by the number of weighted cases; serrbar plots group means with error bars scaled by a chosen multiple of the standard error; Synth runs on Stata versions 9-15 and uses a platform-dependent plugin; several user-written commands help export well-formatted (for example, LaTeX) tables, though none is a complete solution; some modules require a particular Stata version or plugin, such as the Stata LCA plugin; handouts such as Useful Stata Commands (for Stata versions 13, 14 and 15) collect the most common idioms; and dedicated ado-files exist for calculating weighted averages. The primary difference between a simple, a weighted and an exponential moving average lies in how weights are assigned to past observations. Weighted least squares regression addresses unequal error variances but requires a number of additional assumptions, and weighted Pearson or Spearman correlations are useful when observations should not count equally. Finally, the weighted average cost of capital (WACC) is calculated by weighting the cost of equity and the cost of debt by their respective proportions within the optimal or target capital structure of the company.
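A minimal sketch of a weighted average cost of capital along those lines; all figures, and the inclusion of the usual after-tax adjustment on the cost of debt, are illustrative assumptions:

    # Hypothetical capital structure and costs.
    equity_value = 700_000.0
    debt_value = 300_000.0
    cost_of_equity = 0.10   # 10%
    cost_of_debt = 0.06     # 6% before tax
    tax_rate = 0.25

    total = equity_value + debt_value
    w_equity = equity_value / total
    w_debt = debt_value / total

    # WACC = w_E * r_E + w_D * r_D * (1 - tax rate)
    wacc = w_equity * cost_of_equity + w_debt * cost_of_debt * (1 - tax_rate)
    print(f"WACC = {wacc:.2%}")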
The local weighted mean transformation infers a polynomial at each control point using neighboring control points, and the mapping at any location depends on a weighted average of these polynomials; other schemes weight each component by the square of its mass or volume. The mean or expected value of a discrete random variable X, E(X) = sum over k of x_k p(x_k), is itself a weighted average with the probabilities as weights, and in general a weighted average is simply one in which one or more numbers are given greater significance, or weight. In meta-analysis, effect sizes based on means include the raw (unstandardized) mean difference, the standardized mean difference (d and g) and response ratios; the validity of the findings depends, among other factors, on the completeness of the underlying systematic review, and there are important reasons for carrying out a meta-analysis beyond computing a weighted average and its confidence interval. The weighted average maturity (WAM) of a mortgage-backed security is the average number of months until its loans are paid off. LOWESS (locally weighted scatterplot smoothing, sometimes called LOESS) draws a smooth line through a time plot or scatter plot to reveal the relationship between the variables, and if potential outliers are not investigated and dealt with appropriately they can have a serious negative impact on a weighted least squares analysis. In Stata there is no svy: ttest command, but svy: mean is a true estimation command and allows the test and lincom post-estimation commands; -tsset- declares a data set as time series; the user-written asgen module (available from SSC) computes weighted average means; and asreg, installed with ssc install asreg, was written primarily for rolling, moving or sliding window regressions. A linearly weighted moving average is a moving average in which more recent prices are given greater weight and prior prices less weight, while exponential smoothing is a rule-of-thumb technique that smooths time series data with an exponential window function.
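A short sketch of a simple and a linearly weighted moving average over a small price series (the prices and the window length are invented for illustration):

    import numpy as np

    prices = np.array([10.0, 11.0, 12.5, 12.0, 13.5, 14.0, 13.0])
    window = 3

    # Simple moving average: each of the last `window` prices gets equal weight.
    sma = [prices[i - window + 1:i + 1].mean() for i in range(window - 1, len(prices))]

    # Linearly weighted moving average: the most recent price gets the largest weight.
    w = np.arange(1, window + 1, dtype=float)   # weights 1, 2, 3 (oldest -> newest)
    lwma = [float(np.dot(prices[i - window + 1:i + 1], w) / w.sum())
            for i in range(window - 1, len(prices))]

    print(sma)
    print(lwma)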
X-bar charts are used to monitor the mean of a process from samples taken at given times (hours, shifts, days, weeks or months). In Stata, weighted effect estimates after a model fit can be obtained with the post-estimation command contrast, and cibar shows graphically the differences in the mean of a variable across the categories of other variables; for weighted survey analysis in R, see "Social Science Goes R: Weighted Survey Data". Throughout, the underlying definition is the same: the weighted average of a set of values is the sum of the weights times the values divided by the sum of the weights. One application is the weighted average scoring model, which applies a weight to the matrix questions based on responses to the first item in a side-by-side matrix.
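A minimal sketch of a weighted average scoring model of that kind, where each criterion score is multiplied by an importance weight (the criteria, weights and scores are hypothetical):

    # Hypothetical criteria with importance weights, and one item's scores on a 0-10 scale.
    criteria_weights = {"impact": 0.5, "cost": 0.3, "risk": 0.2}
    item_scores = {"impact": 8.0, "cost": 6.0, "risk": 9.0}

    weighted_score = sum(criteria_weights[c] * item_scores[c] for c in criteria_weights)
    print(weighted_score)   # 0.5*8 + 0.3*6 + 0.2*9 = 7.6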
Real Algebraic and Analytic Geometry. Related manuscripts and search engines for mathematical preprints. Vincent Astier and Thomas Unger: Positive cones on algebras with involution. Vincent Astier, Thomas Unger: Signatures of hermitian forms, positivity, and an answer to a question of Procesi and Schacher. Patrick Speissegger: Quasianalytic Ilyashenko algebras. Malgorzata Czapla, Wieslaw Pawlucki: Michael's Theorem for a Mapping Definable in an O-Minimal Structure on a Set of Dimension 1. Malgorzata Czapla, Wieslaw Pawlucki: Michael's Theorem for Lipschitz Cells in O-minimal Structures. Zofia Ambroży, Wiesław Pawłucki: On Implicit Function Theorem in O-Minimal Structures. Murray Marshall: Application of localization to the multivariate moment problem II. K. Kurdyka, W. Pawlucki: O-minimal version of Whitney's extension theorem. J. William Helton, Igor Klep, Christopher S. Nelson: Noncommutative polynomials nonnegative on a variety intersect a convex set. Alessandro Berarducci, Marcello Mamino: Groups definable in two orthogonal sorts. Hoang Phi Dung: Lojasiewicz-type inequalities for nonsmooth definable functions in o-minimal structures and global error bounds. Clifton F. Ealy, Jana Maříková: Model completeness of o-minimal fields with convex valuations. Alessandro Berarducci, Mário Edmundo, Marcello Mamino: Discrete Subgroups of Locally Definable Groups. Iwona Krzyżanowska and Zbigniew Szafraniec: On polynomial mappings from the plane to the plane. Krzysztof Jan Nowak: A counter-example concerning quantifier elimination in quasianalytic structures. Murray Marshall: Application of localization to the multivariate moment problem. Beata Kocel-Cynk, Wiesław Pawłucki, Anna Valette: A short geometric proof that Hausdorff limits are definable in any o-minimal structure. Krzysztof Jan Nowak: Quasianalytic structures revisited: quantifier elimination, valuation property and rectilinearization of functions. M. Dickmann, A. Petrovich: Real Semigroups, Real Spectra and Quadratic Forms over Rings. Matthias Aschenbrenner, Lou van den Dries, Joris van der Hoeven: Towards a Model Theory for Transseries. Annalisa Conversano, Anand Pillay: On Levi subgroups and the Levi decomposition for groups definable in o-minimal structures. Annalisa Conversano, Anand Pillay: Connected components of definable groups, and o-minimality II. Ehud Hrushovski, Anand Pillay: Affine Nash groups over real closed fields. Annalisa Conversano , Anand Pillay: Connected components of definable groups and o-minimality I. Vincent Astier: Elementary equivalence of lattices of open sets definable in o-minimal expansions of fields. Iwona Krzyzanowska Zbigniew Szafraniec: Polynomial mappings into a Stiefel manifold and immersions. Andreas Fischer: Approximation of o-minimal maps satisfying a Lipschitz condition. Mehdi Ghasemi, Murray Marshall, Sven Wagner: Closure of the cone of sums of 2d powers in certain weighted \ell_1-seminorm topologies. Igor Klep , Markus Schweighofer: Infeasibility certificates for linear matrix inequalities. Janusz Adamus, Serge Randriambololona: Tameness of holomorphic closure dimension in a semialgebraic set. Janusz Adamus, Serge Randriambololona, Rasul Shafikov: Tameness of complex dimension in a real analytic set. Janusz Adamus, Rasul Shafikov: On the holomorphic closure dimension of real analytic sets. Mehdi Ghasemi, Murray Marshall: Lower bounds for polynomials using geometric programming. Abdelhafed Elkhadiri: On connected components of some globally semi-analytic sets. 
Krzysztof Jan Nowak: Supplement to the paper "Quasianalytic perturbation of multi-parameter hyperbolic polynomials and symmetric matrices". J. William Helton, Igor Klep, Scott McCullough: The convex Positivstellensatz in a free algebra. Edoardo Ballico, Riccardo Ghiloni: The principle of moduli flexibility in Real Algebraic Geometry. M. Dickmann, F. Miraglia: Faithfully Quadratic Rings. Timothy Mellor, Marcus Tressl: Non-axiomatizability of real spectra in L∞λ. Nicolas Dutertre: On the topology of semi-algebraic functions on closed semi-algebraic sets. Krzysztof Jan Nowak: On the real algebra of quasianalytic function germs. Riccardo Ghiloni, Alessandro Tancredi: Algebraic models of symmetric Nash sets. Riccardo Ghiloni: On the Complexity of Collaring Theorem in the Lipschitz Category. Tim Netzer, Andreas Thom: Polynomials with and without determinantal representations. Krzysztof Jan Nowak: On the singular locus of sets definable in a quasianalytic structure. Aleksandra Nowel, Zbigniew Szafraniec: On the number of branches of real curve singularities. Elías Baro, Eric Jaligot, Margarita Otero: Commutators in groups definable in o-minimal structures. Andreas Fischer: Recovering o-minimal structures. Murray Marshall, Tim Netzer: Positivstellensätze for real function algebras. Tim Netzer, Andreas Thom: Tracial algebras and an embedding theorem. Krzysztof Jan Nowak: Quasianalytic perturbation of multiparameter hyperbolic polynomials and symmetric matrices. Krzysztof Jan Nowak: The Abhyankar-Jung theorem for excellent henselian subrings of formal power series. J. William Helton, Igor Klep, Scott McCullough: The matricial relaxation of a linear matrix inequality. Elzbieta Sowa: Picard-Vessiot extensions for real fields. Nicolas Dutertre: Euler characteristic and Lipschitz-Killing curvatures of closed semi-algebraic sets. Dang Tuan Hiep: Representations of non-negative polynomials via the critical ideals. Sabine Burgdorf, Igor Klep: The truncated tracial moment problem. Jana Maříková: O-minimal residue fields of o-minimal fields. Dang Tuan Hiep: Representation of non-negative polynomials via the KKT ideals. J. William Helton, Igor Klep, Scott McCullough: Analytic mappings between noncommutative pencil balls. Mehdi Ghasemi, Murray Marshall: Lower bounds for a polynomial in terms of its coefficients. Annalisa Conversano: Lie-like decompositions of groups definable in o-minimal structures. Vincent Astier, Hugo L. Mariano: Realizing profinite reduced special groups. Jean-Philippe Monnier: Very special divisors on real algebraic curves. Małgorzata Czapla: Definable Triangulations with Regularity Conditions. Małgorzata Czapla: Invariance of Regularity Conditions under Definable, Locally Lipschitz, Weakly Bi-Lipschitz Mappings. Tim Netzer: On semidefinite representations of sets. Igor Klep, Markus Schweighofer: Pure states, positive matrix polynomials and sums of hermitian squares. Ikumitsu Nagasaki, Tomohiro Kawakami, Yasuhiro Hara and Fumihiro Ushitaki: Smith homology and Borsuk-Ulam type theorems. Matthias Aschenbrenner, Andreas Fischer: Definable versions of theorems by Kirszbraun and Helly. Jose Capco: Real closed * reduced partially ordered Rings. Andreas Fischer, Murray Marshall: Extending piecewise polynomial functions in two variables. Sabine Burgdorf, Claus Scheiderer, Markus Schweighofer: Pure states, nonnegative polynomials and sums of squares. Doris Augustin: The Membership Problem for quadratic modules. Alessandro Berarducci, Marcello Mamino: Equivariant homotopy of definable groups. 
Andreas Fischer: A strict Positivstellensatz for definable quasianalytic rings. Andreas Fischer: Positivstellensätze for families of definable functions. Jaka Cimprič, Murray Marshall, Tim Netzer: Closures of quadratic modules. Andreas Fischer: Infinite Peano differentiable functions in polynomially bounded o-minimal structures. Nicolas Dutertre: Radial index and Poincaré-Hopf index of 1-forms on semi-analytic sets. Claus Scheiderer: Weighted sums of squares in local rings and their completions, II. Claus Scheiderer: Weighted sums of squares in local rings and their completions, I. Tim Netzer, Daniel Plaumann, Markus Schweighofer: Exposed faces of semidefinitely representable sets. Tim Netzer: Representation and Approximation of Positivity Preservers. Tomohiro kawakami: Locally definable $C^\infty G$ manifold structures of locally definable $C^r G$ manifolds. Tomohiro Kawakami: Locally definable fiber bundles. Andreas Fischer: On smooth locally o-minimal functions. Elías Baro, Margarita Otero: Locally definable homotopy. Andreas Fischer: On compositions of subanalytic functions. Igor Klep, Thomas Unger: The Procesi-Schacher conjecture and Hilbert's 17th problem for algebras with involution. Alessandro Berarducci, Marcello Mamino, Margarita Otero: Higher homotopy of groups definable in o-minimal structures. Artur Piękosz: O-minimal homotopy and generalized (co)homology. János Kollár, Frédéric Mangolte: Cremona transformations and diffeomorphisms of surfaces. J. William Helton, Igor Klep, Scott McCullough, Nick Slinglend: Noncommutative ball maps. Niels Schwartz: SV-Rings and SV-Porings. Niels Schwartz: Real closed valuation rings. Elías Baro, Margarita Otero: On o-minimal homotopy groups. Doris Augustin, Manfred Knebusch: Quadratic Modules in R[[X]]. Jaka Cimprič, Murray Marshall, Tim Netzer: On the real multidimensional rational $K$-moment problem. Iwona Karolkiewicz, Aleksandra Nowel, Zbigniew Szafraniec: An algebraic formula for the intersection number of a polynomial immersion. Tomohiro Kawakami: Relative properties of definable C^\infty manifolds with finite abelian group actions in an o-minimal expansion of R_\exp. Andreas Fischer: Algebraic models for o-minimal manifolds. Johannes Huisman, Frédéric Mangolte: Automorphisms of real rational surfaces and weighted blow-up singularities. Fabrizio Catanese, Frédéric Mangolte: Real singular Del Pezzo surfaces and 3-folds fibred by rational curves, II. Y. Peterzil, S. Starchenko: Mild Manifolds and a Non-Standard Riemann Existence Theorem. Stanisław Łojasiewicz, Maria-Angeles Zurro: Closure theorem for partially semialgebraics. F. Bihan, F. Sottile: Betti number bounds for fewnomial hypersurfaces via stratified Morse Theory. Andreas Fischer: John Functions for $o$-minimal Domains. Marcus Tressl: Bounded super real closed rings. Nicolas Dutertre: On the real Milnor fibre of some maps from R^n to R^2. Roman Wencel: A model theoretic application of Gelfond-Schneider theorem. Andreas Fischer: The Riemann mapping theorem for $o$-minimal functions. Krzysztof Jan Nowak: On two problems concerning quasianalytic Denjoy--Carleman classes. Alessandro Berarducci: Cohomology of groups in o-minimal structures: acyclicity of the infinitesimal subgroup. Krzysztof Jan Nowak: Quantifier elimination, valuation property and preparation theorem in quasianalytic geometry via transformation to normal crossings. Andreas Fischer: Peano differentiable extensions in $o$-minimal structures. 
Benoit Bertrand and Frédéric Bihan: Euler characteristic of real non degenerate tropical complete intersections. Igor Klep, Markus Schweighofer: Sums of hermitian squares and the BMV conjecture. Elías Baro: Normal triangulations in o-minimal structures. V. Grandjean: Tame Functions with strongly isolated singularities at infinity: a tame version of a Parusinski's Theorem. V. Grandjean: On the the total curvatures of a tame function. Johannes Huisman, Frédéric Mangolte: The group of automorphisms of a real rational surface is n-transitive. Margarita Otero, Ya'acov Peterzil: G-linear sets and torsion points in definably compact groups. Andreas Fischer: O-minimal analytic separation of sets in dimension two. Dan Bates, Frédéric Bihan, Frank Sottile: Bounds on the number of real solutions to polynomial equations. Frédéric Bihan, Frank Sottile: Gale Duality for Complete Intersections. Alessandro Berarducci, Antongiulio Fornasiero: O-minimal cohomology: finiteness and invariance results. Fabrizio Catanese, Frédéric Mangolte: Real singular Del Pezzo surfaces and threefolds fibred by rational curves, I. Krzysztof Jan Nowak: Decomposition into special cubes and its applications to quasi-subanalytic geometry. Iwona Karolkiewicz, Aleksandra Nowel, Zbigniew Szafraniec: Immersions of spheres and algebraically constructible functions. Andreas Fischer: Smooth Approximation in O-Minimal Structures. Georges Comte, Yosef Yomdin: Rotation of Trajectories of Lipschitz Vector Fields. R. Rubio, J.M. Serradilla, M.P. Vélez: Detecting real singularities of curves from a rational parametrization. M. Ansola, M.J. de la Puente: Metric invariants of tropical conics and factorization of degree–two homogeneous polynomials in three variables. M. Ansola, M.J. de la Puente: A note on tropical triangles in the plane. Andreas Fischer: Extending O-minimal Fréchet Derivatives. David Trotman, Leslie C. Wilson: (r) does not imply (n) or (npf) for definable sets in non polynomially bounded o-minimal structures. Andreas Fischer: Smooth Approximation of Definable Continuous Functions. Frédéric Bihan, J. Maurice Rojas, Frank Sottile: Sharpness of Fewnomial Bound and the Number of Components of a Fewnomial Hypersurface. Wiesław Pawłucki: Lipschitz Cell Decomposition in O-Minimal Structures. I. Tobias Kaiser, Jean-Philippe Rolin, Patrick Speissegger: Transition maps at non-resonant hyperbolic singularities are o-minimal. David Grimm, Tim Netzer, Markus Schweighofer: A note on the representation of positive polynomials with structured sparsity. Jean-Philippe Monnier: Fixed points of automorphisms of real algebraic curves. Frédéric Bihan, Frank Sottile: New Fewnomial Upper Bounds from Gale Dual Polynomial Systems. Manfred Knebusch: Positivity and convexity in rings of fractions. Michael Barr, John F. Kennison, Robert Raphael: On productively Lindelöf spaces. Lev Birbrair, Alexandre Fernandes: Metric Geometry of Complex Algebraic Surfaces with Isolated Singularities. Marcus Tressl: Heirs of box types in polynomially bounded structures. Igor Klep, Markus Schweighofer: Connes' embedding conjecture and sums of hermitian squares. Andreas Fischer: Definable Smoothing of Lipschitz Continuous Functions. Tim Netzer: An Elementary Proof of Schmüdgen's Theorem on the Moment Problem of Closed Semi-Algebraic Sets. Vincent Grandjean: Triviality at infinity of real 3-space polynomial functions with cone-like ends. Frédéric Bihan, Frédéric Mangolte: Topological types of real regular jacobian elliptic surfaces. 
Nicolas Dutertre: A Gauss-Bonnet formula for closed semi-algebraic sets. Guillaume Valette: Multiplicity mod 2 as a metric invariant. R. Raphael, R. Grant Woods: When the Hewitt realcompactification and the P-coreflection commute. Mouadh Akriche, Frédéric Mangolte: Nombres de Betti des surfaces elliptiques réelles. José F. Fernando, Jesús M. Ruiz, Claus Scheiderer: Sums of squares of linear forms. Nicolas Dutertre: Semi-algebraic neighborhoods of closed semi-algebraic sets. A. Dolich, Patrick Speissegger: An ordered structure of rank two related to Dulac's Problem. Krzysztof Jan Nowak: On the Euler characteristic of the links of a set determined by smooth definable functions. Carlos Andradas, M. P. Vélez: On the non reduced order spectrum of real curves: some examples and remarks. W.D. Burgess, R. Raphael: Compactifications, C(X) and ring epimorphisms. Ahmed Srhir: Algèbre $p$-adique et ses applications en géométries algébrique et analytique $p$-adiques. José F. Fernando: On the Positive Extension Property and Hilbert's 17th Problem for Real Analytic Sets. Pantelis Eleftheriou, Sergei Starchenko: Groups definable in ordered vector spaces over ordered division rings. M. Dickmann, F. Miraglia: Marshall's and Milnor's Conjectures for Preordered von Neumann Regular Rings. Luis Felipe Tabera: Tropical plane geometric constructions. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the finiteness of Pythagoras numbers of real meromorphic functions. Markus Schweighofer: Global optimization of polynomials using gradient tentacles and sums of squares. Andreas Fischer: Zero-Set Property of O-Minimal Indefinitely Peano Differentiable Functions. Roman Wencel: Weakly o-minimal non-valuational structures. Roman Wencel: Topological properties of sets definable in weakly o-minimal structures. W. Kucharz, K. Kurdyka: Stiefel-Whitney classes for coherent real analytic sheaves. W. Kucharz, K. Kurdyka: Algebraicity of global real analytic hypersurfaces. J. Bochnak, W. Kucharz: On successive minima of indefinite quadratic forms. Marcus Tressl: Super real closed rings. Andreas Fischer: Differentiability of Peano derivatives. Paweł Goldstein: Gradient flow of a harmonic function in R3. Jiawang Nie, Markus Schweighofer: On the complexity of Putinar's Positivstellensatz. J. Bochnak, W. Kucharz: Real algebraic morphisms represent few homotopy classes. Frédéric Bihan: Polynomial systems supported on circuits and dessins d'enfants. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando: On a Global Analytic Positivstellensatz. Fabrizio Broglia, Federica Pieroni: On the Real Nullstellensatz for Global Analytic Functions. Krzysztof Jan Nowak: A proof of the valuation property and preparation theorem. N. J. Fine, L. Gillman, J. Lambek: Rings of Quotients of Rings of Functions. Jana Maříková: Geometric Properties of Semilinear and Semibounded Sets. Igor Klep, Markus Schweighofer: A Nichtnegativstellensatz for polynomials in noncommuting variables. Benoit Bertrand, Frédéric Bihan, Frank Sottile: Polynomial systems with few real zeroes. Jean-Marie Lion, Patrick Speissegger: The Theorem of the Complement for nested Sub-Pfaffian Sets. Lev Birbrair, João Costa, Alexandre Fernandes, Maria Ruas: K-bi-Lipschitz Equivalence of Real Function Germs. Lev Birbrair: Lipschitz Geometry of Curves and Surfaces Definable in O-Minimal Structures. D. D'Acunto, K. Kurdyka: Effective Łojasiewicz gradient inequality for polynomials. 
Riccardo Ghiloni: Globalization and compactness of McCrory-Parusiński conditions. Proceedings of the RAAG Summer School Lisbon 2003: O-minimal Structures. Federica Pieroni: Sums of squares in quasianalytic Denjoy-Carleman classes. Federica Pieroni: Artin-Lang property for quasianalytic rings. Jean Philippe Monnier: Clifford Theorem for real algebraic curves. A. J. Wilkie: Lectures on "An o-minimal version of Gromov's Algebraic Reparameterization Lemma with a diophantine application". Vincent Astier: Elementary equivalence of some rings of definable functions. Benoit Bertrand: Asymptotically maximal families of hypersurfaces in toric varieties. E. Bujalance, F. J. Cirre, J. M. Gamboa, G. Gromadzki: On the number of ovals of a symmetry of a compact Riemann surface. Didier D'Acunto, Vincent Grandjean: A Gradient Inequality at infinity for tame functions. Mário J. Edmundo: On torsion points of locally definable groups in o-minimal structures. Johannes Huisman, Frédéric Mangolte: Every connected sum of lens spaces is a real component of a uniruled algebraic variety. José F. Fernando: On the Hilbert's 17th Problem for global analytic functions on dimension 3. Margarita Otero: On divisibility in definable groups. L. Alberti, G. Comte, B. Mourrain: Meshing implicit algebraic surfaces: the smooth case. Jonathan Kirby, Boris Zilber: The uniform Schanuel conjecture over the real numbers. Ma.Emilia Alonso, Dan Haran: Covers of Klein Surfaces. Ya'acov Peterzil, Anand Pillay: Generic sets in definably compact groups. Nicolas Dutertre: Curvature integrals on the real Milnor fibre. Guillaume Valette: Volume, Density And Whitney Conditions. Andreas Bernig: Gromov-Hausdorff limits in definable families. Ya'acov Peterzil, Sergei Starchenko: Complex analytic geometry and analytic-geometric categories. M. Coste, T. Lajous, H. Lombardi, M-F. Roy: Generalized Budan-Fourier theorem and virtual roots. Louis Mahé: On the Pierce-Birkhoff Conjecture in three variables. Olivier Macé, Louis Mahé: Sommes de trois carrés de fractions en deux variables. Markus Schweighofer: Certificates for nonnegativity of polynomials with zeros on compact semialgebraic sets. Igor Klep, Dejan Velušček: $n$-real valuations and the higher level version of the Krull-Baer theorem. A. J. Wilkie: Covering definable open sets by open cells. Mário J. Edmundo, Margarita Otero: Definably compact abelian groups. Luis Felipe Tabera: Tropical constructive Pappus' theorem. Niels Schwartz: About Schmüdgen's Theorem. Claus Scheiderer: Moment problem and complexity. Claus Scheiderer: Distinguished representations of non-negative polynomials. Claus Scheiderer: Sums of squares on real algebraic surfaces. Ángel L. Pérez del Pozo: Automorphism groups of compact bordered Klein surfaces with invariant subsets. Andreas Bernig: Support functions, projections and Minkowski addition of Legendrian cycles. Adam Dzedzej, Zbigniew Szafraniec: On families of trajectories of an analytic gradient vector field. Riccardo Ghiloni: Rigidity and Moduli Space in Real Algebraic Geometry. Serge Randriambololona: O-minimal structures: low arity versus generation. Andreas Fischer: Singularities of o-minimal Peano derivatives. Sérgio Alvarez, Lev Birbrair, João Costa, Alexandre Fernandes: Topological K-Equivalence of Analytic Function-Germs. L. Birbrair, A.G. Fernandes: Horn Exponents of Real Quasihomogeneous and Semi-Quasihomogeneous Surfaces. R. Raphael, R.G. Woods: On RG-Spaces and the Regularity Degree. 
Salma Kuhlmann, Saharon Shelah: κ-bounded Exponential-Logarithmic Power Series Fields. Daniel Richardson: Near Integral Points of Sets Definable in O-Minimal Structures. Wiesław Pawłucki: A linear extension operator for Whitney fields on closed o-minimal sets. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the Pythagoras number of real analytic surfaces. Bruce Reznick: On the absence of uniform denominators in Hilbert's 17th problem. Victoria Powers, Bruce Reznick: Polynomials positive on unbounded rectangles. D. Biljakovic, M. Kochetov, S. Kuhlmann: Primes and Irreducibles in Truncation Integer Parts of Real Closed Fields. Michael Barr, John F. Kennison, Robert Raphael: Searching For Absolute CR-Epic Spaces. José F. Fernando, José M. Gamboa: Polynomial and regular images of Rn. Victoria Powers, Bruce Reznick, Claus Scheiderer, Frank Sottile: A New Proof of Hilbert's Theorem on Ternary Quartics. Markus Schweighofer: Iterated rings of bounded elements: Erratum. Ya'acov Peterzil, Sergei Starchenko: Complex-Like Analysis in O-Minimal Structures. Alessandro Berarducci, Margarita Otero, Ya'acov Peterzil, Anand Pillay: A descending chain condition for groups definable in o-minimal structures. A. J. Wilkie: Fusing o-minimal structures. Ángel L. Pérez del Pozo: Gap sequences on Klein surfaces. Andreas Fischer: Definable Λ p -regular cell decomposition. Vincent Grandjean: On the Limit set at infinity of gradient of semialgebraic function. Riccardo Ghiloni: On the space of morphisms into generic real algebraic varieties. Mário J. Edmundo, Gareth O. Jones, Nicholas J. Peatfield: Sheaf cohomology in o-minimal structures. Didier D'Acunto, Vincent Grandjean: On gradient at infinity of real polynomials. Andreas Bernig, Alexander Lytchak: Tangent spaces and Gromov-Hausdorff limits of subanalytic spaces. W. Charles Holland, Salma Kuhlmann, Stephen H.McCleary: Lexicographic Exponentiation of Chains. Digen Zhang: A note on Prüfer extensions. Matthias Aschenbrenner, Lou van den Dries: Asymptotic Differential Algebra. Digen Zhang: Prüfer hulls of commutative rings. Digen Zhang: An elementary proof that the length of X14+X24+X34+X44 is 4. Manfred Knebusch, Digen Zhang: Convexity, valuations and Prüfer extensions in real algebra. Tobias Kaiser: Dirichlet-regularity in arbitrary $o$-minimal structures on the field IR up to dimension 4. Tobias Kaiser: Capacity in subanalytic geometry. Tobias Kaiser: Dirichlet-regularity in polynomially bounded $o$-minimal structures on IR. Andreas Bernig: Curvature tensors of singular spaces. Aleksandra Nowel: Topological invariants of analytic sets associated with Noetherian families. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the Hilbert 17th Problem for Global Analytic Functions. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the Pythagoras Numbers of Real Analytic Curves. Mário J. Edmundo: Covers of groups definable in o-minimal structures. Vincent Astier: On some sheaves of special groups. M. Hrušák, R.Raphael, R.G.Woods: On a class of pseudocompact spaces derived from ring epimorphisms. Riccardo Ghiloni: Explicit Equations and Bounds for the Nakai--Nishimura--Dubois--Efroymson Dimension Theorem. W. Domitrz, S. Janeczko, M. Zhitomirskii: Relative Poincare Lemma, Contractibility, Quasi-Homogeneity and Vector Fields Tangent to a Singular Variety. Jean-Yves Welschinger: Spinor states of real rational curves in real algebraic convex $3$-manifolds and enumerative invariants. 
Timothy Mellor: Imaginaries in Real Closed Valued Fields II. Timothy Mellor: Imaginaries in Real Closed Valued Fields I. Andreas Bernig: Curvature bounds on subanalytic spaces. Marcus Tressl: Computation of the z-radical in C(X). M. Dickmann, F. Miraglia: Algebraic K-theory of Special Groups. Jean-Philippe Monnier: On real generalized Jacobian varieties. Frédéric Mangolte: Real algebraic morphisms on 2-dimensional conic bundles. Ludwig Bröcker: Charakterisierung algebraischer Kurven in der komplexen projektiven Ebene. Alessandro Berarducci, Margarita Otero: An additive measure in o-minimal expansions of fields. Riccardo Ghiloni: Second Order Homological Obstructions and Global Sullivan-type Conditions on Real Algebraic Varieties. Michael Barr, R. Raphael, R.G. Woods: On CR-epic embeddings and absolute CR-epic spaces. Matthias Aschenbrenner, Lou van den Dries, Joris van der Hoeven: Differentially Algebraic Gaps. Aleksandra Nowel, Zbigniew Szafraniec: On trajectories of analytic gradient vector fields on analytic manifolds. Wiesław Pawłucki: On the algebra of functions Ck-extendable for each k finite. Igor Klep: A Kadison-Dubois representation for associative rings. Jaka Cimprič, Igor Klep: Orderings of Higher Level and Rings Of Fractions. José F. Fernando: Sums of squares in excellent henselian local rings. Salma Kuhlmann, Murray Marshall, Niels Schwartz: Positivity, sums of squares and the multi-dimensional moment problem II. Markus Schweighofer: Optimization of polynomials on compact semialgebraic sets. Alessandro Berarducci, Tamara Servi: An effective version of Wilkie's theorem of the complement and some effective o-minimality results. Mário J. Edmundo, Arthur Woerheide: Comparation theorems for o-minimal singular (co)homology. Carlos Andradas, Antonio Díaz-Cano: Some properties of global semianalytic subsets of coherent surfaces. Ludwig Bröcker: Euler integration and Euler multiplication. Artur Piękosz: K-subanalytic rectilinearization and uniformization. Hélène Pennaneac'h: Virtual and non virtual algebraic Betti numbers. José F. Fernando, Jesús M. Ruiz: On the Pythagoras numbers of real analytic set germs. Daniel Richardson, Ahmed El-Sonbaty: Counterexamples to the Uniformity Conjecture. Johannes Huisman: Real line arrangements and fundamental groups. Lou van den Dries, Patrick Speissegger: O-minimal preparation theorems. Murray Marshall: Optimization of polynomial functions. Salma Kuhlmann, Murray Marshall: Positivity, sums of squares and the multi-dimensional moment problem. Murray Marshall: Approximating positive polynomials using sums of squares. Murray Marshall: *-orderings and *-valuations on algebras of finite Gelfand-Kirillov dimension. Frédéric Chazal, Rémi Soufflet: Stability and finiteness properties of Medial Axis and Skeleton. Didier D'Acunto, Krzysztof Kurdyka: Geodesic diameter of compact real algebraic hypersurfaces. Jean-Yves Welschinger: Invariants of real symplectic 4-manifolds and lower bounds in real enumerative geometry. J. Huisman, M. Lattarulo: Imaginary automorphisms on real hyperelliptic curves. Johannes Huisman, Frédéric Mangolte: Every orientable Seifert 3-manifold is a real component of a uniruled algebraic variety. J. Bochnak, W. Kucharz: Analytic Cycles on Real Analytic Manifolds. Didier D'Acunto: Sur la topologie des fibres d'une fonction définissable dans une structure o-minimale. Henri Lombardi: Constructions cachées en algèbre abstraite (5). Principe local-global de Pfister et variantes. F. Chazal, J-M. 
Lion: Volumes transverses aux feuilletages definissables dans des structures o-minimales. Jean-Marie Lion, Patrick Speissegger: A geometric proof of the definability of Hausdorff limits. Jean-Philippe Monnier: Divisors on real curves. Artur Piękosz: Extending analytic $K$-subanalytic functions. Marcus Tressl: Pseudo Completions and Completion in Stages of o-minimal Structures. A. Díaz-Cano: Orderings and maximal ideals of rings of analytic functions. Mário J. Edmundo: O-minimal (co)homology and applications. Mário J. Edmundo: O-minimal cohomology with definably compact supports. Mário J. Edmundo: O-minimal cohomology and definably compact definable groups. Matthias Aschenbrenner, Lou van den Dries: Liouville Closed H-Fields. C. Andradas, A. Díaz-Cano: Closed stability index of excellent henselian local rings. José F. Fernando, Jesús M. Ruiz, Claus Scheiderer: Sums of squares in real rings. Michel Coste, Jesús M. Ruiz, Masahiro Shiota: Global Problems on Nash Functions. F. Acquistapace, F. Broglia, M. Shiota: The finiteness property and Łojasiewicz inequality for global semianalytic sets. F. Broglia, F. Pieroni: Separation of global semianalytic subsets of 2-dimensional analytic manifolds. F. Acquistapace, A. Díaz-Cano: Divisors in Global Analytic Sets. Johannes Huisman: Real hypersurfaces having many pseudo-hyperplanes. Vincent Astier, Marcus Tressl: Axiomatization of local-global principles for pp-formulas in spaces of orderings. Claus Scheiderer: Sums of squares on real algebraic curves. Maria Jesus de la Puente: Real Plane Algebraic Curves. C. Andradas, R. Rubio, M.P. Vélez: An Algorithm for convexity of semilinear sets over ordered fields. Zbigniew Szafraniec: Topological invariants of real Milnor fibres. Markus Schweighofer: On the complexity of Schmüdgen's Positivstellensatz. Didier D'Acunto, Krzysztof Kurdyka: Bounds for Gradient Trajectories of Definable Functions with Applications to Robotics and Semialgebraic Geometry. Z. Jelonek, K. Kurdyka: Quantitative generalized Bertini-Sard Theorem for smooth affine Varieties. J. Bochnak, W. Kucharz: On approximation of smooth submanifolds by nonsingular real algebraic subvarieties. J. Bochnak, W. Kucharz: A topological proof of the Grothendieck formula in real algebraic geometry. Georges Comte, Yosef Yomdin: Book: Tame Geometry with Applications in Smooth Analysis. José F. Fernando, J. M. Gamboa: Polynomial images of R^n. José F. Fernando: Analytic surface germs with minimal Pythagoras number. Markus Schweighofer: Iterated rings of bounded elements and generalizations of Schmüdgen's Positivstellensatz. Marcus Tressl: Valuation theoretic content of the Marker-Steinhorn Theorem. WWW Server: School of Mathematics, University of Manchester, UK.
\begin{document} \title[ Cluster structures and Belavin-Drinfeld classification] {Cluster structures on simple complex Lie groups and Belavin-Drinfeld classification} \author{M. Gekhtman} \address{Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556} \email{[email protected]} \author{M. Shapiro} \address{Department of Mathematics, Michigan State University, East Lansing, MI 48823} \email{[email protected]} \author{A. Vainshtein} \address{Department of Mathematics \& Department of Computer Science, University of Haifa, Haifa, Mount Carmel 31905, Israel} \email{[email protected]} \begin{abstract} We study natural cluster structures in the rings of regular functions on simple complex Lie groups and Poisson-Lie structures compatible with these cluster structures. According to our main conjecture, each class in the Belavin-Drinfeld classification of Poisson-Lie structures on ${\mathcal G}$ corresponds to a cluster structure in ${\mathcal O}({\mathcal G})$. We prove a reduction theorem explaining how different parts of the conjecture are related to each other. The conjecture is established for $SL_n$, $n<5$, and for any ${\mathcal G}$ in the case of the standard Poisson-Lie structure. \end{abstract} \subjclass[2000]{53D17, 13F60} \keywords{Poisson-Lie group, cluster algebra, Belavin-Drinfeld triple} \maketitle \section{Introduction} Since the invention of cluster algebras in 2001, a large part of research in the field has been devoted to uncovering cluster structures in rings of regular functions on various algebraic varieties arising in algebraic geometry, representation theory, and mathematical physics. Once the existence of such a structure was established, abstract features of cluster algebras were used to study geometric properties of underlying objects. Research in this direction led to many exciting results \cite{SSVZ, FoGo1, FoGo2}. It also created an impression that, given an algebraic variety, there is a unique (if at all) natural cluster structure associated with it. The main goal of the current paper is to establish the following phenomenon: in certain situations, the same ring may have \emph{multiple\/} natural cluster structures. More exactly, we engage into a systematic study of multiple cluster structures in the rings of regular functions on simple Lie groups (in what follows we will shorten that to {\it cluster structures on simple Lie groups\/}). Consistent with the philosophy advocated in \cite{GSV1,GSV2, GSV3, GSV4, GSV5, GSVb}, we will focus on compatible Poisson structures on the Lie groups, that is, on compatible Poisson-Lie structures. The notion of a Poisson bracket compatible with a cluster structure was introduced in \cite{GSV1}. It was used there to interpret cluster transformations and matrix mutations from a viewpoint of Poisson geometry. In addition, it was shown that if a Poisson algebraic variety $\left ( \mathcal{M}, {\{\cdot,\cdot\}}\right )$ possesses a coordinate chart that consists of regular functions whose logarithms have pairwise constant Poisson brackets, then one can use this chart to define a cluster structure ${\mathcal C}_{\mathcal M}$ compatible with ${\{\cdot,\cdot\}}$. Algebraic structures corresponding to ${\mathcal C}_{\mathcal M}$ (the cluster algebra and the upper cluster algebra) are closely related to the ring ${\mathcal O}({\mathcal M})$ of regular functions on $\mathcal{M}$. 
More precisely, under certain rather mild conditions, ${\mathcal O}({\mathcal M})$ can be obtained by tensoring one of these algebras by ${\mathbb C}$. This construction was applied in \cite[Ch.~4.3]{GSVb} to double Bruhat cells in semisimple Lie groups equipped with (the restriction of) the {\em standard\/} Poisson-Lie structure. It was shown that the resulting cluster structure coincides with the one built in \cite{CAIII}. Recall that it was proved in \cite{CAIII} that the corresponding upper cluster algebra coincides with the ring of regular functions on the double Bruhat cell. Since the open double Bruhat cell is dense in the corresponding Lie group, the corresponding fields of rational functions coincide, thus allowing to equip the field of rational functions on the Lie group with the same cluster structure. Moreover, we show below that the upper cluster algebra coincides with the ring of regular functions on the Lie group. The standard Poisson-Lie structure is a particular case of Poisson-Lie structures corresponding to quasi-triangular Lie bialgebras. Such structures are associated with solutions to the classical Yang-Baxter equation (CYBE). Their complete classification was obtained by Belavin and Drinfeld in \cite{BD}. We conjecture that any such solution gives rise to a compatible cluster structure on the Lie group, and that the properties of this structure are similar to those mentioned above. The detailed formulation of our conjectures requires some preliminary work; it is given in Section~\ref{SecMC} below. In Section~\ref{reduction} we study interrelations between the different parts of the conjecture. Currently, we have several examples supporting our conjecture: it holds for the class of the standard Poisson-Lie structure in any simple complex Lie group, and for the whole Belavin-Drinfeld classification in $SL_n$ for $n=2,3,4$. These results are described in Sections~\ref{Secgen} and~\ref{Sec34}, respectively. In Section~\ref{SecTri} we discuss the case of Poisson-Lie structures beyond those associated with solutions to CYBE. \section{Cluster structures and compatible Poisson brackets} \label{SecPrel} \subsection{} We start with the basics on cluster algebras of geometric type. The definition that we present below is not the most general one, see, e.g., \cite{FZ2, CAIII} for a detailed exposition. In what follows, we will use a notation $[i,j]$ for an interval $\{i, i+1, \ldots , j\}$ in $\mathbb{N}$ and we will denote $[1, n]$ by $[n]$. The {\em coefficient group\/} ${\mathcal P}$ is a free multiplicative abelian group of finite rank $m$ with generators $g_1,\dots, g_m$. An {\em ambient field\/} is the field ${\mathcal F}$ of rational functions in $n$ independent variables with coefficients in the field of fractions of the integer group ring ${\mathbb Z}{\mathcal P}={\mathbb Z}[g_1^{\pm1},\dots,g_m^{\pm1}]$ (here we write $x^{\pm1}$ instead of $x,x^{-1}$). A {\em seed\/} (of {\em geometric type\/}) in ${\mathcal F}$ is a pair $\Sigma=({\bf x},\widetilde{B})$, where ${\bf x}=(x_1,\dots,x_n)$ is a transcendence basis of ${\mathcal F}$ over the field of fractions of ${\mathbb Z}{\mathcal P}$ and $\widetilde{B}$ is an $n\times(n+m)$ integer matrix whose principal part $B={\widetilde{B}}([n],[n])$ is skew-symmetrizable (here and in what follows, we denote by $A(I,J)$ a submatrix of a matrix $A$ with a row set $I$ and a column set $J$). Matrices $B$ and ${\widetilde{B}}$ are called the {\it exchange matrix\/} and the {\it extended exchange matrix}, respectively. 
In this paper, we will only deal with the case when the exchange matrix is skew-symmetric. The $n$-tuple ${\bf x}$ is called a {\em cluster\/}, and its elements $x_1,\dots,x_n$ are called {\em cluster variables\/}. Denote $x_{n+i}=g_i$ for $i\in [m]$. We say that $\widetilde{{\bf x}}=(x_1,\dots,x_{n+m})$ is an {\em extended cluster\/}, and $x_{n+1},\dots,x_{n+m}$ are {\em stable variables\/}. It is convenient to think of ${\mathcal F}$ as of the field of rational functions in $n+m$ independent variables with rational coefficients. Given a seed as above, the {\em adjacent cluster\/} in direction $k\in [n]$ is defined by $$ {\bf x}_k=({\bf x}\setminus\{x_k\})\cup\{x'_k\}, $$ where the new cluster variable $x'_k$ is given by the {\em exchange relation} \begin{equation}\label{exchange} x_kx'_k=\prod_{\substack{1\le i\le n+m\\ b_{ki}>0}}x_i^{b_{ki}}+ \prod_{\substack{1\le i\le n+m\\ b_{ki}<0}}x_i^{-b_{ki}}; \end{equation} here, as usual, the product over the empty set is assumed to be equal to~$1$. We say that ${\widetilde{B}}'$ is obtained from ${\widetilde{B}}$ by a {\em matrix mutation\/} in direction $k$ and write ${\widetilde{B}}'=\mu_k({\widetilde{B}})$ if \[ b'_{ij}=\begin{cases} -b_{ij}, & \text{if $i=k$ or $j=k$;}\\ b_{ij}+\displaystyle\frac{|b_{ik}|b_{kj}+b_{ik}|b_{kj}|}2, &\text{otherwise.} \end{cases} \] It can be easily verified that $\mu_k(\mu_k({\widetilde{B}}))={\widetilde{B}}$. Given a seed $\Sigma=({\bf x},\widetilde{B})$, we say that a seed $\Sigma'=({\bf x}',\widetilde{B}')$ is {\em adjacent\/} to $\Sigma$ (in direction $k$) if ${\bf x}'$ is adjacent to ${\bf x}$ in direction $k$ and $\widetilde{B}'= \mu_k(\widetilde{B})$. Two seeds are {\em mutation equivalent\/} if they can be connected by a sequence of pairwise adjacent seeds. The set of all seeds mutation equivalent to $\Sigma$ is called the {\it cluster structure\/} (of geometric type) in ${\mathcal F}$ associated with $\Sigma$ and denoted by ${\mathcal C}(\Sigma)$; in what follows, we usually write ${\mathcal C}({\widetilde{B}})$, or even just ${\mathcal C}$ instead. Following \cite{FZ2, CAIII}, we associate with ${\mathcal C}({\widetilde{B}})$ two algebras of rank $n$ over the {\it ground ring\/} ${\mathbb A}$, ${\mathbb Z}\subseteq{\mathbb A} \subseteq{\mathbb Z}{\mathcal P}$: the {\em cluster algebra\/} ${\mathcal A}={\mathcal A}({\mathcal C})={\mathcal A}({\widetilde{B}})$, which is the ${\mathbb A}$-subalgebra of ${\mathcal F}$ generated by all cluster variables in all seeds in ${\mathcal C}({\widetilde{B}})$, and the {\it upper cluster algebra\/} $\overline{\A}=\overline{\A}({\mathcal C})=\overline{\A}({\widetilde{B}})$, which is the intersection of the rings of Laurent polynomials over ${\mathbb A}$ in cluster variables taken over all seeds in ${\mathcal C}({\widetilde{B}})$. The famous {\it Laurent phenomenon\/} \cite{FZ3} claims the inclusion ${\mathcal A}({\mathcal C})\subseteq\overline{\A}({\mathcal C})$. The natural choice of the ground ring for the geometric type is the polynomial ring in stable variables ${\mathbb A}={\mathbb Z}{\mathcal P}_+={\mathbb Z}[x_{n+1},\dots,x_{n+m}]$; this choice is assumed unless explicitly stated otherwise. Let $V$ be a quasi-affine variety over ${\mathbb C}$, ${\mathbb C}(V)$ be the field of rational functions on $V$, and ${\mathcal O}(V)$ be the ring of regular functions on $V$. Let ${\mathcal C}$ be a cluster structure in ${\mathcal F}$ as above. Assume that $\{f_1,\dots,f_{n+m}\}$ is a transcendence basis of ${\mathbb C}(V)$. 
Then the map $\varphi: x_i\mapsto f_i$, $1\le i\le n+m$, can be extended to a field isomorphism $\varphi: {\mathcal F}_{\mathbb C}\to {\mathbb C}(V)$, where ${\mathcal F}_{\mathbb C}={\mathcal F}\otimes{\mathbb C}$ is obtained from ${\mathcal F}$ by extension of scalars. The pair $({\mathcal C},\varphi)$ is called a cluster structure {\it in\/} ${\mathbb C}(V)$ (or just a cluster structure {\it on\/} $V$), $\{f_1,\dots,f_{n+m}\}$ is called an extended cluster in $({\mathcal C},\varphi)$. Sometimes we omit direct indication of $\varphi$ and say that ${\mathcal C}$ is a cluster structure on $V$. A cluster structure $({\mathcal C},\varphi)$ is called {\it regular\/} if $\varphi(x)$ is a regular function for any cluster variable $x$. The two algebras defined above have their counterparts in ${\mathcal F}_{\mathbb C}$ obtained by extension of scalars; they are denoted ${\mathcal A}_{\mathbb C}$ and $\overline{\A}_{\mathbb C}$. If, moreover, the field isomorphism $\varphi$ can be restricted to an isomorphism of ${\mathcal A}_{\mathbb C}$ (or $\overline{\A}_{\mathbb C}$) and ${\mathcal O}(V)$, we say that ${\mathcal A}_{\mathbb C}$ (or $\overline{\A}_{\mathbb C}$) is {\it naturally isomorphic\/} to ${\mathcal O}(V)$. The following statement is a weaker analog of Proposition~3.37 in \cite{GSVb}. \begin{proposition}\label{regfun} Let $V$ be a Zariski open subset in ${\mathbb C}^{n+m}$ and $({\mathcal C}={\mathcal C}({\widetilde{B}}),\varphi)$ be a cluster structure in ${\mathbb C}(V)$ with $n$ cluster and $m$ stable variables such that {\rm(i)} $\operatorname{rank}{\widetilde{B}}=n$; {\rm(ii)} there exists an extended cluster ${\widetilde{\bf x}}=(x_1,\dots,x_{n+m})$ in ${\mathcal C}$ such that $\varphi(x_i)$ is regular on $V$ for $i\in [n+m]$; {\rm(iii)} for any cluster variable $x_k'$, $k\in [n]$, obtained via the exchange relation~\eqref{exchange} applied to ${\widetilde{\bf x}}$, $\varphi(x_k')$ is regular on $V$. {\rm(iv)} for any stable variable $x_{n+i}$, $i\in [m]$, $\varphi(x_{n+i})$ vanishes at some point of $V$; {\rm(v)} each regular function on $V$ belongs to $\varphi(\overline{\A}_{\mathbb C}({\mathcal C}))$. Then $\overline{\A}_{\mathbb C}({\mathcal C})$ is naturally isomorphic to ${\mathcal O}(V)$. \end{proposition} \subsection{} Let ${\{\cdot,\cdot\}}$ be a Poisson bracket on the ambient field ${\mathcal F}$, and ${\mathcal C}$ be a cluster structure in ${\mathcal F}$. We say that the bracket and the cluster structure are {\em compatible\/} if, for any extended cluster $\widetilde{{\bf x}}=(x_1,\dots,x_{n+m})$, one has \begin{equation}\label{cpt} \{x_i,x_j\}=\omega_{ij} x_ix_j, \end{equation} where $\omega_{ij}\in{\mathbb Z}$ are constants for all $i,j\in[n+m]$. The matrix $\Omega^{\widetilde {\bf x}}=(\omega_{ij})$ is called the {\it coefficient matrix\/} of ${\{\cdot,\cdot\}}$ (in the basis $\widetilde {\bf x}$); clearly, $\Omega^{\widetilde {\bf x}}$ is skew-symmetric. A complete characterization of Poisson brackets compatible with a given cluster structure ${\mathcal C}={\mathcal C}({\widetilde{B}})$ in the case $\operatorname{rank}{\widetilde{B}}=n$ is given in \cite{GSV1}, see also \cite[Ch.~4]{GSVb}. In particular, the following statement is an immediate corollary of Theorem~1.4 in \cite{GSV1}. 
\begin{proposition}\label{Bomega} Let $\operatorname{rank} {\widetilde{B}}=n$, then a Poisson bracket is compatible with ${\mathcal C}({\widetilde{B}})$ if and only if its coefficient matrix $\Omega^{\widetilde{\bf x}}$ satisfies ${\widetilde{B}}\Omega^{{\widetilde{\bf x}}}=(D\; 0)$, where $D$ is a diagonal matrix. \end{proposition} Clearly, the notion of compatibility and the result stated above extend to Poisson brackets on ${\mathcal F}_{\mathbb C}$ without any changes. A different description of compatible Poisson brackets on ${\mathcal F}_{\mathbb C}$ is based on the notion of a toric action. Fix an arbitrary extended cluster ${\widetilde{\bf x}}=(x_1,\dots,x_{n+m})$ and define a {\it local toric action\/} of rank $r$ as the map ${\mathcal T}^W_{\mathbf d}:{\mathcal F}_{\mathbb C}\to {\mathcal F}_{\mathbb C}$ given on the generators of ${\mathcal F}_{\mathbb C}={\mathbb C}(x_1,\dots,x_{n+m})$ by the formula \begin{equation} {\mathcal T}^W_{\mathbf d}({\widetilde{\bf x}})=\left ( x_i \prod_{\alpha=1}^r d_\alpha^{w_{i\alpha}}\right )_{i=1}^{n+m},\qquad \mathbf d=(d_1,\dots,d_r)\in ({\mathbb C}^*)^r, \label{toricact} \end{equation} where $W=(w_{i\alpha})$ is an integer $(n+m)\times r$ {\it weight matrix\/} of full rank, and extended naturally to the whole ${\mathcal F}_{\mathbb C}$. Let ${\widetilde{\bf x}}'$ be another extended cluster, then the corresponding local toric action defined by the weight matrix $W'$ is {\it compatible\/} with the local toric action \eqref{toricact} if the following diagram is commutative for any fixed $\mathbf d\in ({\mathbb C}^*)^r$: $$ \begin{CD} {\mathcal F}_{\mathbb C}={\mathbb C}({\widetilde{\bf x}}) @>>> {\mathcal F}_{\mathbb C}={\mathbb C}({\widetilde{\bf x}}')\\ @V{\mathcal T}^W_{\mathbf d} VV @VV {\mathcal T}^{W'}_{\mathbf d}V\\ {\mathcal F}_{\mathbb C}={\mathbb C}({\widetilde{\bf x}}) @>>> {\mathcal F}_{\mathbb C}={\mathbb C}({\widetilde{\bf x}}') \end{CD} $$ (here the horizontal arrows are induced by $x_i\mapsto x'_i$ for $1\le i\le n+m$). If local toric actions at all clusters are compatible, they define a {\it global toric action\/} ${\mathcal T}_{\mathbf d}$ on ${\mathcal F}_{\mathbb C}$ called the extension of the local toric action \eqref{toricact}. Lemma~2.3 in \cite{GSV1} claims that \eqref{toricact} extends to a unique global action of $({\mathbb C}^*)^r$ if and only if ${\widetilde{B}} W = 0$. Therefore, if $\operatorname{rank}{\widetilde{B}}=n$, then the maximal possible rank of a global toric action equals $m$. Any global toric action can be obtained from a toric action of the maximal rank by setting some of $d_i$'s equal to~$1$. A description of Poisson brackets on ${\mathcal F}_{\mathbb C}$ compatible with a cluster structure ${\mathcal C}={\mathcal C}({\widetilde{B}})$ based on the notion of the global toric action is suggested in \cite{GSSV}. Given a Poisson bracket ${\{\cdot,\cdot\}}_0$ on ${\mathcal F}_{\mathbb C}$ compatible with ${\mathcal C}$, one can obtain all other compatible brackets as follows. Assume that $(\mathbb{C}^*)^m$ is equipped with a Poisson structure given by \begin{equation}\label{torbra} \{d_i, d_j\}_V = v_{ij} d_i d_j, \end{equation} where $V=(v_{ij})$ is a skew-symmetric matrix. 
\begin{proposition} For any $V$, there exists a Poisson structure ${\{\cdot,\cdot\}}_V^{\mathcal C}$ compatible with ${\mathcal C}$ such that the map $\left ((\mathbb{C}^*)^m \times {\mathcal F}_{\mathbb C} , {\{\cdot,\cdot\}}_V \times {\{\cdot,\cdot\}}_0 \right )\to \left ( {\mathcal F}_{\mathbb C}, {\{\cdot,\cdot\}}_V^{\mathcal C}\right )$ extended from the action $(\mathbf d,{\widetilde{\bf x}}) \mapsto {\mathcal T}_{\mathbf d}({\widetilde{\bf x}})$ is Poisson. Moreover, every compatible Poisson bracket on ${\mathcal F}_{\mathbb C}$ is a scalar multiple of ${\{\cdot,\cdot\}}_V^{\mathcal C}$ for some $V$. \label{allcompat} \end{proposition} \section{Poisson-Lie groups and the main conjecture} \label{SecMC} \subsection{} Let ${\mathcal G}$ be a Lie group equipped with a Poisson bracket ${\{\cdot,\cdot\}}$. ${\mathcal G}$ is called a {\em Poisson-Lie group\/} if the multiplication map $$ {\mathcal G}\times {\mathcal G} \ni (x,y) \mapsto x y \in {\mathcal G} $$ is Poisson. Perhaps the most important class of Poisson-Lie groups is the one associated with classical R-matrices. Let $\mathfrak g$ be the Lie algebra of ${\mathcal G}$ equipped with a nondegenerate invariant bilinear form $(\ ,\ )$, and let $\mathfrak{t}\in \mathfrak g\otimes\mathfrak g$ be the corresponding Casimir element. For an arbitrary element $r=\sum_i a_i\otimes b_i\in\mathfrak g\otimes\mathfrak g$ denote \[ [[r,r]]=\sum_{i,j} [a_i,a_j]\otimes b_i\otimes b_j+\sum_{i,j} a_i\otimes [b_i,a_j]\otimes b_j+ \sum_{i,j} a_i\otimes a_j\otimes [ b_i,b_j] \] and $r^{21}=\sum_i b_i\otimes a_i$. A {\em classical R-matrix} is an element $r\in \mathfrak g\otimes\mathfrak g$ that satisfies {\em the classical Yang-Baxter equation (CYBE)} \begin{equation} [[r, r]] =0 \label{CYBE} \end{equation} together with the condition \begin{equation} r + r^{21} = \mathfrak{t}. \label{rskew} \end{equation} Given a solution $r$ to \eqref{CYBE}, one can construct explicitly the Poisson-Lie bracket on the Lie group ${\mathcal G}$. Choose a basis $\{I_i\}$ in $\mathfrak g$, and let $\partial^R_i$ and $\partial^L_i$ be the right and the left invariant vector fields on ${\mathcal G}$ whose values at the unit element equal $I_i$. Write $r$ as $r=\sum_{i,j} r_{ij}I_i\otimes I_j$; then the Poisson-Lie bracket on ${\mathcal G}$ is given by \begin{equation}\label{sklya} \{f_1,f_2\}=\sum_{i,j}r_{ij}\left(\partial^R_i f_1\partial^R_j f_2- \partial^L_i f_1\partial^L_j f_2\right), \end{equation} see \cite[Proposition 4.1.4]{KoSo}. This bracket is called the {\it Sklyanin bracket\/} corresponding to $r$. The classification of classical R-matrices for simple complex Lie groups was given by Belavin and Drinfeld in \cite{BD}. Let ${\mathcal G}$ be a simple complex Lie group, $\mathfrak g$ be the corresponding Lie algebra, $\mathfrak h$ be its Cartan subalgebra, $\Phi$ be the root system associated with $\mathfrak g$, $\Phi^+$ be the set of positive roots, and $\Delta\subset \Phi^+$ be the set of positive simple roots. A {\em Belavin-Drinfeld triple} $T=(\Gamma_1,\Gamma_2, \gamma)$ consists of two subsets $\Gamma_1,\Gamma_2$ of $\Delta$ and an isometry $\gamma:\Gamma_1\to\Gamma_2$ that is nilpotent in the following sense: for every $\alpha \in \Gamma_1$ there exists $m\in\mathbb{N}$ such that $\gamma^j(\alpha)\in \Gamma_1$ for $j=0,\ldots,m-1$, but $\gamma^m(\alpha)\notin \Gamma_1$. The isometry $\gamma$ extends in a natural way to a map between the root systems generated by $\Gamma_1$ and $\Gamma_2$. 
This allows one to define a partial ordering on $\Phi$: $\alpha \prec_T \beta$ if $\beta=\gamma^j(\alpha)$ for some $j\in \mathbb{N}$. Select root vectors $e_\alpha \in\mathfrak g$ satisfying $(e_{-\alpha},e_\alpha)=1$. According to the Belavin-Drinfeld classification, the following is true (see, e.g., \cite[Chap.~3]{CP}). \begin{proposition}\label{bdclass} {\rm(i)} Every classical R-matrix is equivalent {\rm(}up to an action of $\sigma\otimes\sigma$, where $\sigma$ is an automorphism of $\mathfrak g$\/{\rm)} to the one of the form \begin{equation} \label{rBD} r= r_0 + \sum_{\alpha\in \Phi^+} e_{-\alpha}\otimes e_\alpha + \sum_{\stackrel{\alpha \prec_T \beta}{\alpha,\beta\in\Phi^+}} e_{-\alpha}\wedge e_\beta. \end{equation} {\rm(ii)} $r_0\in \mathfrak h\otimes\mathfrak h$ in \eqref{rBD} satisfies \begin{equation} (\gamma(\alpha)\otimes {\operatorname {Id}} )r_0 + ({\operatorname {Id}}\otimes \alpha )r_0 = 0 \label{r01} \end{equation} for any $\alpha\in\Gamma_1$ and \begin{equation} r_0 + r_0^{21} = \mathfrak{t}_0, \label{r02} \end{equation} where $\mathfrak{t}_0$ is the $\mathfrak h\otimes\mathfrak h$-component of $\mathfrak{t}$. {\rm(iii)} Solutions $r_0$ to \eqref{r01}, \eqref{r02} form a linear space of dimension $\frac{k_T(k_T-1)}{2}$, where $k_T= | \Delta \setminus \Gamma_1 |$; more precisely, define \begin{equation} \mathfrak h_T=\{ h\in\mathfrak h \ : \ \alpha(h)=\beta(h)\ \mbox{if}\ \alpha\prec_T\beta\}, \label{htau} \end{equation} then $\operatorname{dim}\mathfrak h_T=k_T$, and if $r_0'$ is a fixed solution of \eqref{r01}, \eqref{r02}, then every other solution has a form $r_0=r_0' + s$, where $s$ is an arbitrary element of $\mathfrak h_T\wedge\mathfrak h_T$. \end{proposition} We say that two classical R-matrices that have a form \eqref{rBD} belong to the same {\em Belavin-Drinfeld class\/} if they are associated with the same Belavin-Drinfeld triple. \subsection{} Let ${\mathcal G}$ be a simple complex Lie group. Given a Belavin-Drinfeld triple $T$ for ${\mathcal G}$, define the torus $\mathcal H_T=\exp \mathfrak h_T\subset{\mathcal G}$. We conjecture that there exists a classification of regular cluster structures on ${\mathcal G}$ that is completely parallel to the Belavin-Drinfeld classification. \begin{conjecture} \label{ulti} Let ${\mathcal G}$ be a simple complex Lie group. For any Belavin-Drinfeld triple $T=(\Gamma_1,\Gamma_2,\gamma)$ there exists a cluster structure $({\mathcal C}_T,\varphi_T)$ on ${\mathcal G}$ such that {\rm (i)} the number of stable variables is $2k_T$, and the corresponding extended exchange matrix has a full rank; {\rm (ii)} $({\mathcal C}_T,\varphi_T)$ is regular, and the corresponding upper cluster algebra $\overline{\A}_{\mathbb C}({\mathcal C}_T)$ is naturally isomorphic to ${\mathcal O}({\mathcal G})$; {\rm (iii)} the global toric action of $(\mathbb{C}^*)^{2k_T}$ on ${\mathbb C}({\mathcal G})$ is generated by the action of $\mathcal H_T\times \mathcal H_T$ on ${\mathcal G}$ given by $(H_1, H_2)(X) = H_1 X H_2$; {\rm (iv)} for any solution of CYBE that belongs to the Belavin-Drinfeld class specified by $T$, the corresponding Sklyanin bracket is compatible with ${\mathcal C}_T$; {\rm (v)} a Poisson-Lie bracket on ${\mathcal G}$ is compatible with ${\mathcal C}_T$ only if it is a scalar multiple of the Sklyanin bracket associated with a solution of CYBE that belongs to the Belavin-Drinfeld class specified by $T$. \end{conjecture} \begin{remark}\label{tam} Let us explain the meaning of assertion (iii) of Conjecture \ref{ulti} in more detail. 
For any $H\in\mathcal H$ and any weight $\omega\in\mathfrak h^*$ put $H^\omega=e^{\omega(h)}$ whenever $H=\exp h$. Let $({\widetilde{\bf x}}, {\widetilde{B}})$ be a seed in ${\mathcal C}_T$, and $y_i=\varphi(x_i)$ for $i\in [n+m]$. Then (iii) is equivalent to the following: 1) for any $H_1, H_2 \in \mathcal H_T$ and any $X\in{\mathcal G}$, $$ y_i(H_1 X H_2)= H_1^{\eta_i} H_2^{\zeta_i} y_i(X) $$ for some weights $\eta_i,\zeta_i \in \mathfrak h_T^*$ ($i\in[n+m]$); 2) $\mbox{span}\{\eta_i\}_{i=1}^{\operatorname{dim}{\mathcal G}} = \mbox{span}\{\zeta_i\}_{i=1}^{\operatorname{dim}{\mathcal G}}=\mathfrak h^*_T$; 3) for every $i\in [\operatorname{dim}{\mathcal G}-2k_T]$, $$ \sum_{j=1}^{\operatorname{dim}{\mathcal G}} b_{ij}\eta_j = \sum_{j=1}^{\operatorname{dim}{\mathcal G}} b_{ij}\zeta_j = 0. $$ \end{remark} \section{Towards a proof of the main conjecture}\label{reduction} The goal of this section is to prove \begin{theorem} \label{partial} Let $T=(\Gamma_1, \Gamma_2,\gamma)$ be a Belavin-Drinfeld triple and $({\mathcal C}_T,\varphi_T)$ be a cluster structure on ${\mathcal G}$. Suppose that assertions {\rm(i)} and {\rm(iii)} of Conjecture {\rm\ref{ulti}} are valid and that assertion {\rm(iv)} is valid for one particular R-matrix in the Belavin-Drinfeld class specified by $T$. Then {\rm(iv)} and {\rm(v)} are valid for the whole Belavin-Drinfeld class specified by $T$. \end{theorem} \begin{proof} We start with the following auxiliary statement. \begin{lemma} \label{Adr} Any R-matrix from the Belavin-Drinfeld class specified by $T$ is invariant under the adjoint action of $\mathcal H_T$. \end{lemma} \begin{proof} Fix an arbitrary $H\in\mathcal H_T$. The term $r_0$ in (\ref{rBD}) is clearly fixed by $\operatorname{Ad}_{H}\otimes \operatorname{Ad}_{H}$. Furthermore, \[ \operatorname{Ad}_{H}\otimes \operatorname{Ad}_{H} (e_{-\alpha} \otimes e_{\alpha}) = H^{-\alpha} e_{-\alpha}\otimes H^\alpha e_{\alpha}=e_{-\alpha} \otimes e_{\alpha}. \] Besides, for $\alpha \prec_T \beta$, \[ \operatorname{Ad}_{H}\otimes \operatorname{Ad}_{H} (e_{-\alpha}\wedge e_\beta) = H^{\beta-\alpha} e_{-\alpha}\wedge e_\beta= e_{-\alpha}\wedge e_\beta, \] since $\beta-\alpha$ annihilates $\mathfrak h_T$. \end{proof} Our plan is to invoke the construction used in Proposition~\ref{allcompat}, so the first goal is to define a Poisson structure on the torus $\mathcal H_T\times \mathcal H_T$ satisfying~\eqref{torbra}. Let $V_1, V_2: \mathfrak h^*_T \to \mathfrak h_T$ be two linear skew-symmetric maps, that is, $\langle \eta, V_i \zeta \rangle = - \langle \zeta, V_i \eta \rangle$ for any $\eta,\zeta\in\mathfrak h_T^*$ and $i=1,2$, where $\langle\cdot,\cdot\rangle$ stands for the natural coupling between $\mathfrak h^*_T$ and $\mathfrak h_T$. Besides, let $V_{12}$ be an arbitrary linear map $\mathfrak h^*_T \to \mathfrak h_T$. Put $$ V=\left ( \begin{array}{cc} V_1 & V_{12}\\ -V_{12}^* & V_2 \end{array} \right ) : \mathfrak h^*_T \oplus \mathfrak h^*_T \to \mathfrak h_T\oplus \mathfrak h_T. $$ Then one can define a Poisson structure ${\{\cdot,\cdot\}}_V$ on $\mathcal H_T\times\mathcal H_T$ by the formula \begin{equation} \label{HHbrack} \{\varphi_1,\varphi_2\}_V = \langle V D \varphi_1, D \varphi_2 \rangle, \end{equation} where the differential $D \varphi$ is given by $$ \langle D \varphi (H_1, H_2), \eta \oplus \zeta\rangle = \left.\frac{d}{dt}\right\vert_{t=0} \left ( \varphi(e^{t\eta}H_1, H_2e^{t\zeta})\right ). 
$$ In particular, the Poisson bracket of ``monomial'' functions on $\mathcal H_T\times\mathcal H_T$ is given by \begin{multline*} \left \{ H_1^{\eta_1} H_2^{\eta_2} , H_1^{\zeta_1} H_2^{\zeta_2} \right \}_V =\\ \left ( \langle V_1\eta_1, \zeta_1 \rangle + \langle V_2\eta_2, \zeta_2 \rangle + \langle V_{12} \eta_1, \zeta_2 \rangle + \langle V_{12}\eta_2, \zeta_1 \rangle \right ) H_1^{\eta_1} H_2^{\eta_2} H_1^{\zeta_1} H_2^{\zeta_2}. \end{multline*} By choosing appropriate $\eta_1$, $\eta_2$, $\zeta_1$, $\zeta_2$ in the above relation we make certain that ${\{\cdot,\cdot\}}_V$ satisfies~\eqref{torbra}. Fix an R-matrix $r$ in the Belavin-Drinfeld class specified by $T$ and denote by ${\{\cdot,\cdot\}}_r$ the corresponding Sklyanin bracket. It will be convenient to rewrite formula (\ref{sklya}) for ${\{\cdot,\cdot\}}_r$ as \[ \{f_1,f_2\}_r = \langle R(d^R f_1), d^R f_2 \rangle - \langle R(d^L f_1), d^L f_2 \rangle, \] where $R: \mathfrak g^* \to \mathfrak g$ is given by $\langle R\eta, \zeta \rangle = \langle r, \eta\otimes\zeta \rangle$. We will view ${\mathcal M}=\mathcal H_T\times\mathcal H_T\times{\mathcal G}$ as a direct product of Poisson manifolds $\left ( \mathcal H_T\times\mathcal H_T,{\{\cdot,\cdot\}}_V \right )$ and $\left({\mathcal G},{\{\cdot,\cdot\}}_r \right )$. Consider the map $\mu:{\mathcal M}\ni (H_1, H_2, X) \mapsto H_1X H_2\in{\mathcal G}$. \begin{lemma} \label{rSbrack} {\rm(i)} The map $\mu$ induces a Poisson bracket ${\{\cdot,\cdot\}}_{r,V}$ on ${\mathcal G}$ given by $$ \{f_1, f_2\}_{r,V}=\{f_1, f_2\}_r+\left \langle V \left((d^R f_1 )_0\oplus (d^L f_1)_0\right), (d^R f_2)_0\oplus (d^L f_2)_0\right \rangle, $$ where $(\cdot)_0$ stands for the projection onto $\mathfrak h^*$. {\rm(ii)} The bracket ${\{\cdot,\cdot\}}_{r,V}$ is Poisson-Lie if and only if $V_{12}=0$ and $V_2=-V_1$. \end{lemma} \begin{proof} Let $f$ be a function on ${\mathcal G}$. For any fixed $(H_1,H_2)\in \mathcal H_T\times \mathcal H_T$ define the function $f^{H_1,H_2}$ on ${\mathcal G}$ via $f^{H_1,H_2}(X)=f\circ\mu(H_1,H_2,X)$. Similarly, for any fixed $X\in {\mathcal G}$ define the function $f^X$ on $\mathcal H_T\times \mathcal H_T$ via $f^X(H_1,H_2)= f\circ\mu(H_1,H_2,X)$. Given two functions $f_1,f_2$ on ${\mathcal G}$, let us compute the following Poisson bracket on ${\mathcal M}$: \begin{multline}\label{rS} \{f_1\circ\mu, f_2\circ\mu\}(H_1,H_2,X) =\\ \{f_1^{H_1, H_2}, f_2^{H_1, H_2}\}_r (X) + \{f_1^X, f_2^X\}_V (H_1, H_2). \end{multline} First observe that for a function $f$ on ${\mathcal G}$, $$ d^L f^{H_1, H_2}(X) = \operatorname{Ad}^*_{H_1} d^L f(H_1X H_2), \qquad d^R f^{H_1, H_2}(X) = \operatorname{Ad}^*_{H_2^{-1}} d^R f(H_1X H_2). $$ Since, by Lemma \ref{Adr}, $\operatorname{Ad}_H \circ R\circ \operatorname{Ad}^*_H = R$ for any $H\in \mathcal H_T$, this means that the first term on the right-hand side of \eqref{rS} is equal to $\{f_1, f_2\}_r(H_1X H_2)$. To compute the second term in \eqref{rS}, note that $$ D f^X( H_1, H_2) = \left(d^R f(H_1X H_2)\right )_0\oplus \left(d^L f(H_1X H_2)\right )_0. $$ Then it follows from \eqref{HHbrack} and \eqref{rS} that $$ \{f_1\circ\mu, f_2\circ\mu\} =\left (\{f_1, f_2\}_r+\left \langle V \left((d^R f_1)_0\oplus (d^L f_1)_0\right), (d^R f_2)_0\oplus (d^L f_2)_0\right \rangle\right )\circ\mu, $$ which proves the first claim of the lemma. Conditions on $V$ that ensure that ${\{\cdot,\cdot\}}_{r,V}$ is Poisson-Lie drop out immediately from the fact that any Poisson-Lie structure is trivial at the identity of ${\mathcal G}$. 
\end{proof} We can now proceed with the proof of the theorem. Assertion (i) guarantees that the toric action mentioned in assertion (iii) has the maximal rank. Assume that assertion (iii) of Conjecture~\ref{ulti} is valid. Then claims 1) and 2) of Remark~\ref{tam}, together with Lemma \ref{rSbrack}(i) and Proposition \ref{allcompat} imply that if ${\{\cdot,\cdot\}}_r$ is compatible with the cluster structure ${\mathcal C}_T$, then every other compatible structure is of the form ${\{\cdot,\cdot\}}_{r,V}$ for some choice of $V$. Since $\varphi_T({\widetilde{\bf x}})$ defines a coordinate chart on ${\mathcal G}$, we conclude that any Poisson bracket on ${\mathcal G}$ compatible with ${\mathcal C}_T$ is, in fact, a scalar multiple of ${\{\cdot,\cdot\}}_{r,V}$. Moreover, by Lemma \ref{rSbrack}(ii), ${\{\cdot,\cdot\}}_{r,V}$ is Poisson-Lie if and only if it can be written in the form (\ref{sklya}) with $r$ replaced by $r+s$, where $s$ is an arbitrary element of $\mathfrak h_T\wedge\mathfrak h_T$. But this is exactly the description of the Belavin-Drinfeld class specified by $T$. The proof is complete. \end{proof} \section{Evidence supporting the conjecture} Here we discuss several instances in which Conjecture \ref{ulti} has been verified. \subsection{The trivial Belavin-Drinfeld data}\label{Secgen} The Belavin-Drinfeld data (triple, class) is said to be {\it trivial\/} if $\Gamma_1=\Gamma_2=\varnothing$. In this case we use subscript $0$ instead of $T$, so $k_0=|\Delta|$ is the rank of ${\mathcal G}$ and $\mathcal H_0=\mathcal H$ is the Cartan subgroup in ${\mathcal G}$. \begin{theorem} \label{triv} Let ${\mathcal G}$ be a simple complex Lie group of rank $n$, then there exists a cluster structure $({\mathcal C}_0,\varphi_0)$ on ${\mathcal G}$ such that {\rm (i)} the number of stable variables is $2n$, and the corresponding extended exchange matrix has full rank; {\rm (ii)} $({\mathcal C}_0,\varphi_0)$ is regular, and the corresponding upper cluster algebra $\overline{\A}_{\mathbb C}({\mathcal C}_0)$ is naturally isomorphic to ${\mathcal O}({\mathcal G})$; {\rm (iii)} the global toric action of $(\mathbb{C}^*)^{2n}$ on ${\mathbb C}({\mathcal G})$ is generated by the action of $\mathcal H\times \mathcal H$ on ${\mathcal G}$ given by $(H_1, H_2)(X) = H_1 X H_2$; {\rm (iv)} for any solution of CYBE that belongs to the trivial Belavin-Drinfeld class, the corresponding Sklyanin bracket is compatible with ${\mathcal C}_0$; {\rm (v)} a Poisson-Lie bracket on ${\mathcal G}$ is compatible with ${\mathcal C}_0$ only if it is a scalar multiple of the Sklyanin bracket associated with a solution of CYBE that belongs to the trivial Belavin-Drinfeld class. \end{theorem} \begin{proof} By Theorem~\ref{partial}, we have to prove assertions (i)--(iii) and exhibit one bracket satisfying (iv). As we have mentioned in the Introduction, paper \cite{CAIII} suggests a construction of a cluster structure on the double Bruhat cell ${\mathcal G}^{u,v}$ for an arbitrary pair of elements $u,v$ in the Weyl group $W$ of ${\mathcal G}$. Let $u=v=w_0$ be the longest element of $W$, then the corresponding double Bruhat cell is open and dense in ${\mathcal G}$, and hence the construction in \cite{CAIII} gives rise to a cluster structure on ${\mathcal G}$. We claim that this cluster structure, which we denote by $({\mathcal C}_0,\varphi_0)$, satisfies all conditions of the theorem. We start with a brief review of the construction in \cite{CAIII}. 
First, following \cite{FZBruhat}, let us recall the definition of {\em generalized minors\/} in ${\mathcal G}$. Let $\mathcal N_+$ and $\mathcal N_-$ be the upper and the lower maximal unipotent subgroups of ${\mathcal G}$. For every $X$ in a Zariski open dense subset \[ {\mathcal G}^0=\mathcal N_- \mathcal H \mathcal N_+ \] of ${\mathcal G}$ there exists a unique {\em Gauss factorization} $$ X = X_- X_0 X_+, \quad X_+ \in \mathcal N_+, \ X_- \in \mathcal N_-, \ X_0 \in \mathcal H. $$ For any $X\in {\mathcal G}^0$ and a fundamental weight $\omega_i\in \mathfrak h^*$ define \[ \Delta_i(X)=X_0^{\omega_i}; \] this function can be extended to a regular function on the whole group ${\mathcal G}$. For any pair $u,v\in W$, the corresponding {\em generalized minor\/} is a regular function on ${\mathcal G}$ given by \begin{equation}\label{genminor} \Delta_{u\omega_i,v\omega_i}(X)=\Delta_i( u^{-1}X v). \end{equation} These functions depend only on the weights $u \omega_i$ and $v\omega_i$, and do not depend on the particular choice of $u$ and $v$. The initial cluster for $({\mathcal C}_0,\varphi_0)$ can be chosen as a certain collection of generalized minors, described as follows. Consider two reduced words for $w_0$, one written in the alphabet $1, \ldots, n$ and the other in the alphabet $-1, \ldots, -n$. Let ${\bf i}= (i_l)$ be a word of length $2 l(w_0) + n = \operatorname{dim} {\mathcal G}$ defined as a shuffle of the two reduced words above prepended with the string $(-n, \ldots, -1)$ on the left. For $k\in [2 l(w_0)]$ denote \[ u_{\le k}=u_{\le k}({\bf i})=\prod_{l=1,\dots,k} s_{|i_l|}^{\frac{1-{\operatorname{sign}}(i_l)}2},\qquad v_{>k}=v_{>k}({\bf i})=\prod_{l=2 l(w_0),\dots, k+1} s_{|i_l|}^{\frac{1+{\operatorname{sign}}(i_l)}2}. \] Besides, for $k\in -[n]$ set $u_{\le k}$ to be the identity and $v_{>k}$ to be equal to $w_0$. For $k\in -[n] \cup [2 l(w_0)]$ put \[ \Delta(k;{\bf i})=\Delta_{u_{\le k}\omega_{|i_{ k}|},v_{> k}\omega_{|i_{ k}|}}, \] where the right-hand side is the generalized minor defined by~\eqref{genminor}. Then $\tilde {\bf x}= \left (x_{k,{\bf i}}=\Delta(k;{\bf i})\ : \ k\in -[n] \cup [2 l(w_0)] \right )$ is an extended cluster in $({\mathcal C}_0,\varphi_0)$ with $2 n = 2 \operatorname{dim} \mathcal H$ stable variables given by $ \Delta(k;{\bf i})$, $k\in - [ n]$, and $ \Delta(k_j;{\bf i})$, $j\in [n]$, where $k_j \in [2 l(w_0)]$ is the largest index such that $|i_{k_j}|=j$. The matrix $\tilde B=(b_{ij})$ for the seed associated with $\tilde {\bf x}$ can be described explicitly in terms of the word ${\bf i}$; however, we will not need this description here. By Proposition 2.6 in \cite{CAIII}, it has full rank. So, assertion (i) of the theorem is proved. To prove assertion (ii), observe that ${\mathcal O}({\mathcal G}^{w_0,w_0})$ is obtained from ${\mathcal O}({\mathcal G})$ via localization at the stable variables $\Delta(k;{\bf i})$ and $\Delta(k_j;{\bf i})$ defined above. Besides, by Theorem 2.10 in \cite{CAIII}, ${\mathcal O}({\mathcal G}^{w_0,w_0})$ is naturally isomorphic to the upper cluster algebra $\overline{\A}({\mathcal C}_0)$ over ${\mathbb C}{\mathcal P}$, where ${\mathcal P}$ is generated by Laurent monomials in these stable variables. The latter is obtained via localization at the same stable variables from the upper cluster algebra $\overline{\A}_{\mathbb C}({\mathcal C}_0)$, and hence (ii) follows. To establish (iii), we need to check claims 1)--3) of Remark~\ref{tam}. Let $H_1, H_2$ be two elements in $\mathcal H$. 
We want to compute the local toric action ${\mathcal T}^W_{H_1,H_2}$ on $\tilde {\bf x}$ generated by the action of $\mathcal H\times\mathcal H$ on ${\mathcal G}$. Clearly, $\Delta_i(H_1 X H_2)=(H_1 X_0 H_2)^{\omega_i}$, hence \begin{multline*} \Delta_{u\omega_i,v\omega_i}(H_1 X H_2)=\Delta_i\left ( (u^{-1}H_1 u) (u^{-1} X v) (v^{-1}H_2 v)\right )\\ = (u^{-1}H_1 u)^{\omega_i} (v^{-1}H_2 v)^{\omega_i} \Delta_{u\omega_i,v\omega_i}(X) = H_1^{u\omega_i} H_2^{v\omega_i} \Delta_{u\omega_i,v\omega_i}(X). \end{multline*} Thus, $$ {\mathcal T}^W_{H_1,H_2}({\widetilde{\bf x}})=\left ( x_{k,{\bf i}} H_1^{u_{\le k}\omega_{|i_k|}} H_2^{v_{>k}\omega_{|i_k|}}\right )_{k \in -[n] \cup [1, 2 l(w_0)]}, $$ where rows of $W$ are given by components of weights $u_{\le k}\omega_{|i_k|} ,\ v_{>k}\omega_{|i_k|}$ with respect to some basis in $\mathfrak h^*$, which amounts to claim 1). Claim 2) follows from the fact that, for $k \in -[n]$, exponents of $H_1$ and $H_2$ in the formula above are $\omega_1,\ldots,\omega_n$ and $w_0\omega_1,\ldots,w_0\omega_n$, respectively, and each of these two collections spans $\mathfrak h^*$. Finally claim 3) that guarantees that ${\mathcal T}^W_{H_1,H_2}$ extends to a global toric action amounts to equations $$ \sum_{k \in -[n] \cup [1, 2 l(w_0)]} b_{lk} u_{\le k}\omega_{|i_k|}= \sum_{k \in -[n] \cup [1, 2 l(w_0)]} b_{lk} v_{> k}\omega_{|i_k|} = 0 $$ for $l \in [1, 2 l(w_0)], l \ne k_j\ (j=1,\ldots,n)$. But this is precisely the statement of Lemma 4.22 in \cite{GSVb}, which proves (iii). It remains to exhibit a Poisson structure on ${\mathcal G}$ corresponding to the trivial Belavin-Drinfeld data and compatible with ${\mathcal C}_0$. An immediate modification of Theorem~4.18 in \cite{GSVb} shows that the standard Poisson-Lie structure on ${\mathcal G}$ satisfies these requirements. \end{proof} \subsection{The case of $SL_n$ for $n=2,3,4$}\label{Sec34} In this Section we prove the following result. \begin{theorem} Conjecture~{\rm\ref{ulti}} holds for complex Lie groups $SL_2$, $SL_3$ and $SL_4$. \end{theorem} \begin{proof} The case of $SL_2$ is completely covered by Theorem~\ref{triv}, since in this case $\Delta$ contains only one element, and hence the only Belavin-Drinfeld triple is the trivial one. Before we move on to the case of $SL_3$, consider the following two isomorphisms of the Belavin-Drinfeld data for $SL_n$ (here $n$ is arbitrary): the first one transposes $\Gamma_1$ and $\Gamma_2$ and reverses the direction of $\gamma$, while the second one takes each root $\alpha_j$ to $\alpha_{w_0(j)}$. Clearly, these isomorphisms correspond to the automorphisms of $SL_n$ given by $X\mapsto -X^t$ and $X\mapsto w_0Xw_0$. Since we consider R-matrices up to an action of $\sigma\otimes\sigma$, in what follows we do not distinguish between Belavin-Drinfeld triples obtained one from the other via the above isomorphisms. In the case of $SL_3$ we have $\Delta=\{\alpha_1,\alpha_2\}$, and hence, up to an isomorphism, there is only one non-trivial Belavin-Drinfeld triple: $T=(\Gamma_1=\{\alpha_2\}, \Gamma_2=\{\alpha_1\}, \gamma:\alpha_2\mapsto\alpha_1)$. In this case $k_T=1$, and hence, by Proposition~\ref{bdclass}(iii), the corresponding Belavin-Drinfeld class contains a unique R-matrix. It is called the {\em Cremmer-Gervais R-matrix\/}, and the solution to~\eqref{r01},~\eqref{r02} is given by $$ r_0 -\frac12 \mathfrak{t}_0= \frac 1 6 \left (e_{11}\wedge e_{33} - e_{11}\wedge e_{22} - e_{22}\wedge e_{33}\right ) $$ (see e.g.~\cite{GeGi}). 
To prove Conjecture~\ref{ulti} in this case, we once again rely on Theorem~\ref{partial}. Let us define the cluster structure $({\mathcal C}_{\text{CG}},\varphi_{\text{CG}})$ validating assertion (i) of the conjecture. Since $\operatorname{dim}{\mathcal G}=8$, the extended exchange matrix ${\widetilde{B}}_{\text{CG}}$ should have 6 rows and 8 columns. Put \[ {\widetilde{B}}_{\text{CG}}=\left(\begin{array}{rrrrrrrr} 0 & -1 & -1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & -1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & -1 & -1 & 0 \\ -1 & 1 & 0 & 0 & 1 & 1 & 0 & -1 \\ 0 & 0 & -1 & -1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 & -1 & 0 & 0 & 0 \end{array}\right). \] It is easy to check that $\operatorname{rank} {\widetilde{B}}_{\text{CG}}=6$. So, to establish (i) it remains to define the field isomorphism $\varphi_{\text{CG}}$. Let $X=(x_{ij})_{i,j=1}^3$ be a matrix in $SL_3$, $\widehat{X}=(\hat x_{ij})_{i,j=1}^3$ be its adjugate matrix given by $\widehat{X}=X^{-1}\det X$. Given the initial extended cluster $(x_1,\dots,x_8)$, denote $P_i=\varphi_{\text{CG}}(x_i)$ and put \begin{align}\label{formaple} P_1=x_{11},\qquad P_2=x_{13}&, \qquad P_3=x_{21}, \notag\\ P_4=-\hat{x}_{23}, \qquad P_5=-\hat{x}_{31}&, \qquad P_6=-\hat{x}_{33},\\ P_7=x_{13}x_{31}-x_{21}x_{23}, \qquad &P_8=\hat x_{13}\hat x_{31}-\hat x_{21}\hat x_{23}. \notag \end{align} A direct computation shows that gradients of $P_i$, $1\le i\le 8$, are linearly independent at a generic point of $SL_3$, hence $(P_1,\dots,P_8)$ form a transcendence basis of ${\mathbb C}(SL_3)$, and assertion (i) is established. The proof of assertion (ii) relies on Proposition~\ref{regfun}. Since $SL_n$ (and, in particular, $SL_3$) is not a Zariski open subset of ${\mathbb C}^k$, we extend the cluster structure $({\mathcal C}_{\text{CG}},\varphi_{\text{CG}})$ to a cluster structure $({\mathcal C}_{\text{CG}}',\varphi_{\text{CG}}')$ on $GL_3$ by appending the column $(0, 0, 0, 0, -1, 1)^t$ on the right of the matrix ${\widetilde{B}}_{\text{CG}}$ and adding the function $P_9=\det X$ to the initial cluster. Conditions (i), (ii) and (iv) of Proposition~\ref{regfun} for $({\mathcal C}_{\text{CG}}',\varphi_{\text{CG}}')$ are clearly true, and condition (iii) is verified by direct computation. The ring of regular functions on $GL_3$ is generated by the matrix entries $x_{ij}$. By Theorem~3.21 in \cite{GSVb}, condition (i) implies that the upper cluster algebra coincides with the intersection of rings of Laurent polynomials in cluster variables taken over the initial cluster and all its adjacent clusters. Therefore, to check condition (v) of Proposition~\ref{regfun}, it suffices to check that every matrix entry can be written as a Laurent polynomial in each of the seven clusters mentioned above. This fact is verified by direct computation with Maple: we solve system of equations~\eqref{formaple}, as well as six similar systems, with respect to $x_i$. Since $GL_3$ is Zariski open in ${\mathbb C}^9$, $\overline{\A}_{\mathbb C}({\mathcal C}_{\text{CG}}')$ is naturally isomorphic to ${\mathcal O}(GL_3)$ by Proposition~\ref{regfun}. Now assertion (ii) for $({\mathcal C}_{\text{CG}},\varphi_{\text{CG}})$ follows from the fact that both $\overline{\A}_{\mathbb C}({\mathcal C}_{\text{CG}})$ and ${\mathcal O}(SL_3)$ are obtained from their $GL_3$-counterparts via restriction to $\det X=1$. To prove assertion (iii), we parametrize the left and the right action of $\mathcal H_{\text{CG}}$ by $\operatorname{diag}(t,1,t^{-1})$ and $\operatorname{diag}(z,1,z^{-1})$, respectively. 
Then \begin{multline*} \left(\begin{array}{ccc} t & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & t^{-1} \end{array}\right) \left(\begin{array}{ccc} x_{11} & x_{12}& x_{13}\\ x_{21}& x_{22}& x_{23}\\ x_{31}& x_{32} & x_{33} \end{array}\right) \left(\begin{array}{ccc} z & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & z^{-1} \end{array}\right)=\\ \left(\begin{array}{ccc} tzx_{11} & tx_{12}& tz^{-1}x_{13}\\ zx_{21}& x_{22}& z^{-1}x_{23}\\ t^{-1}z x_{31}& t^{-1}x_{32} & t^{-1}z^{-1}x_{33} \end{array}\right), \end{multline*} and hence condition 1) of Remark~\ref{tam} holds with $1$-dimensional vectors $\eta_i$, $\zeta_i$ given by \begin{align*} \eta_1=1,\quad \eta_2=1, \quad \eta_3=0, \quad &\eta_4=1,\quad \eta_5=-1, \quad \eta_6=1, \quad \eta_7=0,\quad \eta_8=0,\\ \zeta_1=1,\quad \zeta_2=-1, \quad \zeta_3=1, \quad &\zeta_4=0,\quad \zeta_5=1, \quad \zeta_6=1, \quad \zeta_7=0,\quad \zeta_8=0. \end{align*} Conditions 2) and 3) are now verified via direct computation. Finally, let us check that assertion (iv) holds for the Cremmer-Gervais bracket. A direct computation shows that this bracket in the basis $(P_1,\dots,P_8)$ satisfies \eqref{cpt}, and that the corresponding coefficient matrix is given by \[ 3\Omega= \left(\begin{array}{rrrrrrrr} 0 & -2 & -2 & -1 & -1 & 0 & -3 & -3 \\ 2 & 0 & 0 & 0 & 0 & 1 & -2 & -1 \\ 2 & 0 & 0 & 0 & 0 & 1 & 1 & -1 \\ 1 & 0 & 0 & 0 & 0 & 2 & -1 & -2 \\ 1 & 0 & 0 & 0 & 0 & 2 & -1 & 1 \\ 0 & -1 & -1 & -2 & -2 & 0 & -3 & -3 \\ 3 & 2 & -1 & 1 & 1 & 3 & 0 & 0 \\ 3 & 1 & 1 & 2 & -1 & 3 & 0 & 0 \end{array}\right). \] A direct check shows that ${\widetilde{B}}_{\text{CG}}\Omega=(-I\; 0)$, hence the Cremmer-Gervais bracket is compatible with ${\mathcal C}_{\text{CG}}$ by Proposition~\ref{Bomega}. \begin{remark}\label{reallife} Although we started the presentation above by specifying ${\widetilde{B}}$ and ${\widetilde{\bf x}}$, to construct the cluster structure $({\mathcal C}_{\text{CG}},\varphi_{\text{CG}})$ we had to act the other way around. We started with the extension of the Cremmer-Gervais bracket to $GL_3$ and tried to find a regular basis in ${\mathbb C}(GL_3)$ in which this bracket is diagonal quadratic (that is, satisfies~\eqref{cpt}). Since $\det X$ is a Casimir function for the extended bracket, it was included in the basis from the very beginning as a stable variable. Once such a basis is built, the exchange matrix of the cluster structure ${\mathcal C}_{\text{CG}}^\circ$ on $GL_3$ is restored via Proposition~\ref{Bomega}. The cluster structure on $SL_3$ is obtained via restriction to the hypersurface $\det X=1$, which amounts to removal of the corresponding column of the exchange matrix. \end{remark} Let us proceed with the case of $SL_4$. Here we have, up to isomorphisms, the following possibilities: Case 1: $\Gamma_1=\Gamma_2=\varnothing$ (standard R-matrices); Case 2: $\Gamma_1=\{\alpha_2, \alpha_3\}$, $\Gamma_2=\{\alpha_1, \alpha_2\}$, $\gamma(\alpha_2)=\alpha_1$, $\gamma(\alpha_3)=\alpha_2$ (Cremmer-Gervais R-matrix); Case 3: $\Gamma_1=\{\alpha_1\}$, $\Gamma_2=\{\alpha_3\}$, $\gamma(\alpha_1)=\alpha_3$; Case 4: $\Gamma_1=\{\alpha_1\}$, $\Gamma_2=\{\alpha_2\}$, $\gamma(\alpha_1)=\alpha_2$. The first case is covered by Theorem~\ref{triv}. In the remaining cases we proceed in accordance with Remark~\ref{reallife}. {\it Case 2}. Here $k_T=1$, and hence the corresponding Belavin-Drinfeld class contains a unique R-matrix. 
It is called the {\em Cremmer-Gervais R-matrix\/}, and the solution to~\eqref{r01},~\eqref{r02} is given by $$ r_0 -\frac12 \mathfrak{t}_0= \frac14\left(e_{11}\wedge e_{44}-e_{11}\wedge e_{22}-e_{22}\wedge e_{33}-e_{33}\wedge e_{44}\right). $$ The basis in ${\mathbb C}(SL_4)$ that makes the Cremmer-Gervais bracket diagonal quadratic is given by \begin{align*} &P_1= -x_{21}, \qquad P_2=x_{31},\qquad P_3=x_{24},\\ &P_4=\hat x_{31},\qquad P_5=\hat x_{24},\qquad P_6=\hat x_{34},\\ &P_7=\left|\begin{array}{cc} x_{11}& x_{14}\\ x_{21} & x_{24}\end{array}\right|,\qquad P_8=\left|\begin{array}{cc} x_{21}& x_{24}\\ x_{31} & x_{34}\end{array}\right|,\\ &P_9=\left|\begin{array}{cc} x_{21}& x_{14}\\ x_{31} & x_{24}\end{array}\right|,\qquad P_{10}=\left|\begin{array}{cc} x_{21}& x_{22}\\ x_{31} & x_{32}\end{array}\right|,\\ &P_{11}=-\left|\begin{array}{cc} \hat x_{31}& \hat x_{24}\\ \hat x_{41} & \hat x_{34}\end{array}\right|,\qquad P_{12}=-\left|\begin{array}{ccc} x_{21}& x_{22} & x_{14}\\ x_{31} & x_{32} & x_{24}\\ x_{41} & x_{42} & x_{34} \end{array}\right|,\\ &P_{13}=\sum_{i=1}^3 \hat x_{i+1,1} \left|\begin{array}{cc} x_{1i}& x_{14}\\ x_{2i} & x_{24}\end{array}\right|,\qquad P_{14}=-\sum_{i=1}^3 \hat x_{i4}\left|\begin{array}{ccc} x_{21}& x_{2, i+1} & x_{14}\\ x_{31} & x_{3,i+1} & x_{24}\\ x_{41} & x_{4,i+1} & x_{34} \end{array}\right|,\\ &P_{15}=-\sum_{i=1}^3 \hat x_{i+1,1}\left|\begin{array}{ccc} x_{21}& x_{1i} & x_{14}\\ x_{31} & x_{2i} & x_{24}\\ x_{41} & x_{3i} & x_{34} \end{array}\right|, \end{align*} where $X=(x_{ij})_{i,j=1}^4$ is a matrix in $SL_4$ and $\widehat{X}=(\hat x_{ij})_{i,j=1}^4$ is its adjugate matrix. The coefficient matrix of the Cremmer-Gervais bracket in this basis is given by \begin{multline*} 4\Omega=\\ \left(\begin{array}{rrrrrrrrrrrrrrr} 0 & -3 & -3 & -1 & -1 & 0 & 0 & -2 & -3 & 0 & -1 & -2 & -2 & -4 & -4 \\ 3 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & -1 & 2 & 1 & -1 & 1 & -2 & -2 \\ 3 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 3 & 2 & 1 & 3 & 1 & 2 & 2 \\ 1 & 0 & 0 & 0 & 0 & 3 & 2 & 0 & 1 & 2 & 3 & 1 & 3 & 2 & 2 \\ 1 & 0 & 0 & 0 & 0 & 3 & 2 & 0 & 1 & 2 & -1 & 1 & -1 & -2 & -2 \\ 0 & -1 & -1 & -3 & -3 & 0 & 0 & -2 & -1 & 0 & -3 & -2 & -2 & -4 & -4 \\ 0 & -2 & -2 & -2 & -2 & 0 & 0 & -4 & -2 & 0 & -2 & 0 & -4 & -4 & -4 \\ 2 & 0 & 0 & 0 & 0 & 2 & 4 & 0 & 2 & 4 & 2 & 2 & 2 & 0 & 0 \\ 3 & 1 & -3 & -1 & -1 & 1 & 2 & -2 & 0 & 2 & 0 & 1 & -1 & -2 & -2 \\ 0 & -2 & -2 & -2 & -2 & 0 & 0 & -4 & -2 & 0 & -2 & -4 & 0 & -4 & -4 \\ 1 & -1 & -1 & -3 & 1 & 3 & 2 & -2 & 0 & 2 & 0 & -1 & 1 & -2 & -2 \\ 2 & 1 & -3 & -1 & -1 & 2 & 0 & -2 & -1 & 4 & 1 & 0 & 0 & 0 & 0 \\ 2 & -1 & -1 & -3 & 1 & 2 & 4 & -2 & 1 & 0 & -1 & 0 & 0 & 0 & 0 \\ 4 & 2 & -2 & -2 & 2 & 4 & 4 & 0 & 2 & 4 & 2 & 0 & 0 & 0 & 0 \\ 4 & 2 & -2 & -2 & 2 & 4 & 4 & 0 & 2 & 4 & 2 & 0 & 0 & 0 & 0 \end{array}\right). \end{multline*} A basis in ${\mathbb C}(GL_4)$ is obtained by adding $P_{16}=\det X$ to the above basis. Since $\det X$ is a Casimir function, the corresponding coefficient matrix $\Omega^\circ$ is obtained from $\Omega$ by adding a zero column on the right and a zero row at the bottom. By assertion (i) of Conjecture~\ref{ulti}, the cluster structure ${\mathcal C}_{\text{CG}}$ we are looking for should have 2 stable variables; their images under $\varphi_{\text{CG}}$ are polynomials $P_{14}$ and $P_{15}$; recall that $P_{16}$ is the image of the third stable variable that exists in ${\mathcal C}_{\text{CG}}^\circ$, but not in ${\mathcal C}_{\text{CG}}$. Therefore, the exchange matrix of ${\mathcal C}_{\text{CG}}^\circ$ is a $13\times 16$ matrix. 
It is given by \begin{multline*} {\widetilde{B}}_{\text{CG}}^\circ=\\ \left( \begin{array}{rrrrrrrrrrrrrrrr} 0 \!&\! 1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ -1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ -1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \\ 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \\ 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \\ 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! -1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! -1 \!&\! -1 \!&\! 0 \!&\! 1 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! -1 \!&\! 1 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \end{array}\right). \end{multline*} A direct check shows that ${\widetilde{B}}_{\text{CG}}^\circ\Omega^\circ=(I\; 0)$, hence the Cremmer-Gervais bracket is compatible with ${\mathcal C}_{\text{CG}}^\circ$ by Proposition~\ref{Bomega}. The exchange matrix ${\widetilde{B}}_{\text{CG}}$ for ${\mathcal C}_{\text{CG}}$ is obtained from ${\widetilde{B}}_{\text{CG}}^\circ$ by deletion of the rightmost column. The compatibility is verified in the same way as before. Assertion (ii) is proved exactly as in the case of $SL_3$, with the help of Proposition~\ref{regfun}; a straightforward computation shows that all assumptions of this Propositions are valid. To prove assertion (iii), we parametrize the left and the right action of $\mathcal H_{\text{CG}}$ by $\operatorname{diag}(t^3,t,t^{-1},t^{-3})$ and $\operatorname{diag}(z^3,z,z^{-1},z^{-3})$, respectively. Then condition 1) of Remark~\ref{tam} holds with $1$-dimensional vectors $\eta_i$, $\zeta_i$ given by \begin{align*} \eta_1=1,\quad \eta_2=-1, \quad \eta_3=1, \quad &\eta_4=-3,\quad \eta_5=3, \quad \eta_6=3, \quad \eta_7=4,\quad \eta_8=0,\\ \eta_9=2,\quad \eta_{10}=0, \quad \eta_{11}=0, \quad &\eta_{12}=-1,\quad \eta_{13}=1, \quad \eta_{14}=2, \quad \eta_{15}=-2,\\ \zeta_1=3,\quad \zeta_2=3, \quad \zeta_3=-3, \quad &\zeta_4=1,\quad \zeta_5=-1, \quad \zeta_6=1, \quad \zeta_7=0,\quad \zeta_8=0,\\ \zeta_9=0,\quad \zeta_{10}=4, \quad \zeta_{11}=2, \quad &\zeta_{12}=1,\quad \zeta_{13}=-1, \quad \zeta_{14}=-2, \quad \zeta_{15}=2. \end{align*} Conditions 2) and 3) are now verified via direct computation. {\it Case 3}. Here $k_T=2$, and hence the corresponding Belavin-Drinfeld class contains a 1-parameter family of R-matrices. 
It is convenient to take the solution to~\eqref{r01},~\eqref{r02} given by $$ r_0 -\frac12 \mathfrak{t}_0= \frac14 \left( e_{11}\wedge e_{22}-e_{22}\wedge e_{33}-e_{33}\wedge e_{44} + 2 e_{22}\wedge e_{44} - e_{11}\wedge e_{44} \right). $$ The basis in ${\mathbb C}(SL_4)$ that makes the corresponding bracket diagonal quadratic is given by \begin{align*} &P_1= x_{12}, \quad P_2=x_{13},\quad P_3=x_{41},\quad P_4=-x_{42},\quad P_5=-\hat x_{12},\quad P_6=-\hat x_{13},\\ &P_7=\hat x_{41},\quad P_8=\hat x_{42},\quad P_9=-\left|\begin{array}{cc} x_{32}& x_{33}\\ x_{42} & x_{43}\end{array}\right|,\quad P_{10}=\left|\begin{array}{cc} x_{13}& x_{14}\\ x_{43} & x_{44}\end{array}\right|,\\ &P_{11}=-\left|\begin{array}{cc} x_{12}& x_{13}\\ x_{42} & x_{43}\end{array}\right|,\quad P_{12}=\left|\begin{array}{cc} x_{13}& x_{14}\\ x_{23} & x_{24}\end{array}\right|,\quad P_{13}=\left|\begin{array}{cc} x_{31}& x_{32}\\ x_{41} & x_{42}\end{array}\right|,\\ &P_{14}=\left|\begin{array}{cc} x_{13}& x_{14}\\ x_{41} & x_{42}\end{array}\right|, \quad P_{15}=\left|\begin{array}{cc} \hat x_{13}& \hat x_{14}\\ \hat x_{41} & \hat x_{42}\end{array}\right|. \end{align*} The coefficient matrix of the bracket in this basis is given by \begin{multline*} 4\Omega=\\ \left(\begin{array}{rrrrrrrrrrrrrrr} 0 & -1 & 0 & -3 & -2 & -1 & 0 & 1 & -2 & -4 & 0 & -2 & -2 & -4 & 0 \\ 1 & 0 & -1 & -2 & -1 & 0 & -3 & -2& -4 & 0 & -2 & 2 & -2 & -2 & -2 \\ 0 & 1 & 0 & -3 & 0 & -3 & -2 & 1 & -2 & 0 & -2 & 0 & 0 & 2 & -2 \\ 3 & 2 & 3 & 0 & 1 & -2 & 1 & 4 & 2 & -2 & 2 & -2 & 2 & 2 & 2 \\ 2 & 1 & 0 & -1 & 0 & 1 & 0 & 3 & 0 & -2 & 0 & 2 & 2 & 0 & 4 \\ 1 & 0 & 3 & 2 & -1 & 0 & 1 & 2 & 0 & 0 & 2 & -2 & 2 & 2 & 2 \\ 0 & 3 & 2 & -1 & 0 & -1 & 0 & 3 & 0 & 2 & 2 & 0 & 0 & 2 & -2 \\ -1 & 2 & -1 & -4 & -3 & -2 & -3 & 0 & -2 & -2 & -2 & 2 & -2 & -2 & -2 \\ 2 & 4 & 2 & -2 & 0 & 0 & 0 & 2 & 0 & 0 & 2 & 2 & 2 & 2 & 2 \\ 4 & 0 & 0 & 2 & 2 & 0 & -2 & 2 & 0 & 0 & 2 & 2 & 2 & 2 & 2 \\ 0 & 2 & 2 & -2 & 0 & -2 & -2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 \\ 2 & -2 & 0 & 2 & -2 & 2 & 0 & -2& -2 & -2 & 0 & 0 & 0 & 0 & 0 \\ 2 & 2 & 0 & -2 & -2 & -2 & 0 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 \\ 4 & 2 & -2 & -2 & 0 & -2 & -2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 2 & -2 & -4 & -2 & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 \end{array}\right). \end{multline*} A basis in ${\mathbb C}(GL_4)$ is obtained by adding $P_{16}=\det X$ to the above basis. Since $\det X$ is a Casimir function, the corresponding coefficient matrix $\Omega^\circ$ is obtained from $\Omega$ by adding a zero column on the right and a zero row at the bottom. By assertion (i) of Conjecture~\ref{ulti}, the cluster structure ${\mathcal C}_{1\mapsto3}$ we are looking for should have 4 stable variables; their images under $\varphi_{1\mapsto3}$ are polynomials $P_{12}$, $P_{13}$, $P_{14}$ and $P_{15}$; recall that $P_{16}$ is the image of the fifth stable variable that exists in ${\mathcal C}_{1\mapsto3}^\circ$, but not in ${\mathcal C}_{1\mapsto3}$. Therefore, the exchange matrix of ${\mathcal C}_{1\mapsto3}^\circ$ is a $11\times 16$ matrix. It is given by \begin{multline*} {\widetilde{B}}_{1\mapsto3}^\circ=\\ \left( \begin{array}{rrrrrrrrrrrrrrrr} 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ -1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\!-1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \\ 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 
0 \\ -1 \!&\! 0 \!&\!-1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\!-1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\!-1 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\!-1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \\ 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 1 \!&\! -1 \!&\! 0 \!&\! -1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \end{array}\right). \end{multline*} A direct check shows that ${\widetilde{B}}_{1\mapsto3}^\circ\Omega^\circ=(-I\; 0)$, hence the bracket defined above is compatible with ${\mathcal C}_{1\mapsto3}^\circ$ by Proposition~\ref{Bomega}. The exchange matrix ${\widetilde{B}}_{1\mapsto3}$ for ${\mathcal C}_{1\mapsto3}$ is obtained from ${\widetilde{B}}_{1\mapsto3}^\circ$ by deletion of the rightmost column. The compatibility is verified in the same way as before. Assertion (ii) is proved exactly as in the previous case. To prove assertion (iii), we parametrize the left and the right action of $\mathcal H_{1\mapsto3}$ by $\operatorname{diag}(t,w,w^{-1},t^{-1})$ and $\operatorname{diag}(z,u,u^{-1},z^{-1})$, respectively. Then condition 1) of Remark~\ref{tam} holds with $2$-dimensional vectors $\eta_i$, $\zeta_i$ given by \begin{align*} &\eta_1=(1,0),\quad \eta_2=(1,0),\quad \eta_3=(-1,0), \quad \eta_4=(-1,0),\quad \eta_5=(0,-1), \\ &\eta_6=(0,1), \quad \eta_7=(-1,0),\quad \eta_8=(0,-1),\quad \eta_9=(-1,-1),\quad \eta_{10}=(0,0),\\ &\eta_{11}=(0,0), \quad \eta_{12}=(1,1),\quad \eta_{13}=(-1,-1), \quad \eta_{14}=(0,0), \quad \eta_{15}=(0,0),\\ &\zeta_1=(0,1),\quad \zeta_2=(0,-1),\quad \zeta_3=(1,0), \quad \zeta_4=(0,1),\quad \zeta_5=(-1,0), \\ &\zeta_6=(-1,0), \quad \zeta_7=(1,0),\quad \zeta_8=(1,0),\quad \zeta_9=(0,0),\quad \zeta_{10}=(-1,-1),\\ &\zeta_{11}=(0,0), \quad \zeta_{12}=(-1,-1),\quad \zeta_{13}=(1,1), \quad \zeta_{14}=(0,0), \quad \zeta_{15}=(0,0). \end{align*} Conditions 2) and 3) are now verified via direct computation. {\it Case 4}. Here $k_T=2$, and hence the corresponding Belavin-Drinfeld class contains a 1-parameter family of R-matrices. It is convenient to take the solution to~\eqref{r01},~\eqref{r02} given by $$ r_0 -\frac12 \mathfrak{t}_0=\frac14 \left(e_{11}\wedge e_{22}+e_{22}\wedge e_{33}+e_{33}\wedge e_{44}-e_{11}\wedge e_{44}\right). 
$$ The basis in ${\mathbb C}(SL_4)$ that makes the corresponding bracket diagonal quadratic is given by \begin{align*} &P_1= -x_{12}, \qquad P_2=x_{42},\qquad P_3=-x_{41},\\ & P_4=-\hat x_{41},\qquad P_5=\hat x_{42},\qquad P_6=-\hat x_{12},\\ &P_7=x_{12}x_{42}-x_{13}x_{41},\qquad P_8=\left|\begin{array}{cc} x_{12}& x_{13}\\ x_{42} & x_{43}\end{array}\right|,\\ &P_9=\left|\begin{array}{cc} x_{11}& x_{12}\\ x_{41} & x_{42}\end{array}\right|,\qquad P_{10}=\left|\begin{array}{cc} x_{12}& x_{13}\\ x_{32} & x_{33}\end{array}\right|,\\ &P_{11}=x_{41}\left|\begin{array}{cc} x_{13}& x_{14}\\ x_{33} & x_{34}\end{array}\right| -x_{42}\left|\begin{array}{cc} x_{12}& x_{14}\\ x_{32} & x_{34}\end{array}\right|,\\ &P_{12}=x_{14},\qquad P_{13}=\hat x_{14},\qquad P_{14}=\left|\begin{array}{cc} x_{21}& x_{23}\\ x_{31} & x_{33}\end{array}\right|, \\ &P_{15}=\hat x_{41}\left(x_{41}\left|\begin{array}{cc} x_{13}& x_{14}\\ x_{23} & x_{24}\end{array}\right| -x_{42}\left|\begin{array}{cc} x_{12}& x_{14}\\ x_{22} & x_{24}\end{array}\right|\right)+\\ &\qquad\qquad\qquad\qquad\hat x_{42}\left(x_{41}\left|\begin{array}{cc} x_{13}& x_{14}\\ x_{33} & x_{34}\end{array}\right| -x_{42}\left|\begin{array}{cc} x_{12}& x_{14}\\ x_{32} & x_{34}\end{array}\right|\right). \end{align*} The coefficient matrix of the bracket in this basis is given by \begin{multline*} 4\Omega=\\ \left(\begin{array}{rrrrrrrrrrrrrrr} 0 & -3 & 0 & -2 & -1 & -2 & -3 & -2 & 0 & -1 & -3 & -2 & 0 & -2 & -4 \\ 3 & 0 & 3 & -1 & 0 & -1 & 3 & 0 & 2 & 1 & 2 & 1 & 1 & 0 & 2 \\ 0 & -3 & 0 & -2 & -1 & -2 & 1 & -2 & 0 & -1 & 1 & 2 & 0 & -2 & 0 \\ 2 & 1 & 2 & 0 & 3 & 0 & 3 & 2 & 4 & 1 & 1 & 0 &-2 & 2 & 0 \\ 1 & 0 & 1 & -3 & 0 & -3 & 1 & 0 & 2 & -1 & -2 & -1 &-1 & 0 & -2 \\ 2 & 1 & 2 & 0 & 3 & 0 & 3 & 2 & 4 & 1 & 1 & 0 & 2 & 2 & 4 \\ 3 & -3 & -1 & -3 & -1 & -3 & 0 & -2 & 2 & 0 & -1 & -1 & 1 & -2 & -2 \\ 2 & 0 & 2 & -2 & 0 & -2 & 2 & 0 & 4 & 2 & 0 & -2 & 2 & 0 & 0 \\ 0 & -2 & 0 & -4 & -2 & -4 & -2 & -4 & 0 & -2 & -2 & 0 & 0 & -4 & -4 \\ 1 & -1 & 1 & -1 & 1 & -1 & 0 & -2 & 2 & 0 & -3 & -3 &-1 & 2 & -2 \\ 3 &-2 & -1 & -1 & 2 & -1 & 1 & 0 & 2 & 3 & 0 & 1 & 1 & 0 & 2 \\ 2 & -1 & -2 & 0 & 1 & 0 & 1 & 2 & 0 & 3 & -1 & 0 & 2 & -2 & 0 \\ 0 & -1 & 0 & 2 & 1 & -2 & -1 & -2 & 0 & 1 & -1 & -2 & 0 & 2 & 0 \\ 2 & 0 & 2 & -2 & 0 & -2 & 2 & 0 & 4 & -2 & 0 & 2 & 0 & 0 & 0 \\ 4 & -2 & 0 & 0 & 2 & -4 & 2 & 0 & 4 & 2 & -2 & 0 & 0 & 0 & 0 \end{array}\right). \end{multline*} A basis in ${\mathbb C}(GL_4)$ is obtained by adding $P_{16}=\det X$ to the above basis. Since $\det X$ is a Casimir function, the corresponding coefficient matrix $\Omega^\circ$ is obtained from $\Omega$ by adding a zero column on the right and a zero row at the bottom. By assertion (i) of Conjecture~\ref{ulti}, the cluster structure ${\mathcal C}_{1\mapsto2}$ we are looking for should have 4 stable variables; their images under $\varphi_{1\mapsto2}$ are polynomials $P_{12}$, $P_{13}$, $P_{14}$ and $P_{15}$; recall that $P_{16}$ is the image of the fifth stable variable that exists in ${\mathcal C}_{1\mapsto2}^\circ$, but not in ${\mathcal C}_{1\mapsto2}$. Therefore, the exchange matrix of ${\mathcal C}_{1\mapsto2}^\circ$ is a $11\times 16$ matrix. It is given by \begin{multline*} {\widetilde{B}}_{1\mapsto2}^\circ=\\ \left( \begin{array}{rrrrrrrrrrrrrrrr} 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ -1 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 1 \!&\! 
1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! -1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! -1 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 1 \\ -1 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! -1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! -1 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \\ 0 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! 0 \!&\! -1 \!&\! 0 \!&\! 0 \!&\! 1 \!&\! 0 \!&\! -1 \!&\! 0 \end{array}\right). \end{multline*} A direct check shows that ${\widetilde{B}}_{1\mapsto2}^\circ\Omega^\circ=(I\; 0)$, hence the bracket defined above is compatible with ${\mathcal C}_{1\mapsto2}^\circ$ by Proposition~\ref{Bomega}. The exchange matrix ${\widetilde{B}}_{1\mapsto2}$ for ${\mathcal C}_{1\mapsto2}$ is obtained from ${\widetilde{B}}_{1\mapsto2}^\circ$ by deletion of the rightmost column. The compatibility is verified in the same way as before. Assertion (ii) is proved exactly as in the previous case. To prove assertion (iii), we parametrize the left and the right action of $\mathcal H_{1\mapsto2}$ by $\operatorname{diag}(t,w,t^{-1}w^2,w^{-3})$ and $\operatorname{diag}(z,u,z^{-1}u^2,u^{-3})$, respectively. Then condition 1) of Remark~\ref{tam} holds with $2$-dimensional vectors $\eta_i$, $\zeta_i$ given by \begin{align*} &\eta_1=(1,0),\quad \eta_2=(0,-3),\quad \eta_3=(0,-3), \quad \eta_4=(-1,0),\quad \eta_5=(0,-1), \\ &\eta_6=(0,-1), \quad \eta_7=(1,-3),\quad \eta_8=(1,-3),\quad \eta_9=(1,-3),\quad \eta_{10}=(0,2),\\ &\eta_{11}=(0,-1), \quad \eta_{12}=(1,0),\quad \eta_{13}=(0,3), \quad \eta_{14}=(-1,-1), \quad \eta_{15}=(0,-2),\\ &\zeta_1=(0,1),\quad \zeta_2=(0,1),\quad \zeta_3=(1,0), \quad \zeta_4=(0,3),\quad \zeta_5=(0,3), \\ &\zeta_6=(-1,0), \quad \zeta_7=(0,2),\quad \zeta_8=(-1,3),\quad \zeta_9=(1,1),\quad \zeta_{10}=(-1,3),\\ &\zeta_{11}=(0,-1), \quad \zeta_{12}=(0,-3),\quad \zeta_{13}=(-1,0), \quad \zeta_{14}=(1,1), \quad \zeta_{15}=(0,2). \end{align*} Conditions 2) and 3) are now verified via direct computation. \end{proof} \section{The case of triangular Lie bialgebras}\label{SecTri} We conclude with an example that shows that Conjecture \ref{ulti} is not valid in the case of skew-symmetric R-matrices, that is, R-matrices $r$ that satisfy (\ref{rBD}) together with the condition $$ r + r^{21}=0. $$ Consider the simplest skew-symmetric R-matrix in $sl_2$: $$ r=\left (\begin{array}{cc} 1& 0\\0 &-1\end{array}\right) \wedge \left (\begin{array}{cc} 0& 1\\0 & 0\end{array}\right). $$ Let $X=(x_{ij})_{i,j=1}^2$ denote an element of $SL_2$. Choose functions $y_1=x_{11}$, $y_2=x_{21}$, $y_3=x_{11}-x_{22}$ as coordinates on $SL_2$. Then a direct calculation using (\ref{sklya}) shows that the Poisson-Lie bracket corresponding to $r$ has the form $$ \{y_1, y_2\} = y_2^2, \qquad \{y_1, y_3\} = y_2 y_3, \qquad \{y_2, y_3\} = 0. 
$$ Select a new coordinate system on the open dense set $\{x_{21} \ne 0\}$: $z_1=y_1$, $z_2=-1/y_2$, $z_3= y_3/y_2$; then the Poisson algebra above becomes: $$ \{z_1, z_2\} = 1, \qquad \{z_1, z_3\} = \{z_2, z_3\} = 0. $$ It is easy to see that both collections $z_1, z_2, z_3$ and $y_1, y_2, y_3$ generate the field of rational functions on $SL_2$. However, we claim that there is no triple of independent rational functions $p_i(z_1,z_2,z_3)$ such that $\{p_i,p_j\} = c_{ij} p_ip_j$ for some constants $c_{ij}$, $i,j=1,2,3$. Indeed, the independence implies that at least one of the constants $c_{ij}$, say, $c_{12}$, is nonzero. View $p_1$ and $p_2$ as ratios of two polynomials in $z_1$ with the difference of degrees of the numerator and denominator equal to $\delta_1$ and $\delta_2$, respectively. Then the difference of degrees of the numerator and denominator of $\{p_1,p_2\}$ viewed as a rational function of $z_1$ is at most $\delta_1 + \delta_2 -1$, and thus $\{p_1,p_2\}$ cannot be a nonzero multiple of $p_1p_2$. This means, in particular, that the Poisson structure associated with the R-matrix above cannot be compatible with any cluster structure in the field of rational functions on $SL_2$. \section*{Acknowledgments} M.~G.~was supported in part by NSF Grant DMS \#0801204. M.~S.~was supported in part by NSF Grants DMS \#0800671 and PHY \#0555346. A.~V.~was supported in part by ISF Grant \#1032/08. This paper was partially written during the joint stay of all authors at MFO Oberwolfach in August 2010 within the framework of the Research in Pairs programme and the visit of the third author to the Institut des Hautes \'Etudes Scientifiques in September--October 2010. We are grateful to these institutions for warm hospitality and excellent working conditions. \end{document}
Age and life expectancy clocks based on machine learning analysis of mouse frailty
Michael B. Schultz1, Alice E. Kane1,2, Sarah J. Mitchell3, Michael R. MacArthur3, Elisa Warner4, David S. Vogel5, James R. Mitchell3, Susan E. Howlett6, Michael S. Bonkowski1,7 & David A. Sinclair1,8
A Publisher Correction to this article was published on 08 October 2020
The identification of genes and interventions that slow or reverse aging is hampered by the lack of non-invasive metrics that can predict the life expectancy of pre-clinical models. Frailty Indices (FIs) in mice are composite measures of health that are cost-effective and non-invasive, but whether they can accurately predict health and lifespan is not known. Here, mouse FIs are scored longitudinally until death and machine learning is employed to develop two clocks. A random forest regression is trained on FI components for chronological age to generate the FRIGHT (Frailty Inferred Geriatric Health Timeline) clock, a strong predictor of chronological age. A second model is trained on remaining lifespan to generate the AFRAID (Analysis of Frailty and Death) clock, which accurately predicts life expectancy and the efficacy of a lifespan-extending intervention up to a year in advance. Adoption of these clocks should accelerate the identification of longevity genes and aging interventions.
Aging is a biological process that causes physical and physiological deficits over time, culminating in organ failure and death. For species that experience aging, which includes nearly all animals, its presentation is not uniform; individuals age at different rates and in different ways. Biological age is an increasingly utilized concept that aims to reflect aging in an individual more accurately than conventional chronological age does. Biological measures that accurately predict health and longevity would greatly expedite studies aimed at identifying genetic and pharmacological interventions against disease and aging.
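As a rough illustration of the clock-building strategy summarized in the abstract above—one regressor fit to chronological age (the FRIGHT clock) and a second fit to remaining lifespan (the AFRAID clock), both taking the individual FI items as input—a minimal sketch in Python with scikit-learn might look as follows. This is not the authors' pipeline; the file name, column names, and the use of a random forest for both clocks are assumptions for illustration only.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Assumed layout: one row per FI assessment, one column per FI item (scored 0-1),
# plus the age at assessment and the observed remaining lifespan of that mouse.
df = pd.read_csv("fi_assessments.csv")                 # hypothetical file
items = df.drop(columns=["age_months", "months_until_death"])

# FRIGHT-style clock: FI items -> chronological age.
fright = RandomForestRegressor(n_estimators=500, random_state=0)
fright.fit(items, df["age_months"])

# AFRAID-style clock: FI items -> remaining lifespan (life expectancy).
afraid = RandomForestRegressor(n_estimators=500, random_state=0)
afraid.fit(items, df["months_until_death"])

# Both clocks then score a new assessment (here an all-zero, i.e. non-frail, mouse).
new_mouse = pd.DataFrame([{c: 0.0 for c in items.columns}])
print(fright.predict(new_mouse), afraid.predict(new_mouse))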
Any useful biometric or biomarker for biological age should track with chronological age and should serve as a better predictor of remaining longevity and other age-associated outcomes than does chronological age alone, even at an age when most of a population is still alive. In addition, its measurement should be non-invasive to allow for repeated measurements without altering the health or lifespan of the animal measured1. In humans, biometrics and biomarkers that meet at least some of these requirements include physiological measurements such as grip strength or gait2,3, measures of the immune system4,5, telomere length6, advanced glycosylation end-products7, levels of cellular senescence8, and DNA methylation clocks9. DNA methylation clocks have been adapted for mice but unfortunately these clocks are currently expensive, time consuming, and require the extraction of blood or tissue. Frailty index (FI) assessments in humans are strong predictors of mortality and morbidity, outperforming other measures of biological age including DNA methylation clocks10,11. FIs quantify the accumulation of up to 70 health-related deficits, including laboratory test results, symptoms, diseases, and standard measures such as activities of daily living12,13. The number of deficits an individual shows is divided by the number of items measured to give a number between 0 and 1, in which a higher number indicates a greater degree of frailty. The FI has been recently reverse-translated into an assessment tool for mice which includes 31 non-invasive items across a range of systems14. The mouse FI is strongly associated with chronological age14,15, correlated with mortality and other age-related outcomes16,17, and is sensitive to lifespan-altering interventions18. However, the power of the mouse FI to model biological age or predict life expectancy for an individual animal has not yet been explored. In this study, we track frailty longitudinally in a cohort of aging male mice from 21 months of age until their natural deaths and employ machine learning algorithms to build two clocks: FRIGHT age, designed to model chronological age, and the AFRAID clock, which is modeled to predict life expectancy. FRIGHT age reflects apparent chronological age better than FI alone, while the AFRAID clock predicts life expectancy at multiple ages. These clocks are then tested for their predictive power on cohorts of mice treated with interventions known to extend healthspan or lifespan, enalapril and methionine restriction. They accurately predict increased healthspan and lifespan, demonstrating that an assessment of non-invasive biometrics in interventional studies can greatly accelerate the pace of discovery. Frailty correlates with and is predictive of age We measured FI scores (Supplementary Fig. 1) approximately every 6 weeks in a population of naturally aging male C57BL/6Nia mice (n = 60) until the end of their lives. These mice had a normal lifespan, with a median survival of 31 months and a maximum (90th percentile) of 36 months (Fig. 1a and Supplementary Fig. 2). As expected, FI scores increased with age from 21 to 36 months at the population level (Fig. 1b). At the individual level, frailty trajectories displayed significant variance, representative of the variability in how individuals experience aging even within a population of inbred animals (Fig. 1c). As FI score was well correlated with chronological age, we sought to determine the degree to which FI score could model chronological and biological age. 
We performed a linear regression on FI score for age with a training dataset and evaluated its accuracy on a testing dataset (Fig. 1d–e). FI score was able to predict chronological age with a median error of 1.8 months, a mean error of 1.9 months, and an r2 value of 0.642 (p = 3.4e−38). We hypothesized that the error may be representative of biological age, with healthier individuals having a predicted age younger than their true age. We calculated this difference between predicted age and true age, termed delta age, and used remaining time until death as our primary outcome to compare with. For some individual age groups (24, 34.5, and 36 months), delta age did indeed have a negative correlation with survival, with biologically younger mice (those with a negative delta age) living longer at each individual age than biologically older mice (those with a positive delta age) (Fig. 1f and Table 1). For other groups this correlation is a trend, and more power may detect an association (Table 1). This suggests that the FI score is able to detect variation in predicted chronological age for mice of the same actual age, and this may represent biological age. Fig. 1: Frailty correlates with and is predictive of age in mice. a Kaplan–Meier survival curve for male C57BL/6 mice (n = 60) assessed longitudinally for Frailty Index (FI) (indicated by arrows). b Box and whisker plots displaying median FI scores for mice from 21 to 36 months of age. Colors indicate different ages (n = 24, 27, 20, 29, 43, 36, 32, 25, 18, 11, 6). Box plots represent median, lower and upper quartiles, and 95 percentile. c FI score trajectories for each individual mouse from 21 months until death. d Univariate regression of FI score for chronological age on a training dataset, and e a testing dataset. For training and testing datasets, data were randomly divided 50:50, separated by mouse rather than by assessment, n = 106 datapoints for training and n = 165 for testing. Correlation determined by Pearson correlation coefficients. f Residuals of the regression (delta age), plotted against survival for individual ages (as demonstrated by different colors). Regression lines are only graphed for ages where there is an r2 value >0.1. Source data are provided as a Source Data file. Table 1 Correlation between survival and delta age at individual ages. Individual frailty items vary in their correlation with age While a simple linear regression on overall frailty score was somewhat predictive of age, we hypothesized that by differentially weighting individual metrics, we could build a more predictive model, as has been done with various CpG sites to build methylation clocks9. To this end, we calculated the correlation between each individual FI item and chronological age (Table 2). Some parameters, such as tail stiffening, breathing rate/depth, gait disorders, hearing loss, kyphosis, and tremor, are strongly correlated (r2 > 0.35, p < 1e−30) with age (Fig. 2), while others show very weak or no correlation with age (Table 2 and Supplementary Fig. 3). The fact that some parameters were very well correlated and others poorly correlated suggested that by weighting items we could build an improved model for biological age prediction. Table 2 Correlation coefficients (r2) and p values for frailty items with chronological age. Fig. 2: Individual FI items vary in their correlation with age. Mean scores across all mice (black line) for the top nine individual items of the Frailty Index (FI) that were correlated with chronological age. 
Colors indicate proportion of mice at each age with each score (0, blue; 0.5, orange, 1, red). Source data are provided as a Source Data file. Multivariate regressions of frailty items to predict age We compared FI score as a single variable and four types of multivariate linear regression models to predict chronological age: simple least-squares regression, elastic net regression, random forest regression, and the Klemera–Doubal biological age estimation method (Eq. (1))19. We employed the bootstrap method on the training dataset to compare models. Only frailty items that had a significant, even if weak, correlation with age (p < 0.05) were included in the analysis (21 items, see Table 2). The multivariate models, particularly elastic net, the random forest, and the Klemera–Doubal methods (KDMs), were superior to FI as a single variable, with lower median error (p < 0.0001, F = 49.46, d.f. = 499) and mean error (p < 0.0001, F = 68.37, d.f. = 499, Supplementary Fig. 4a), higher r2 values (p < 0.0001, F = 57.1, d.f. = 499), and smaller p values (p < 0.0001, F = 26.29, d.f. = 499) when compared with one-way ANOVA. For further analysis, we selected the random forest regression model as it had the lowest median error (Fig. 3a–c). Random forest models can also represent complex interactions among variables, which linear regressions cannot do, and may perform better in datasets where the number of features approaches or exceeds the number of observations20. We term the outcome of this model FRIGHT age for Frailty Inferred Geriatric Health Timeline. Fig. 3: Multivariate regressions of individual FI items to predict age (FRIGHT age). a–c Median error, r2 values and p values for univariate regression of Frailty Index (FI) score, and multivariate regressions of the individual FI items using either simple least squares (SLS), elastic net (ELN), the Klemera–Doubal method (KDM), or random forest regression (RFR) for chronological age in the mouse training set. All models were tested with bootstrapping with replacement repeated 100 times, and each bootstrapping incidence is plotted as a separate point. ****p < 0.0001 and ***p < 0.001 compared to FI model with one-way ANOVA. Error bars represent standard error of the mean. d, e Random forest regression of the individual FI items for chronological age on training and testing datasets (data was randomly divided 50:50, separated by mouse rather than by assessment, n = 106 datapoints for training and n = 165 for testing). This model is termed FRIGHT (Frailty Inferred Geriatric Health Timeline) age. Correlation determined by Pearson correlation coefficients. f Importances of top eight items included in the FRIGHT age model. g Residuals of the regression (delta age) plotted against survival for individual ages (as demonstrated by different colors). Regression lines are only graphed for ages where there is an r2 value >0.1. Source data are provided as a Source Data file. When assessed on the testing dataset, FRIGHT age had a strong correlation with chronological age, with a median error of 1.3 months, a mean error of 1.6 months, and an r2 value of 0.748 (p = 1.1e−50) (Fig. 3d, e). The items that were the largest contributors to FRIGHT age included breathing rate, tail stiffening, kyphosis, and total weight change (Fig. 3f). While FRIGHT age was superior to the FI score at predicting chronological age (Fig. 3a–c), the error from the predictions (delta age) were not well correlated with mortality (Fig. 3g). 
For the majority of individual age groups the r2 values of the correlation between FRIGHT age and survival were <0.1, indicating poor correlation (Table 1). Interestingly, the correlations were stronger for mice aged 34 months or greater, indicating that perhaps FRIGHT age is predictive of mortality only in the oldest mice (Table 1). This may be because the individual parameters that correlate well with chronological age are not necessarily the same as those that correlate well with mortality at all ages. Thus FRIGHT age has value as a predictor of apparent chronological age (e.g. this mouse looks 30 months old) but it is not yet clear whether it can serve as a predictor of other age-related outcomes. Multivariate regressions of frailty items to predict lifespan As FRIGHT age was not predictive of mortality at most ages, we sought to build a model based on individual FI items to better predict life expectancy. We began by calculating the correlation between each individual parameter and survival (number of days from date of FI assessment to date of death). Chronological age was the best predictor of mortality (r2 = 0.35, p = 1.9e−27), followed by FI score (r2 = 0.31, p = 2.7e−23), tremor, body condition score, and gait disorders (Table 3). However, many of these individual parameters appeared to be better predictors than they were, as a result of their covariance with chronological age. Their correlation with survival was largely only for mice of different ages, and not of the same age. Table 3 Correlation coefficients (r2) and p values for frailty items with life expectancy. To build a model to predict mortality, we trained a regression using FI as a single variable, and multivariate regressions using the FI items and chronological age with the simple least squares, elastic net, and random forest methods. All frailty items plus chronological age were included as variables in this analysis (32 items, see Table 3). As before, we compared these models using bootstrapping on the training set, and one-way ANOVA with Dunnett's post hoc test of r2 value, p value, median, and mean error (Fig. 4a–c and Supplementary Fig. 4). For prediction of survival, the elastic net and random forest regression models were the superior models, with higher r2 values (p < 0.0001, F = 36.62, d.f. = 399), lower p values (p < 0.0001, F = 32.65, d.f. = 399), and median errors (p < 0.0001, F = 73.55, d.f. = 399) than FI score alone (Fig. 4a–e and Supplementary Fig. 4). Similar results were obtained when chronological age was replaced with FRIGHT age, demonstrating that life expectency can be accurately predicted with frailty measures alone (Supplementary Fig. 4c–f). We selected the random forest regression model (with chronological age) for further analysis, and we termed the outcome of this model the AFRAID clock for Analysis of Frailty and Death. The most important variables in the model were total weight loss, chronological age, and tremor, followed by distended abdomen, recent weight loss, and menace reflex (Fig. 4f). In the testing dataset, the AFRAID clock was well correlated with survival (r2 = 0.505, median error = 1.7 months, mean error = 2.3 months, p = 1.1e−26) (Fig. 4e). The AFRAID clock was also correlated with survival at individual ages (Fig. 4g) with r2 > 0.3 and p value <0.05 at 24, 30, and 34.5 months of age (Table 1). 
Plotting the survival curves of mice with the lowest and highest AFRAID clock scores at given ages, as determined by the top and bottom quartiles, demonstrated a clear association with mortality risk for all age groups (Fig. 4h–k). These results suggest that the AFRAID clock may be useful for comparing the lifespan effects of interventional studies in mice many months before their death. Fig. 4: Multivariate regressions of FI items to predict life expectancy (AFRAID clock). a–c Median error, r2 values, and p values for univariate regression of Frailty Index (FI) score, and multivariate regressions of the individual FI items using either simple least squares (SLS), elastic net (ELN), or random forest regression (RFR) for life expectancy in the mouse training set. All models were tested with bootstrapping with replacement repeated 100 times, and each bootstrapping incidence is plotted as a separate point. ****p < 0.0001 and ***p < 0.001 compared to FI model with one-way ANOVA. Error bars represent standard error of the mean. d, e Random forest regression of the individual FI items for life expectancy on training and testing datasets (data was randomly divided 50:50, separated by mouse rather than by assessment, n = 106 datapoints for training and n = 165 for testing), plotted against actual survival. This model is termed the AFRAID (Analysis of Frailty and Death) clock. Correlation determined by Pearson correlation coefficients. f Importances of top 15 items included in the AFRAID clock. g AFRAID clock scores plotted against actual survival for individual mouse age groups (as demonstrated by different colors) in the testing dataset. Regression lines are only graphed for ages where there is an r2 value >0.1. h–k Kaplan–Meier curves of the bottom (red lines) and top (green lines) quartiles of AFRAID clock scores for mice over 1–2 assessments at 24–26, 27–29, 30–32, and 33–35 months of age. *p < 0.05 compared with two-sided log-rank test. Exact p values, respectively: 0.032, 0.015, 0.026, and 0.034. Source data are provided as a Source Data file. Effect of interventions on FRIGHT age and AFRAID clock One ultimate utility for biological age models would be to serve as early biomarkers for the effects of interventional treatments, which are expected to extend or reduce healthspan and lifespan. A recently published study measured FI in 23-month-old male C57BL/6 mice treated with the angiotensin-converting enzyme (ACE) inhibitor enalapril (n = 21) from 16 months of age, or age-matched controls (n = 13)21. As previously published, enalapril reduced the average FI score compared to control-treated mice (Fig. 5a). When FRIGHT age was calculated for these mice, the enalapril-treated mice appeared to be a month younger than the control mice (control 27.8 ± 1.1 months; enalapril 26.8 ± 1.4 months, p = 0.046, t = 2.1, d.f. = 32) (Fig. 5b). When the data were converted to a prediction of survival with the AFRAID clock, the enalapril-treated mice were not predicted to live longer (control 5.9 ± 0.7 months; enalapril 6.2 ± 0.9 months, p = 0.29, t = 1.09, d.f. = 32) (Fig. 5c). This is interesting in light of the fact that enalapril has been shown to improve health, but not maximum lifespan, in mice21,22. Fig. 5: Response of FRIGHT age and AFRAID clock to interventions. 
a–c Frailty Index (FI) score, FRIGHT (Frailty Inferred Geriatric Health Timeline) age and AFRAID (Analysis of Frailty and Death) clock for male 23-month-old C57BL/6 mice treated with enalapril-containing food (280 mg/kg) or control diet from 16 months of age. Data reanalyzed from previously published work21. Control n = 13, Enalapril n = 21. Exact p values, respectively: 0.001, 0.046, 0.29. d–f FI score, FRIGHT age, and AFRAID clock for male 27-month-old C57BL/6 mice treated with either a control diet (0.45% methionine) or methionine-restricted diet (0.1% methionine, MetR) from 21 months of age. *p value <0.05, **p < 0.01, and ***p < 0.001 compared with independent two-sided t-tests. Control n = 11, MetR n = 13. Exact p values, respectively, 0.001, 0.039, 0.006. Error bars represent standard error of the mean. Source data are provided as a Source Data file. Methionine restriction is a robust intervention that extends the healthspan and lifespan of C57Bl/6 mice23,24,25. We placed mice on a methionine restriction (0.1% methionine, n = 13) or control (n = 11) diet, from 21 months of age. We assessed frailty at 27 months of age and calculated FI, FRIGHT age and AFRAID clock. The methionine-restricted mice had significantly lower FI scores (control 0.37 ± 0.30; MR 0.30 ± 0.04, p = 0.0009, t = 3.8, d.f. = 22) (Fig. 5d), as well as a FRIGHT age 0.7 months younger than control-fed mice (control 29.8 ± 0.9 months; MR 29.1 ± 0.6 months, p = 0.039, t = 2.19, d.f. = 22) (Fig. 5e). Using the AFRAID clock, the methionine-restricted mice were predicted to live 1.3 months longer than controls (control 3.0 ± 1.0 months; enalapril 4.3 ± 1.0 months, p = 0.006, t = 3.02, d.f. = 22) (Fig. 5f). These analyses demonstrate that the FRIGHT age and AFRAID clock models are responsive to healthspan and lifespan-extending interventions. This is the first study to measure the clinical FI longitudinally in a population of naturally aging mice that were tracked until their natural deaths in order to predict healthspan and lifespan. We show that the FI is not only correlated with but is also predictive of both age and survival in mice, and we have used components of the FI to generate two clocks: FRIGHT age, which models apparent chronological age better than the FI itself, and the AFRAID clock, which predicts life expectancy with greater accuracy than the FI. In essence, FRIGHT age is an estimation of how old a mouse appears to be, and the AFRAID clock is a prediction of how long a mouse has until it dies (a death clock). Finally, FRIGHT age and the AFRAID clock were shown to be sensitive to two healthspan or lifespan-increasing interventions: enalapril treatment and dietary methionine restriction. The major advantage of the FI, and our models of the FI items, as aging biometrics is their ease of use. FI is quick and essentially free to assess, requires no specialized equipment or training, and has no negative impact on the health of the animals. We encourage future longevity studies to incorporate periodic frailty assessments as a routine measure into their protocols. This will help further determine the utility of frailty itself, as well as our FRIGHT age and AFRAID clock models, for predicting outcomes of interest, and may eventually be used as a screening tool to decide whether to continue expensive interventional longevity studies after a short duration. 
Additionally, use of these non-invasive frailty measures in longevity studies will enable researchers to detect not only possible changes in lifespan, but also healthspan, arguably a more important outcome. We have created a website that automatically calculates and graphs FRIGHT age and AFRAID scores based on uploaded FI data, along with additional details of how to assses the frailty items in mice including a video demonstration (http://frailtyclocks.sinclairlab.org/) (Supplementary Fig. 6). Code for our clock calculators is also available on github (https://github.com/SinclairLab/frailty). DNA methylation clocks are also promising biomarkers of biological age. In humans, these clocks are highly correlated with chronological age, and are able to predict, at the population level, mortality risk and risk of age-related diseases11,26,27,28,29,30,31. Methylation clocks have also been developed for mice, and shown to correlate with chronological age, and respond to lifespan-increasing interventions such as calorie restriction32,33,34,35, but their association with mortality has not yet been explored. However, the major drawback of these mouse clocks is that they require repeated invasive blood collections and time-consuming and expensive data acquisition and analysis procedures. This is the first time, to our knowledge, that frailty has been used to predict individual life expectancy in either humans or mice. In mice, frailty has previously been associated with mortality17,36 but not used to predict lifespan. Mortality measures in mice that have focused on prediction, have either concentrated on the acute prediction of death such as in the context of sepsis37,38, focused on only a few measures resulting in low or moderate correlations with survival39,40,41,42,43,44, or used short-lived mouse strains5. The AFRAID clock, which was modeled in the commonly used C57BL/6 mouse strain and includes 33 variables, is able to predict mortality with a median error of 53 days across multiple ages. The real value of a biological age measure for mice, however, is in predicting how long individual mice of the same chronological age will live. The AFRAID clock was also able to predict mortality at specific ages, even as early as 24 months (approximately 6 months before the average lifespan, and 12 months before maximum lifespan without intervention). Additionally, when chronological age was replaced by FRIGHT age (predicted chronological age) to build a survival model similar to the AFRAID clock, we saw a similar accuracy of lifespan prediction (Supplementary Fig. 4), indicating that life expectancy can be accurately predicted from FI items alone, without using chronological age as a variable. This ability to predict expected lifespan in mice of the same chronological age provides exciting evidence that the AFRAID clock could be used in interventional longevity studies to understand whether an intervention is working to delay aging at an earlier time point than death. Indeed, we show in the current study that treatment with the ACE inhibitor enalapril reduced FRIGHT age compared to controls but did not change the AFRAID clock. Enalapril is known to increase healthspan but not lifespan22, indicating the value of these measures in detecting healthspan improvements even in the absence of an increase in lifespan. 
The dietary intervention of methionine restriction is known to increase healthspan and lifespan23,24,25, and we saw reduced FRIGHT age and increased AFRAID clock scores in methionine-restricted mice at 27 months compared to controls. This means that had this been a longevity study, these measures would have given an indication of the lifespan outcomes less than halfway through the predicted study timeframe. In the methionine restriction experiment, the predicted age values for this independent cohort were slightly higher than their true values, likely as a result of different baseline variability in frailty in different facilities. Similar effects have been seen with the mouse DNA methylation clocks33,35. Even so, there were still clear differences detected between groups, indicating both the importance of comparing results to controls within studies, and the ultility of these clocks even for independent mouse cohorts in different facilities. Studies in humans have used the FI to determine increased risk of mortality within specific time periods45,46,47,48, but not to predict individual life expectancies, as we have done here for mice. In theory the AFRAID clock could be easily adapted to predict mortality from human FI data. This has likely not been done as of yet, as it would require a large dataset that includes longitudinal assessments of FI items with mortality follow-up. This type of study is rare, particularly in an aging population. Even large cohort studies such as NHANES do not include enough people aged over 80 to allow for their specific ages to be released due to risk of identification. It would be interesting in future research to apply machine learning algorithms such as those used in the current study to predict individual life expectancy using FI data in humans. We explored a range of regression techniques in the current paper. Simple linear and elastic net regressions are easily applied and interpreted, but are limited by being parametric and only considering linear relationships between variables, which reduce their predictive power for our data. The KDM, which was developed specifically to predict biological age by combining linear regressions of individual biomarkers19, has been shown to predict human mortality risk49,50. Here, we applied this method to mice and saw some improved prediction over simple linear regression. For our final models, we used random forest algorithms, which are robust to outliers and noise, and allow for complex non-parametric modeling20. There are some limitations of these complex models, however, including a lack of interpretability of the weighting and interactions of the variables. Some previous studies have also used machine learning approaches for the development of aging biomarkers, including deep neural networks of standard blood biomarkers51,52 and deep learning of brain imaging data53, with promising results54,55. These have been exclusively humans studies, and our findings suggest that future studies exploring biological age biomarkers in mice could benefit from incorporating machine learning approaches such as neural networks or gradient boosting machine algorithms. The aim of all three frailty metrics presented here, FI score, FRIGHT age, and the AFRAID clock, are robust methods for the appraisal of biological age. 
True biological age, however defined, is related to but separate from both chronological age and mortality, and without a clear biomarker with which to compare these three metrics, an assessment of their relative value is difficult. In one sense, FRIGHT age is the best because it tracks most closely with chronological age, with the variation in FRIGHT age (delta age; predicted−true age) representing biological age. An intervention that slows aging would likely suppress all aspects of aging including those that do not impact life expectancy (e.g. hair graying) and FRIGHT age would detect such changes. It is limited, however, by its lack of sensitivity in predicting mortality. In another sense, the AFRAID clock is the superior metric because an increase in life expectancy, median and maximum, is the current benchmark for the success of an aging intervention. One could also argue that overall unweighted FI is the best metric. While it is not best at predicting either chronological age or mortality, it is better than either FRIGHT age or AFRAID clock at predicting both. The best approach may be to employ all three estimates. The predictive power of these models for both age and lifespan could be improved by the inclusion of larger n values (especially at the older ages), the assessment of frailty from ages younger than 21 months, and more complex modeling of the longitudinal aspects of our data. In the current study, we have used standard fixed-time predictive models treating each time point for each mouse as independent data, as there is currently no standard method for predicting outcomes at the level of the individual from data collected longitudinally56,57. Future studies could apply dynamic prediction approaches from the clinical biostatistics literature such as joint modeling57,58 to develop models based on repeated measures of markers from the same mice. The models discussed in this study could also benefit from the incorporation of additional input variables, especially from relatively non-invasive molecular and physiological biomarkers or biometrics. Much can be inferred from tallying gross physiological deficits as has been done here with the mouse FI. These deficits, however, have cellular and molecular origins which may add predictive value at much earlier time points if they can be identified. FIs based on deficits in laboratory measures such as blood tests can detect health deficits before they are clinically apparent in both humans and mice15,59. Furthermore, this study used only male mice, and given the known sex differences in frailty, lifespan, and responses to aging interventions15,60,61,62, it will be important to validate these models in female mice. Ideal future studies will model biological age markers, not to predict chronological age or mortality alone, but rather a more complex composite measure of age-associated outcomes. Indeed, DNA methylation clocks that are trained on a surrogate biomarker and biometrics for mortality including blood markers and plasma proteins plus gender and chronological age31,63 seem to have greater predictive power than those modeled on chronological age or mortality alone64,65. Future studies could develop a models based on the frailty items assessed here but modeled to predict a composite outcome including physiological measures in addition to chronological age. 
Still, even after the development of such composite clocks, the metrics described here—FI, FRIGHT age, and the AFRAID clock—will serve as rapid, non-invasive means to assess biological age and life expectancy, accelerating and augmenting studies to identify interventions that improve healthspan and lifespan. All experiments were conducted according to the protocols approved by the Institutional Animal Care and Use Committee (Harvard Medical School). Aged males C57BL/6Nia mice were ordered from the National Institute on Aging (NIA, Bethesda, MD), and housed at Harvard Medical School in ventilated caging with a 12:12 light cycle, at 71 °F with 45–50% humidity. Mice were group housed (3–4 mice per cage) at the start of the experiment, although over the period of the experiment mice died and mice were left singly housed. A cohort of mice (n = 28) were injected with AAV vectors containing GFP as a control group for a separate longevity experiment at 21 months of age. This did not affect their frailty or longevity in comparison to the rest of the mice (n = 32), which were untreated (Supplementary Fig. 1). A total of 60 mice was used, which is consistent with other mouse longevity studies66,67. Both sets of animals had normal median (967 and 922 days) and 90th percentile (1078 and 1104 days) lifespans, slightly surpassing those cited by Jackson Labs (median 878 days, maximum 1200 days)68,69, demonstrating that the mice were maintained and aged in healthy conditions. Mice were only euthanized if determined to be moribund (likely to die in the next 48 h) by an experienced researcher or a veterinarian based on exhibiting at least two of the following: inability to eat or drink, severe lethargy or persistent recumbence, severe balance or gait disturbance, rapid weight loss (>20% in one week), an ulcerated or bleeding tumor, and dyspnea or cyanosis. In these rare cases (n = 4, or 6.7%), the date of euthanasia was taken as the best estimate of death. Mouse frailty assessment Frailty was assessed longitudinally by the same researcher (A.E.K.), as modified from the original mouse clinical FI14. Malocclusions and body temperature were not assessed in the current study, so an FI of 29 total items was used. Individual FI parameters are listed in Supplementary Fig. 1. Briefly, mice were scored either 0, 0.5, or 1 for the degree of deficit they showed in each of these items with 0 representing no deficit, 0.5 representing a mild deficit, and 1 representing a severe deficit. For regression analyses, prediction variables were added to represent body weight change: total percent weight change, from 21 months of age; recent percent weight change, from 1 month before the assessment; and threshold recent weight change—mice received a score for this item if they gained more than 8% or lost more than 10% of their body weight from the previous month. For more details including images and video, see http://frailtyclocks.sinclairlab.org/. FI scoresheet for automated data entry (Supplementary Fig. 1g) is available online (https://github.com/SinclairLab/frailty). Intervention studies Data from enalapril-treated mice were reanalyzed from previously published work21. Briefly, male C57BL/6 mice purchased from Charles River mice were treated with control or enalapril food (30 mg/kg/day) from 16 months of age and assessed for the FI at 23 months of age. 
For the methionine restriction study, male C57BL/6Nia mice were obtained from the NIA at 19 months of age and fed either a control diet (0.45% methionine) or methionine-restricted diet (0.1% methionine) from 21 months of age. Custom mouse diets were formulated at research diets (New Brunswick, NJ) (catalog #'s A17101101 and A19022001). Mice were assessed for the FI at 27 months of age. Modeling and statistics All analysis was done in Python version 3.6.x (jupyter (5.0.0), scikit-learn (0.19.0), pandas (0.20.1), numpy (1.14.0), scipy (1.0.0), seaborn (0.8.1)) or GraphPad Prism 6.0. Each time point of frailty assessment for each mouse is treated as independent. Training and testing datasets were randomly split 50:50 and were separated by mouse rather than by assessment resulting in n = 106 FI assessments (across 30 mice) for the training set and n = 165 assessments (across 30 mice) for the testing set. There were 7859 total datapoints included in the models, as calculated by 271 (106 + 165) assessments multiplied by 29 frailty items. Missing frailty data (18 individual datapoints out of 7859 total datapoints) were replaced by the median value for that item for that age group. Items included in the chronological age models were frailty assessment items with a significant (p < 0.05) correlation with age (21 items, Table 2). Items included in the lifepan models included all frailty items plus chronological age (32 items, Table 3). All models were assessed with bootstrapping with replacement, repeated 100 times. In each of those 100 iterations, the training set is divided into sub-training and validation sets, and the results on the validation sets are averaged over the 100 iterations. We held out the testing set for only reporting the final accuracy of the chosen model to prevent overfitting. The fit of the models was determined with the r2 value which determines the proportion of the variance in our predicted outcome that is explained by the model, the median residual/error which represented the median difference between the actual and predicted outcome values, and the p value of the regressions. Median and mean error, r2 and p values were compared across measures of FRIGHT age or AFRAID clock (Figs. 3a–c and 4a–c and Supplementary Fig. 4) with one-way ANOVA and Dunnett's post hoc test. Kaplan–Meier survival curves of the highest and lowest quartiles of AFRAID clock scores (Fig. 4) were compared with the log-rank test. FI, FRIGHT age, and AFRAID clock scores across intervention and control groups (Fig. 5) were compared with independent samples two-sided t-tests. For all statistics, p values less than 0.05 were considered significant. All data are presented as mean ± SD, except error bars on figures indicate standard error of the mean. For some graphs (Figs. 1d, e, 3d, e and Supplementary Fig. 2B), datapoints were jittered by up to ±0.5 months to improve data visualization. Least squared and elastic net regressions were performed using algorithms provided in the Scikit-learn package70 in Python. Least-squared regression was performed using the standard LinearRegression algorithm (copy_X = True; fit_intercept=True; n_jobs=None; normalize=False). Elastic net was performed with the ElasticNet algorithm with coefficients restrained as positive for FRIGHT age and negative for AFRAID score. Hyperparameters (FRIGHT: alpha = 0.2, l1_ratio = 0.9; AFRAID: alpha = 1.0, l1_ratio = 0.1) were chosen using bootstrapping. 
(All other hyperparameters were set to default: copy_X = True; fit_intercept=True; max_iter=100,000; normalize=False; precompute=False; selection=cyclic; tol=0.0001.) Standard, rather than survival analysis-oriented, versions of these regression algorithms were used as we have no censored data in our dataset, and we are treating our longitudinal datapoints as independent. We calculated Klemera–Doubal biological age of each mouse using the methods first described by Klemera and Doubal19 and later demonstrated by Levine49 and Belsky et al.71. The KDM uses multiple linear regression but improves upon this by reducing multicollinearity between biological variables, which are intrinsically correlated. The KDM method consists of m regressions of age against each of m predictors. A basic biological age is then predicted based on the following equation (1): $${\mathrm{{BA}}}_E = \frac{{\mathop {\sum }\nolimits_{j = 1}^m (x_j - q_j)(\frac{{k_j}}{{s_j^2}})}}{{\mathop {\sum }\nolimits_{j = 1}^m \left( {\frac{{k_j}}{{s_j^2}}} \right)^2}},$$ where kj, qj, and sj represent the slope, intercept, and root mean square error of each of the m regressions, respectively. While Klemera and Doubal further suggest using chronological age as a corrective term to limit the bounds of each predicted value, we used the version of the algorithm without age as, for the purposes of this study, we wanted to demonstrate the utility of the variables alone as predictors of age without knowledge of the true chronological age of the mouse. Random forests are a type of machine learning algorithm which combines many decision trees into one regression outcome20. Compared to least squared and elastic net regressions, random forests have the advantage of being non-parametric and detecting non-linear relationships. Random forest modeling was performed using the Scikit-learn RandomForestRegressor algorithm70. Models were made with 1000 trees, and the minimum number of samples required for a branch split was limited to prevent overfitting as determined through bootstrapping (FRIGHT: min_samples_leaf=9; AFRAID: min_samples_leaf=6). (All other parameters were set to default: bootstrap=True; criterion=mse; max_depth=None; max_features=auto; max_leaf_nodes=None; min_impurity_decrease=0.0; min_impurity_split=None; min_samples_split=2; min_weight_fraction_leaf=0.0; n_jobs=None; oob_score=False.) We also computed and plotted the feature importance for each of the items with the highest value for this outcome. Feature importance is the amount the error of the model increases when this item is excluded from the model. Two example trees are shown in Supplementary Fig. 5. The source data underlying all figures are provided as a Source Data File. Data are available at https://github.com/SinclairLab/frailty. Any remaining data supporting the findings of the study will be available from the corresponding author upon reasonable request Code is available at https://github.com/SinclairLab/frailty. An amendment to this paper has been published and can be accessed via a link at the top of the paper. Butler, R. N. et al. Aging: the reality: biomarkers of aging: from primitive organisms to humans. J. Gerontol. A Biol. Sci. Med. Sci. 59, B560–B567 (2004). Rantanen, T. et al. Muscle strength and body mass index as long-term predictors of mortality in initially healthy men. J. Gerontol. A Biol. Sci. Med. Sci. 55, 168–173 (2000). Bittner, V. et al. Prediction of mortality and morbidity with a 6-minute walk test in patients with left ventricular dysfunction. 
JAMA 270, 1702–1707 (1993). Alpert, A. et al. A clinically meaningful metric of immune age derived from high-dimensional longitudinal monitoring. Nat. Med. 25, 487–495 (2019). Martínez de Toda, I., Vida, C., Sanz San Miguel, L. & De la Fuente, M. When will my mouse die? Life span prediction based on immune function, redox and behavioural parameters in female mice at the adult age. Mech. Ageing Dev. 182, 111125 (2019). Mather, K. A., Jorm, A. F., Parslow, R. A. & Christensen, H. Is telomere length a biomarker of aging? A review. J. Gerontol. A Biol. Sci. Med. Sci. 66 A, 202–213 (2011). Krištić, J. et al. Glycans are a novel biomarker of chronological and biological ages. J. Gerontol. A Biol. Sci. Med. Sci. 69, 779–789 (2014). Wang, A. S. & Dreesen, O. Biomarkers of cellular senescence and skin aging. Front. Genet. 9, 1–14 (2018). Horvath, S. DNA methylation age of human tissues and cell types. Genome Biol. 14, R115 (2013). Kim, S., Myers, L., Wyckoff, J., Cherry, K. E. & Jazwinski, S. M. The frailty index outperforms DNA methylation age and its derivatives as an indicator of biological age. GeroScience 39, 83–92 (2017). Horvath, S. & Raj, K. DNA methylation-based biomarkers and the epigenetic clock theory of ageing. Nat. Rev. Genet. 19, 371–384 (2018). Searle, S. D., Mitnitski, A., Gahbauer, E. A., Gill, T. M. & Rockwood, K. A standard procedure for creating a frailty index. BMC Geriatr. 8, 24 (2008). Mitnitski, A. B., Mogilner, A. J. & Rockwood, K. Accumulation of deficits as a proxy measure of aging. Sci. World J. 1, 323–336 (2001). Whitehead, J. C. et al. A clinical frailty index in aging mice: comparisons with frailty index data in humans. J. Gerontol. Biol. Sci. Med. Sci. 69, 621–632 (2014). Kane, A. E., Keller, K. M., Heinze-Milne, S., Grandy, S. A. & Howlett, S. E. A murine frailty index based on clinical and laboratory measurements: links between frailty and pro-inflammatory cytokines differ in a sex-specific manner. J. Gerontol. A Biol. Sci. Med. Sci. 74, 275–282 (2019). Feridooni, H. A. A. et al. The impact of age and frailty on ventricular structure and function in C57BL/6J mice. J. Physiol. 595, 3721–3742 (2017). Rockwood, K. et al. A frailty index based on deficit accumulation quantifies mortality risk in humans and in mice. Sci. Rep. 7, 43068 (2017). Article ADS CAS PubMed PubMed Central Google Scholar Kane, A. et al. Impact of longevity interventions on a validated mouse clinical frailty index. J. Gerontol. A Biol. Sci. Med. Sci. 71, 333–339 (2016). Klemera, P. & Doubal, S. A new approach to the concept and computation of biological age. Mech. Ageing Dev. 127, 240–248 (2006). Breiman, L. Random forests. Mach. Learn. 45, 5–32 (2001). Article MATH Google Scholar Keller, K., Kane, A., Heinze-Milne, S., Grandy, S. A. & Howlett, S. E. Chronic treatment with the ACE inhibitor enalapril attenuates the development of frailty and differentially modifies pro- and anti-inflammatory cytokines in aging male and female C57BL/6 mice. J. Gerontol. A 74, 1149–1157 (2019). Harrison, D. E. et al. Rapamycin fed late in life extends lifespan in genetically heterogeneous mice. Nature 460, 392–395 (2009). Miller, R. A. et al. Methionine-deficient diet extends mouse lifespan, slows immune and lens aging, alters glucose, T4, IGF-I and insulin levels, and increases hepatocyte MIF levels and stress resistance. Aging Cell 4, 119–125 (2005). Orentreich, N., Matias, J., DeFelice, A. & Zimmerman, J. Low methionine ingestion by rats extends life span. J. Nutr. 123, 269–274 (1993). Sun, L., Sadighi Akha, A. 
A., Miller, R. A. & Harper, J. M. Life-span extension in mice by preweaning food restriction and by methionine restriction in middle age. J. Gerontol. A Biol. Sci. Med. Sci. 64, 711–722 (2009). Horvath, S. & Levine, A. J. HIV-1 infection accelerates age according to the epigenetic clock. J. Infect. Dis. 212, 1563–1573 (2015). Horvath, S. et al. Accelerated epigenetic aging in Down syndrome. Aging Cell 14, 491–495 (2015). Horvath, S. et al. An epigenetic clock analysis of race/ethnicity, sex, and coronary heart disease. Genome Biol. 17, 0–22 (2016). Quach, A. et al. Epigenetic clock analysis of diet, exercise, education, and lifestyle factors. Aging (Albany NY) 9, 419–446 (2017). Maierhofer, A. et al. Accelerated epigenetic aging in Werner syndrome. Aging (Albany NY) 9, 1143–1152 (2017). Levine, M. E. et al. An epigenetic biomarker of aging for lifespan and healthspan. Aging (Albany NY) 10, 573–591 (2018). Meer, M. V., Podolskiy, D. I., Tyshkovskiy, A. & Gladyshev, V. N. A whole lifespan mouse multi-tissue DNA methylation clock. Elife 7, 1–16 (2018). Petkovich, D. A. et al. Using DNA methylation profiling to evaluate biological age and longevity interventions. Cell Metab. 25, 954–960.e6 (2017). Cole, J. J. et al. Diverse interventions that extend mouse lifespan suppress shared age-associated epigenetic changes at critical gene regulatory regions. Genome Biol. 18, 1–16 (2017). Stubbs, T. M. et al. Multi-tissue DNA methylation age predictor in mouse. Genome Biol. 18, 1–14 (2017). Baumann, C. W., Kwak, D. & Thompson, L. D. V. Assessing onset, prevalence and survival in mice using a frailty phenotype. Aging (Albany NY) 10, 4042–4053 (2018). Trammell, R. A. & Toth, L. A. Markers for predicting death as an outcome for mice used in infectious disease research. Comp. Med. 61, 492–498 (2011). Ray, M. A., Johnston, N. A., Verhulst, S., Trammell, R. A. & Toth, L. A. Identification of markers for imminent death in mice used in longevity and aging research. J. Am. Assoc. Lab. Anim. Sci. 49, 282–288 (2010). Ingram, D. K., Archer, J. R., Harrison, D. E. & Reynolds, M. A. Physiological and behavioral correlates of lifespan in aged C57BL/6J mice. Exp. Gerontol. 17, 295–303 (1982). Miller, R. A. Biomarkers of aging: prediction of longevity by using age-sensitive T-cell subset determinations in a middle-aged, genetically heterogeneous mouse population. J. Gerontol. A Biol. Sci. Med. Sci. 56, 180–186 (2001). Miller, R. A., Harper, J. M., Galecki, A. & Burke, D. T. Big mice die young: early life body weight predicts longevity in genetically heterogeneous mice. Aging Cell 1, 22–29 (2002). Harper, J. M., Wolf, N., Galecki, A. T., Pinkosky, S. L. & Miller, R. A. Hormone levels and cataract scores as sex-specific, mid-life predictors of longevity in genetically heterogeneous mice. Mech. Ageing Dev. 124, 801–810 (2003). Fahlström, A., Zeberg, H. & Ulfhake, B. Changes in behaviors of male C57BL/6J mice across adult life span and effects of dietary restriction. Age (Omaha) 34, 1435–1452 (2012). Swindell, W., Harper, J. & Miller, R. How long will my mouse live? Machine learning approaches for the prediction of mouse lifespan. J. Gerontol. A Biol. Sci. Med. Sci. 63, 895–906 (2008). Song, X., Mitnitski, A. & Rockwood, K. Prevalence and 10-Year outcomes of frailty in older adults in relation to deficit accumulation. J. Am. Geriatr. Soc. 58, 681–687 (2010). Kane, A. E., Gregson, E., Theou, O., Rockwood, K. & Howlett, S. E. The association between frailty, the metabolic syndrome, and mortality over the lifespan. 
GeroScience 39, 221–229 (2017). Blodgett, J., Theou, O., Kirkland, S., Andreou, P. & Rockwood, K. Frailty in NHANES: comparing the frailty index and phenotype. Arch. Gerontol. Geriatr. 60, 464–470 (2015). Hoogendijk, E. O. et al. Development and validation of a frailty index in the Longitudinal Aging Study Amsterdam. Aging Clin. Exp. Res. 29, 927–933 (2017). Levine, M. E. Modeling the rate of senescence: can estimated biological age predict mortality more accurately than chronological age? J. Gerontol. A Biol. Sci. Med. Sci. 68, 667–674 (2013). Levine, M. E. & Crimmins, E. M. A comparison of methods for assessing mortality risk. Am. J. Hum. Biol. 26, 768–776 (2014). Putin, E., Mamoshina, P., Aliper, A., Korzinkin, M. & Moskalev, A. Deep biomarkers of human aging: Application of deep neural networks to biomarker development. Aging 8, 1021–1030 (2016). Mamoshina, P. et al. Population specific biomarkers of human aging: a big data study using South Korean, Canadian, and Eastern European patient populations. J. Gerontol. A Biol. Sci. Med. Sci. 73, 1482–1490 (2018). Cole, J. H. et al. Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. Neuroimage 163, 115–124 (2017). Zhavoronkov, A., Li, R., Ma, C. & Mamoshina, P. Deep biomarkers of aging and longevity: from research to applications. Aging (Albany NY) 11, 10771–10780 (2019). Gialluisi, A., Di Castelnuovo, A., Donati, M. B., de Gaetano, G. & Iacoviello, L. Machine learning approaches for the estimation of biological aging: the road ahead for population studies. Front. Med. 6, 1–7 (2019). Welten, M. et al. Repeatedly measured predictors: a comparison of methods for prediction modeling. Diagnostic Progn. Res. 2, 1–10 (2018). Furgal, A. K. C., Sen, A. & Taylor, J. M. G. Review and comparison of computational approaches for joint longitudinal and time-to-event models. Int. Stat. Rev. 87, 393–418 (2019). MathSciNet PubMed PubMed Central Google Scholar Li, K. & Luo, S. Dynamic predictions in Bayesian functional joint models for longitudinal and time-to-event data: An application to Alzheimer's disease. Stat. Methods Med. Res. 28, 327–342 (2019). Article MathSciNet PubMed Google Scholar Howlett, S. E., Rockwood, M. R. H., Mitnitski, A. & Rockwood, K. Standard laboratory tests to identify older adults at increased risk of death. BMC Med. 12, 171 (2014). Gordon, E. H. & Hubbard, R. E. The pathophysiology of frailty: why sex is so important. J. Am. Med. Dir. Assoc. 19, 4–5 (2018). Austad, S. N. & Fischer, K. E. Perspective sex differences in lifespan. Cell Metab. 23, 1022–1033 (2016). Austad, S. N. & Bartke, A. Sex differences in longevity and in responses to anti-aging interventions: a mini-review. Gerontology 62, 40–46 (2016). Lu, A. T. et al. DNA methylation GrimAge strongly predicts lifespan and healthspan. Aging (Albany NY) 11, 303–327 (2019). Zhang, Y. et al. DNA methylation signatures in peripheral blood strongly predict all-cause mortality. Nat. Commun. 8, 1–11 (2017). Ackert‐Bicknell, C. L. et al. Aging research using mouse models. Curr. Protoc.Mouse Biol, 5, 95–133 (2015). Mitchell, S. J., et al. Daily fasting improves health and survival in male mice independent of diet composition and calories. Cell metab. 29, 221–228 (2019). Festing, M. in Inbred Strains in Biomedical Research Ch, 7, 137–266 (Palgrave, London, 1979). Kunstyr, I. & Leuenberger, H. G. W. Gerontological data of C57BL/6J Mice. I. Sex Differences in survival curves. J. Gerontol. 30, 157–162 (1975). Pedregosa, F. et al. 
Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011). MathSciNet MATH Google Scholar Belsky, D. W. et al. Eleven telomere, epigenetic clock, and biomarker-composite quantifications of biological aging: do they measure the same thing? Am. J. Epidemiol. 187, 1220–1230 (2018). We would like to thank Alexander Colville, Doyle Lokitiyakul, and Yiming Cai for their help in carrying out the longevity study, and Maeve MacNamara and Daniel Vera for their help in setting up the website. This work was supported by the Glenn Foundation for Medical Research and grants from the NIH (R37 AG028730, R01 AG019719, R01 DK100263, R01 DK090629-08), and Epigenetics Seed Grant (601139_2018) from Department of Genetics, Harvard Medical School. A.E.K. is supported by an NHMRC CJ Martin biomedical fellowship (GNT1122542). Grants to S.E.H. from the Canadian Institutes for Health Research (PGT 162462) and the Heart and Stroke Foundation of Canada (G-19-0026260). E.W. is supported by an NIH Grant (5T32GM070449). Grant to J.R.M. from the NIH (2R56AG036712-06A1). These authors contributed equally: Michael B. Schultz, Alice E. Kane. Blavatnik Institute, Department of Genetics, Paul F. Glenn Center for Biology of Aging Research at Harvard Medical School, Boston, MA, USA Michael B. Schultz, Alice E. Kane, Michael S. Bonkowski & David A. Sinclair Charles Perkins Centre, The University of Sydney, Sydney, NSW, Australia Alice E. Kane Department of Genetics and Complex Diseases, Harvard T.H. Chan School of Public Health, Boston, MA, USA Sarah J. Mitchell, Michael R. MacArthur & James R. Mitchell Department of Computational Medicine & Bioinformatics, University of Michigan, Ann Arbor, MI, USA Elisa Warner Voloridge Investment Management, LLC and VoLo Foundation, Jupiter, FL, USA David S. Vogel Departments of Pharmacology and Medicine (Geriatric Medicine), Dalhousie University, Halifax, NS, Canada Susan E. Howlett Department of Dermatology, The Feinberg School of Medicine, Northwestern University, Chicago, IL, USA Michael S. Bonkowski Department of Pharmacology, School of Medical Sciences, The University of New South Wales, Sydney, NSW, Australia Michael B. Schultz Sarah J. Mitchell Michael R. MacArthur James R. Mitchell M.B.S. and A.E.K. did most data analysis and wrote the manuscript; M.B.S. and M.S.B. conceived of and implemented the mouse lifespan study; A.E.K. did the frailty assessments; E.W. did the Klemera–Doubal analysis; S.J.M., M.R.M., and J.R.M. provided the methionine restriction data; S.E.H. provided the enalapril treatment data; D.S.V., J.R.M., S.J.M., M.R.M., and D.A.S. provided insight on study design and analysis; all authors read and edited the manuscript. Correspondence to David A. Sinclair. D.A.S. is a founder, equity owner, advisor to, director of, consultant to, investor in and/or inventor on patents licensed to Vium, Jupiter Orphan Therapeutics, Cohbar, Galilei Biosciences, GlaxoSmithKline, OvaScience, EMD Millipore, Wellomics, Inside Tracker, Caudalie, Bayer Crop Science, Longwood Fund, Zymo Research, Immetas, and EdenRoc Sciences (and affiliates Arc-Bio, Dovetail Genomics, Claret Bioscience, Revere Biosensors, UpRNA and MetroBiotech, Liberty Biosecurity); Life Biosciences (and affiliates Selphagy, Senolytic Therapeutics, Spotlight Biosciences, Animal Biosciences, Iduna, Continuum Biosciences, Jumpstart Fertility (an NAD booster company), and Lua Communications); Iduna is a cellular reprogramming company, partially owned by Life Biosciences. 
D.A.S sits on the board of directors of both companies. D.A.S. is an inventor on a patent application filed by Mayo Clinic and Harvard Medical School that has been licensed to Elysium Health; his personal share is directed to the Sinclair lab. For more information see https://genetics.med.harvard.edu/sinclair-test/people/sinclair-other.php. M.S.B. is a stockholder for MetroBiotech and Animal Biosciences, a division of Lifebiosciences. Other authors have no conflicts to declare. Peer review information Nature Communications thanks Kenneth Seldeen, Anne-Ulrike Trendelenburg, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Schultz, M.B., Kane, A.E., Mitchell, S.J. et al. Age and life expectancy clocks based on machine learning analysis of mouse frailty. Nat Commun 11, 4618 (2020). https://doi.org/10.1038/s41467-020-18446-0 FGF21 is required for protein restriction to extend lifespan and improve metabolic health in male mice Cristal M. Hill Diana C. Albarado Christopher D. Morrison Cell-type-specific aging clocks to quantify aging and rejuvenation in neurogenic regions of the brain Matthew T. Buckley Eric D. Sun Anne Brunet Nature Aging (2022) Age-dependent impact of two exercise training regimens on genomic and metabolic remodeling in skeletal muscle and liver of male mice Michel Bernier Ignacio Navas Enamorado Rafael de Cabo npj Aging (2022) The degree of frailty as a translational measure of health in aging Andrew D. Rutenberg Kenneth Rockwood Determination of Biological Age: Geriatric Assessment vs Biological Biomarkers Lucas W. M. Diebel Current Oncology Reports (2021)
Cannabidiol rather than Cannabis sativa extracts inhibit cell growth and induce apoptosis in cervical cancer cells Sindiswa T. Lukhele1 & Lesetja R. Motadi1 Cervical cancer remains a global health issue among females of Sub-Saharan Africa, with over half a million new cases reported each year. Different therapeutic regimens have been suggested in various regions of Africa; however, over a quarter of a million women die of cervical cancer annually. This makes it the most lethal cancer amongst black women and calls for urgent therapeutic strategies. In this study we compare the anti-proliferative effects of crude extracts of Cannabis sativa and of its main compound, cannabidiol, on different cervical cancer cell lines. To achieve our aim, phytochemical screening, MTT assay, cell growth analysis, flow cytometry, morphology analysis, Western blot, caspase 3/7 assay, and ATP measurement assay were conducted. The results indicate that both cannabidiol and Cannabis sativa extracts were able to halt cell proliferation in all cell lines at varying concentrations. They further revealed that apoptosis was induced by cannabidiol, as shown by an increased sub-G0/G1 population and annexin V staining. Apoptosis was confirmed by overexpression of p53, caspase 3 and Bax. Apoptosis induction was further confirmed by morphological changes, an increase in caspase 3/7 activity and a decrease in ATP levels. In conclusion, these data suggest that cannabidiol, rather than Cannabis sativa crude extracts, prevents cell growth and induces cell death in cervical cancer cell lines. Cannabis sativa is a dioecious plant that belongs to the Cannabaceae family and originates from Central and Eastern Asia [11, 28]. It is widely distributed in countries including Morocco, South Africa, the United States of America, Brazil, India, and parts of Europe [14, 28]. Cannabis sativa grows annually in tropical and warm regions around the world [11]. Different ethnic groups around the world use Cannabis sativa for smoking, for preparing concoctions to treat diseases, and for various cultural purposes [17]. According to [28], it is composed of chemical constituents including cannabinoids, nitrogenous compounds, flavonoid glycosides, steroids, terpenes, hydrocarbons, non-cannabinoid phenols, vitamins, amino acids, proteins, sugars and other related compounds. Cannabinoids are a family of naturally occurring compounds highly abundant in the Cannabis sativa plant [1, 6, 14, 24]. Screening of Cannabis sativa has led to the isolation of at least 66 types of cannabinoid compounds [1, 14, 30]. These compounds are structurally similar or possess near-identical pharmacological activities, and offer various potential applications including the ability to inhibit cell growth, proliferation and inflammation [22]. One such compound is cannabidiol (CBD), which is among the top three most widely studied compounds, following delta-9-tetrahydrocannabinol (Δ9-THC) [14]. It has been found to be effective against a variety of disorders including neurodegenerative disorders, autoimmune diseases, and cancer [24, 25]. In a research study conducted by [26], it was found that CBD inhibited cell proliferation and induced apoptosis in a series of human breast cancer cell lines including MCF-10A, MDA-MB-231, MCF-7, SK-BR-3, and ZR-7-1, and further studies found it to possess similar characteristics in the PC-3 prostate cancer cell line [25].
However, before we can proceed to clinical trials, a range of cancers should be tested in vitro to give us a clear mechanism of action. We propose that Cannabis sativa, and in particular cannabidiol, plays an important role in helping the body fight cancer through inhibition of pain and of cell growth. Therefore, the aim of this study was to evaluate the cytotoxic and anti-proliferative properties of Cannabis sativa and its isolate, cannabidiol, in cervical cancer cell lines. An aggressive HeLa, a metastatic ME-180 and a primary SiHa cell line were purchased from ATCC (USA, MD). Camptothecin was supplied by Calbiochem® and cannabidiol was purchased from Sigma-Aldrich and used as a standard reference. Plant collection and preparation of extracts Fresh leaves, stems and roots of Cannabis sativa were collected from Nhlazatshe 2, in Mpumalanga province. Air-dried C. sativa plant material was powdered and soaked for 3 days in n-hexane, ethanol and n-butanol, separately. Extracts were filtered using Whatman filter paper and dried. Dimethyl sulfoxide was added to the dried extracts to give a final concentration of 100 mg/ml. Different concentrations (50, 100, and 150 μg/ml) of C. sativa extracts were prepared from the stock and used to treat cells during the MTT assay. HPLC-mass spectrometry was performed to verify the presence of cannabidiol in our extracts. The plant was identified by a forensic specialist in a forensic laboratory in Pretoria. The laboratory number is 201213/2009 and the voucher number is CAS239/02/2009. HeLa, ME-180 and SiHa were cultured in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10 % Fetal Bovine Serum (FBS) (Highveld Biological) and 1 % penicillin/streptomycin (Sigma, USA). Cells were maintained at 37 °C under 5 % carbon dioxide (CO2) and 95 % relative humidity. Every third day, old media was removed and replaced with fresh media to promote growth until the cells reached a confluence of ~70–80 %. MTT assay Ninety microlitres of HeLa and SiHa cells were seeded into 96-well plates at 5×10^3 cells per well and incubated overnight at 37 °C under 5 % CO2 and 95 % relative humidity to promote cell attachment at the bottom of the plate. Media was changed and the cells were treated with Cannabis sativa plant extracts at various concentrations (0, 50, 100, and 150 μg/ml (w/v)) for 24 h. After 24 h, the cells were treated with 10 μl of (5 mg/ml) MTT reagent (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) for 4 h at 37 °C under 5 % CO2. Ninety microlitres of DMSO was added into each well, including wells containing media only that served as a blank, to dissolve the formazan crystals. Camptothecin and DMSO were included as controls. Optical density was measured using a microplate reader (Bio-Rad) at 570 nm to determine the percentage of viable cells and account for the cell death induced, according to the equation below: $$ \%\ \mathrm{Cell}\ \mathrm{viability}=\frac{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{treated}\ \mathrm{cells}-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{blank}}{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{untreated}\ \mathrm{cells}-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{blank}}\times 100 $$ Cell growth analysis Before seeding cells, 100 μl of media was added to the 16-well E-plate, which was placed in the incubator to record background readings. A blank with media only was included to rule out the possibility of media having a negative effect on the cells.
In each well of the E-plate, 1×104 cells were seeded and left in the incubator for 30 min to allow the cells to adhere to the bottom of the plate. The E-plate was placed and locked in the RTCA machine and experiment allowed to run for 22 h prior to the addition of the test compounds. Cells were treated with various concentrations (0, 50, 100, 150 μg/ml) of C. sativa hexane extract. Following treatment, the experiment was allowed to run for a further 22 h. Camptothecin (0.3 μM (v/v)) and DMSO (0.1 % (v/v)) were used as controls for comparative purposes. Procedure was repeated for C. sativa butanol extract. Apoptosis assay Cells were washed twice with 100 μl of cold Biolegend's cell staining buffer followed by resuspension in 100 μl of Annexin V binding buffer. A 100 μl of cell suspension was transferred into 15 ml falcon tube and 5 μl of FITC Annexin V and 10 μl of Propidium iodide solution (PI) were added into untreated and treated cell suspension. The cells were gently vortexed and incubated at room temperature (25 °C) in the dark for 15 min. After 15 min, 400 μl of Annexin V binding buffer was added to the cells. The stained cells were analysed using FACSCalibur (BD Biosciences, USA). Five hundred microliter of 1×104 cells was added onto a 6-well plate containing coverslips. The plate was incubated overnight to allow the cells to attach. Following attachment, media was removed and cells were washed twice with PBS, prior to incubation with IC50 of Cannabis sativa extracts for 24 h. After 24 h, media was removed and cells were washed twice with PBS. Four percent (4 %) was added into each well and the plate incubated for 20 min at room temperature, to allow efficient fixation of cells. Cells were washed twice with PBS and once with 0.1 % BSA wash buffer and further stained with DAPI and Annexin V/FITC for 5 min. BX-63 Olympus microscope (Germany) was used to visualize the cells. Mitochondrial assay (ATP detection) Twenty five microlitres of 1×104 cells per well were plated in a white 96-well luminometer plate overnight. Cells were treated with 25 μl of IC50 concentrations of Cannabis sativa crude extracts and cannabidiol dissolved in a glucose free media supplemented with 10 mM galactose. The plate was incubated at 37° in a humidified and CO2-supplemented incubator for a period of 24 h. Fifty microlitres of ATP detection reagent was added to each well and the plate further incubated for 30 min. Luminescence was measured using GLOMAX (Promega, USA). The assay was conducted in duplicates and ATP levels were reported as a mean of Relative Light Units (RLU). The following formula was used to calculate the ATP levels in RLU: $$ \mathbf{R}\mathbf{L}\mathbf{U}=\mathbf{Luminescence}\left(\mathbf{sample}\right)-\mathbf{Luminescence}\left(\mathbf{blank}\right) $$ Caspase 3/7 activity A hundred microliters of 1×104 cells were plated overnight on a 96-well luminometer plate and allowed to attach overnight. The next day, cells were treated with 0.3 μM camptothecin and the IC50 concentrations of Cannabis sativa crude extracts and further incubated for a period of 24 h. Caspase-Glo 3/7 assay was performed according to manufacturer's protocol (Promega, USA). Briefly, following treatment, media was replaced with caspase glo 3/7 reagent mixed with a substrate at a ratio of 1:1 v/v of DMEM: Caspase-glo 3/7 reagent and was incubated for 2 h at 37 °C in 5 % CO2. Luminescence was quantified using GLOMAX from Promega (USA). 
The assay was conducted in duplicate and caspase 3/7 activity was reported as a mean of Relative Light Units (RLU). The following formula was used to calculate caspase 3/7 activity in RLU: $$ \mathbf{R}\mathbf{L}\mathbf{U}=\mathbf{Luminescence}\left(\mathbf{sample}\right)-\mathbf{Luminescence}\left(\mathbf{blank}\right) $$ Cells were harvested with 2 ml of 0.05 % trypsin-EDTA. Ten millilitres of media was added to the cells to inactivate trypsin and the cell suspension was centrifuged at 1500 rpm for 10 min. The supernatant was discarded and the pellet was re-suspended twice in 1 ml PBS. The cell suspension was centrifuged at 5000 rpm for 2–5 min and the PBS was discarded. Seven hundred microlitres of pre-chilled absolute ethanol was added to the cell suspension, followed by storage at −20 °C for 30 min, to allow efficient permeabilization and fixing of the cells. After 30 min, cells were centrifuged at 5000 rpm for 5 min to remove ethanol. The pellet was washed twice with PBS and centrifuged at 5000 rpm to remove PBS. Five hundred microlitres of FxCycle™ PI/RNase Staining solution (Life Technologies, USA) was added to the cells and vortexed for 30 s. The cells were analysed with FACSCalibur (BD Biosciences, USA). Following 24 h of treatment with IC50 concentrations, cells were lysed using RIPA buffer (50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1 % NP-40, 0.1 % SDS, 2 mM EDTA). Protein content was measured by the BCA assay and equal amounts were electrophoresed in an SDS polyacrylamide gel and then transferred onto nitrocellulose membranes. Membranes were subsequently immunoblotted with mouse monoclonal antibodies against p53, Bcl-2, Bax, RBBP6, caspase-3 and caspase-9 at 1:500–1000 dilutions as primary antibodies, while a goat anti-mouse horseradish peroxidase-conjugated IgG (Santa Cruz, USA) was used at a 1:2000 dilution as the secondary antibody. The membranes were developed using a chemiluminescence detection kit (Santa Cruz Biotechnology, CA) and imaged using a Bio-Rad ChemiDoc MP. Experiments were performed in duplicate. Statistical analysis of the graphical data was expressed as the mean ± standard deviation. The p-value was analysed in comparison to the untreated cells using Student's t-test, wherein p < 0.05 was considered significant. Effect of Cannabis sativa and cannabidiol on SiHa, HeLa, and ME-180 cells To determine the half-maximal inhibitory concentration (IC50) of both Cannabis sativa and cannabidiol, the MTT assay was used. Camptothecin, our positive control, significantly reduced cell viability in SiHa (40.36 %), HeLa (47.19 %), and ME-180 (32.25 %) cells. As shown in Fig. 1a and d, the IC50 was cell type dependent rather than time dependent, with SiHa at less than 50 μg/ml in both butanol (56 %) and hexane (48.9 %). Similarly, the IC50 in HeLa was 100 μg/ml (p < 0.001) (Fig. 1b), while ME-180 cells treated with the butanol extract exhibited an IC50 of 100 μg/ml, reducing viability to 48.6 % (Fig. 1c), and the hexane extract IC50 was 50 μg/ml with 54 % death (Fig. 1f). This was not the case for cannabidiol, where the IC50 for SiHa (51 %) and HeLa (50 %) occurred at a much lower dose (3.2 μg/ml), and for ME-180 cells (56 %) at 1.5 μg/ml, when compared to Cannabis sativa extracts (p < 0.001) (Fig. 1g–i). Dimethyl sulfoxide (DMSO) was included as a vehicle control and inhibited between 4 and 11 %, whereas ethanol, included because cannabidiol was alcohol-extracted, inhibited between 7.3 and 7.8 %. Representative cell viability bar graphs of cervical cancer cell lines.
MTT assay was conducted to determine the IC50 following incubation of SiHa, HeLa, and ME-180 cells with different doses of butanol extract (a, b, c), hexane extract (d, e, f), and cannabidiol extract (g, h, i) for a period of 24 h. Data are expressed as mean value ± standard deviation (SD). The level of significance was determined using Student's t-test, with ns representing p > 0.05, *** p < 0.001, ** p < 0.01, and * p < 0.05 xCELLigence analysis of the cell growth pattern after treatment of cervical cancer cells with Cannabis sativa extracts and cannabidiol. SiHa (a, d, g), HeLa (b, e, h), and ME-180 (c, f, i) cells were seeded for a period of 22–24 h, followed by treatment with the IC50 concentration of butanol (a, b, c), hexane (d, e, f), and cannabidiol (g, h, i) Effect of Cannabis sativa extracts and cannabidiol on cell growth of cervical cancer cells The IC50 concentrations obtained from the MTT assay were tested for their ability to alter cell viability in real time. An impedance-based system was employed to evaluate the effect of Cannabis sativa and cannabidiol on SiHa, HeLa, and ME-180. Cells were seeded in an E-plate and allowed to attach. Cells were further treated with the IC50 for a period of 22–24 h, depending on their doubling time. Continuous changes in the impedance were measured and displayed as cell index (CI). Little can be read from the xCELLigence data except that cannabidiol reduced the cell index in all cell lines, while the plant extracts gave mixed results, sometimes showing a reduction and at other times remaining unchanged (Fig. 3), suggesting that cannabidiol is the most effective compound. Apoptosis assessment following treatment of cervical cancer cells with IC50 concentrations of Cannabis sativa extract and cannabidiol. These bar graphs are representative of apoptosis induction in SiHa (a and d), HeLa (b and e), and ME-180 (c and f) cells. Cells were treated with the IC50 of Cannabis sativa extracts and cannabidiol for a period of 24 h and further stained with Annexin-V/PI. Data represented as mean ± standard deviation with *** p < 0.001, ** p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated Morphological analysis and assessment of apoptosis in SiHa cells stained with DAPI and Annexin V dye. Cells were incubated with the IC50 of Cannabis sativa extracts for a period of 24 h. Cells were stained with Annexin V and counterstained with DAPI. BX63 fluorescence confocal microscopy was used to visualize the cells Morphological analysis and assessment of apoptosis in HeLa cells stained with DAPI and Annexin V dye. Cells were incubated with the IC50 of Cannabis sativa extracts for a period of 24 h. Cells were stained with Annexin V and counterstained with DAPI. BX63 fluorescence confocal microscopy was used to visualize the cells Cannabis sativa extracts and cannabidiol induce apoptosis in cervical cancer cells Flow cytometry revealed a significant increase in SiHa cells undergoing apoptosis during treatment with butanol (from 2 to 28.5 %) and hexane (from 2 to 17.2 %), as compared to camptothecin with 30.4 %. In HeLa cells, apoptosis increased to 31.9 % with the butanol extract and only 15.3 % with the hexane extract (Fig. 3b). A similar event was observed following treatment of ME-180 cells with the butanol extract, where 44.8 % apoptosis was recorded, and 43.2 % in hexane-treated cells (Fig. 3c). Cannabidiol was also tested for its ability to induce apoptosis in all three cell lines.
The results further confirmed that the type of cell death induced was apoptosis. The figure shows that cannabidiol induced early apoptosis in all three cell lines. Cannabidiol was more effective in inducing apoptosis in comparison to both extracts of Cannabis sativa. In SiHa cells cannabidiol induced 51.3 % apoptosis (Fig. 3d), 43.3 % in HeLa and 28.6 % in ME-180 cell lines (Fig. 3f). Effect of Cannabis sativa extracts and cannabidiol on the morphology of SiHa and HeLa cells To characterise the type of cell death following treatment with our test compounds, cells were stained with DAPI and Annexin V to show whether apoptosis was taking place. Treatment of SiHa and HeLa cells with the IC50 of both butanol and hexane extracts confirmed the type of cell death as apoptosis, since the cells picked up the green Annexin V stain, which binds to phosphatidylserine residues that appear in the early stages of apoptosis. Similar results were also observed in cannabidiol-treated cells. Another feature representative of cell death is a change in morphology. Live cells displayed round blue nuclei following staining with DAPI. Exposure of SiHa and HeLa cells to the IC50 of Cannabis sativa extracts caused a change in morphology coupled with an uptake of annexin V. Loss of shape, nuclear fragmentation, reduction in cell size and blebbing of the cell membrane were among the observed morphological features associated with apoptosis (Fig. 6). Caspase 3/7 activity after treatment of SiHa, HeLa, and ME-180 cells with IC50 of Cannabis sativa extract and cannabidiol. Cells were treated with the IC50 of Cannabis sativa and cannabidiol extracts for a period of 24 h. Caspase 3/7 reagent was added to the treated cells for 1 h. Luminescence was measured in RLU using a GLOMAX instrument. Data represented as mean ± standard deviation with *** p < 0.001, ** p < 0.01, and * p < 0.05 representing the level of significance in comparison to the untreated Bar graphs representing changes in the ATP levels following treatment of cervical cancer cells with Cannabis sativa and cannabidiol. Cells were treated with the IC50 of both Cannabis sativa extracts and cannabidiol for a period of 2–24 h. Untreated cells and camptothecin were included as controls for comparative purposes. The level of significance was determined using Student's t-test with *** p < 0.001, ** p < 0.01, * p < 0.05, and ns p > 0.05 in comparison to the untreated Effect of Cannabis sativa extracts and cannabidiol on the ATP levels Since adenosine 5'-triphosphate (ATP) acts as a biomarker for cell proliferation and cell death, an ATP assay was conducted. This was done in order to determine whether Cannabis sativa and cannabidiol deplete ATP levels in cervical cancer cells. SiHa, HeLa, and ME-180 cells were treated at different time points, between 2 and 24 h. ATP levels were first detected after 2 h. In general, ATP depletion was cell type dependent. In HeLa cells treated with the butanol and hexane crude extracts, ATP was significantly reduced by 74 % (from 627621 to 164208 RLU) and 78 % (from 627621 to 133693 RLU), respectively, while in SiHa cells there was a reduction of 31 % (from 4719589 to 3221245 RLU) and 22.5 % (from 4719589 to 3655730 RLU), respectively (figure). In ME-180 cells there was no change between treated and untreated cells. Similar results were observed in cannabidiol-treated cells.
At 2 h, treatment with the IC50 led to a reduction in ATP levels by ~61 % (from 4704419 to 1802508 RLU), 93 % (from 627621 to 40371 RLU), and 8 % (from 798688 to 734039 RLU) in SiHa, HeLa, and ME-180 cells respectively (Figure). A prolonged incubation period (24 h) of cells with the IC50 led to a further decrease in the ATP levels by ~66 % (from 4486150 to 1497648 RLU), 97 % (from 601694 to 13426 RLU), and 8.5 % (from 790757 to 723039 RLU) in SiHa, HeLa, and ME-180 cells respectively. This could mean that cannabidiol depletes ATP levels more than Cannabis sativa extracts and might be the main compound responsible for cell death in cancer cells treated with Cannabis sativa. Effect of Cannabis sativa and cannabidiol on caspase 3/7 activity of SiHa, HeLa, and ME-180 cells As shown in Fig. 8a, b and c, we observed an increase in caspase 3/7 activity in all three cell lines following treatment with 0.3 μM of camptothecin. Similar results were observed in crude-extract-treated cells, with increases of 25 % (SiHa) and 40 % (HeLa) for the butanol extract and 50 % (SiHa) and 100 % (HeLa) for the hexane extract. There was no significant change in ME-180 cells (Figure). When cells were treated with cannabidiol, caspase 3/7 activity increased in all three cell lines. SiHa cells showed an increase from 200000 to 2500000 RLU, while HeLa increased from 800000 to 900000 RLU; ME-180 also increased modestly, from 200000 to 230000 RLU. All increases were significant and in line with the increases in apoptosis shown by Annexin V staining. Representative bar graph of the cervical cancer cell cycle before and after treatment with Cannabis sativa extracts and cannabidiol. Cells were harvested and treated with camptothecin and the IC50 concentrations of Cannabis sativa extracts and cannabidiol. Bar graphs a and d represent SiHa cells, b and e HeLa cells, and c and f ME-180 cells. Data represented as mean ± standard deviation with *** p < 0.001, ** p < 0.01, * p < 0.05, and ns p > 0.05 representing the level of significance in comparison to the untreated Effect of Cannabis sativa extracts and cannabidiol on cell cycle progression We further assessed the effects of Cannabis sativa extracts and cannabidiol on cell cycle progression using flow cytometry. Flow cytometry showed that in the presence of Cannabis sativa crude extracts and camptothecin, SiHa cells exhibited a significant increase (p < 0.001) in the sub-G0 population with a decrease in the G0/G1, S, and G2/M phases. With the butanol extract, the sub-G0 phase increased from 4.2 to 20.1 %, while the G0/G1 (from 64.0 to 48.7 %), S-phase (from 9.3 to 6.5 %), and G2/M (from 18.5 to 17 %) decreased in the SiHa population (Fig. 9a); with the hexane extract, the sub-G0 phase was at 39.1 % compared to 4.2 % in untreated cells, with a decrease in the G0/G1 (from 64.0 to 30.4 %), S (from 9.3 to 6.5 %), and G2/M (from 18.5 to 13.8 %) populations (Fig. 9a). In HeLa cells, the butanol extract reduced G0/G1 to 54.9 % while the S-phase and G2/M significantly increased to 18.4 and 25.7 %, whereas with hexane there was an increase in the G2/M phase (20.3 %) and a decrease in the S-phase (8.1 %). In ME-180 cells there was an insignificant increase in all cell cycle stages. Each cell line responded differently to cannabidiol treatment. Almost 42.2 % of SiHa cells were observed in the sub-G0 phase (p < 0.001), while there was a reduction of cells in the G0/G1 phase, from 57.9 to 42.8 % (Fig. 9d). A similar trend was observed in HeLa cells, but with smaller increases in the sub-G0 (from 5.1 to 17.4 %) and S phases (from 4.8 to 11.2 %) (Fig. 9e).
A similar event was observed during treatment of ME-180 cells. Cannabidiol significantly increased the sub-G0 population in ME-180 cells to 34.3 % (Fig. 9f). From these data, we can conclude that cannabidiol induced cell death without cell cycle arrest. Western blot analysis of the protein expression before and after 24 h treatment with IC50 of Cannabis sativa extracts and cannabidiol. SiHa (a and d), HeLa (b and e), and ME-180 (c and f) cells were treated for a period of 24 h and protein lysates were separated by SDS-PAGE. Untreated protein was used as a control. Antibodies against pro-apoptotic proteins (p53 and Bax) and anti-apoptotic proteins (Bcl-2 and RBBP6), as well as the initiator caspase-9 and effector caspase-3, were included to elucidate apoptosis induction A densitometry analysis of SiHa protein was performed using ImageJ quantification software to measure the relative band intensity. CPT represents camptothecin. Data represented as mean ± standard deviation with *** p < 0.001, ** p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated The blots represent the western blot analysis of SiHa and HeLa cells. The genes analyzed are p53 and RBBP6, together with the caspases. Equal amounts of protein were loaded in each well. Note that darker bands indicate increased expression of the gene A densitometry analysis of HeLa protein was performed using ImageJ quantification software to measure the relative band intensity. CPT represents camptothecin. Data represented as mean ± standard deviation with *** p < 0.001, ** p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated A densitometry analysis of ME-180 protein was performed using ImageJ quantification software to measure the relative band intensity. CPT represents camptothecin. Data represented as mean ± standard deviation with *** p < 0.001, ** p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated Effect of Cannabis sativa extracts and cannabidiol on the expression of upstream and downstream target proteins From the apoptosis experiments conducted, it is clear that the mode of cell death induced by cannabidiol and the extracts of Cannabis sativa was apoptosis. However, we needed to confirm whether the apoptosis induced was p53-dependent or -independent, as it is well known that p53 is mutated in many cancers. Protein expression analyses of RBBP6, Bcl-2, Bax and p53 were performed and the results recorded. With the butanol extract, p53 was significantly increased in SiHa and HeLa cells while remaining unchanged in ME-180. Similar results were observed in hexane-treated cells. In all cell lines, the level of the negative regulator of p53 in cancer development was reduced by all treatments. Following treatment of cervical cancer cells, Bax protein was up-modulated and Bcl-2 was down-modulated. Western blot analysis revealed that cannabidiol effectively caused an increase in the expression of the pro-apoptosis proteins p53 and Bax, while simultaneously decreasing the anti-apoptosis proteins RBBP6 and Bcl-2 in all three cervical cancer cell lines (SiHa, HeLa, and ME-180 cells). Caspases play an effective role in the execution of apoptosis; the initiator caspase-9 and executioner caspase-3 were included in our western blot to check whether they played a role in inducing apoptosis. With all Cannabis sativa extracts, caspase-3 and caspase-9 were upregulated in all cell lines. Similar results were also observed in cannabidiol-treated cells, with upregulation of both caspase-3 and -9.
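As a simple arithmetic illustration (not part of the original analysis; the variable names below are ours, chosen only for demonstration), the percentage reductions quoted in the ATP results above can be reproduced directly from the reported luminescence values by re-applying the RLU and percent-reduction calculations described in the Methods. A minimal Python sketch for the HeLa readings:

# Percent ATP depletion in HeLa cells, recomputed from the RLU values reported above.
untreated_rlu = 627621                               # untreated HeLa cells
treated_rlu = {"butanol": 164208, "hexane": 133693}  # after 24 h treatment

for extract, rlu in treated_rlu.items():
    reduction = 100 * (untreated_rlu - rlu) / untreated_rlu
    # prints roughly 73.8 % and 78.7 %, in line with the ~74 % and ~78 % quoted above
    print(f"{extract}: {reduction:.1f} % reduction in ATP")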
Cervical cancer remains a burden for women of Sub-Saharan Africa. Half a million new cases of cervical cancer and a quarter of a million deaths are reported annually due to lack of effective treatment [12]. Currently, the recommended therapeutic regimens include chemotherapy, radiation therapy, and surgery. However, they present several limitations, including side effects or ineffectiveness [2]. Therefore, it is important to search for novel therapeutic agents that are naturally synthesized and cheaper, but still effective. Medicinal plants have been used for decades for health benefits and to treat several different diseases [22]. In South Africa, over 80 % of the population are still dependent on medicinal plants to maintain mental and physical health [27]. However, some of the medicinal plants used by these individuals are not known to be effective and their safety is still unclear. It is therefore important to scientifically evaluate and validate their efficacy and safety. In the present study, cervical cancer cell lines (SiHa, HeLa, and ME-180) were exposed to different concentrations of Cannabis sativa extracts and of its compound, cannabidiol, with the aim of investigating their anti-proliferative activity. We first determined whether Cannabis sativa extracts and cannabidiol possess anti-proliferative effects using the MTT assay. The MTT assay determines the IC50, the concentration that induces 50 % cell death. Cannabis sativa extracts were able to reduce cell viability and increase cell death in SiHa, HeLa, and ME-180 cells. These results agree with the findings of [23], who reported reduced cell proliferation in colorectal cancer cell lines following treatment with Cannabis sativa. According to [7, 24, 25], Cannabis sativa extracts rich in cannabidiol were able to induce cell death in the prostate cancer cell lines LNCaP, DU145, and PC3 at low doses (20–70 μg/ml). It was suggested that cannabidiol might be responsible for the reported activities. Therefore, in this study, cannabidiol was included as a reference standard in order to determine whether the reported pharmacological activities displayed by Cannabis sativa extracts might have been due to the presence of this compound. Camptothecin was included as a positive control for inhibitory activity. Camptothecin functions as an inhibitor of the topoisomerase I enzyme that regulates winding of DNA strands [19, 20]. This in turn causes DNA strands to break in the S-phase of the cell cycle [20]. A study conducted by [19] showed that camptothecin is cytotoxic against the MCF-7 breast cancer cell line and induces apoptosis as a mode of cell death at 0.25 μM. We also observed a similar cytotoxic pattern, whereby camptothecin induced cell death in HeLa, SiHa, and ME-180 cells, although at a much higher concentration. The xCELLigence system continuously monitors cell growth, adhesion, and morphology in real time in the presence of a toxic substance. Upon treatment of SiHa and HeLa cells with the IC50 of the butanol extract, we noted that there was little to no inhibitory effect on cell growth. The growth curve continued its exponential rise in all cells, including the treated, the untreated and the 0.1 % DMSO control. However, at a similar IC50 of 100 μg/ml, a reduction in cell viability was observed following treatment of HeLa cells with the hexane extract.
On the other hand, ME-180 cells responded after a period of 2 h following treatment with the IC50 of the butanol and hexane extracts. In comparison to the butanol and hexane extracts, cannabidiol reduced the cell index of ME-180 cells after 2 h of treatment, signalling growth inhibition. Differences in the findings could be attributable to the fact that the two methods have different principles and mechanisms of action. The MTT assay is an end-point method based on the reduction of a tetrazolium salt into formazan crystals by the mitochondrial succinate dehydrogenase enzyme. Mitochondrial succinate dehydrogenase is only active in live cells with an intact metabolism [8, 13]. Induction of cell death by Cannabis sativa crude extracts decreases the activity of the enzyme following treatment of HeLa, SiHa, and ME-180 cervical cancer cell lines. On the other hand, the xCELLigence system is a continuous method that relies on the use of E-plates engraved with gold microelectrodes at the bottom of the plate. The xCELLigence system is based on changes in impedance influenced by cell number, size and attachment [13]. Therefore, we concluded that it was possible that dead cells might have remained attached to the bottom of the E-plate after treatment. Cell death can be characterized by a decrease in energy levels as a result of dysfunction of the mitochondria [8]. Therefore, to evaluate the effect of treatment on the energy content of the cells, we conducted a mitochondrial assay, using only the IC50 concentrations indicated by the MTT assay. ATP acts as a determinant of both cell death and cell proliferation [15]. Exposure of SiHa, HeLa, and ME-180 cells to the IC50 of Cannabis sativa extracts caused a reduction in the ATP levels. Treatment of cells with cannabidiol either slightly or severely depleted the ATP levels. According to [16], a reduction of the ATP levels compromises the status of the cell and often leads to cell death either by apoptosis or necrosis, while an increase is indicative of cell proliferation. Therefore, we concluded that the reduction in ATP might have been a result of cell death induction, since the cells' ATP production recovered. Following confirmation that Cannabis sativa and cannabidiol have anti-proliferative activity, we had to verify whether both treatments have the ability to induce cell cycle arrest in all three cell lines. Cell cycle analysis uses a PI stain and flow cytometry to measure the relative amount of DNA present in the cells. In this study, propidium iodide (PI) was used to stain cells. Propidium iodide can only intercalate into the DNA of fixed and permeabilized cells with a compromised plasma membrane or cells in the late stage of apoptosis. Viable cells with an intact plasma membrane cannot take up the dye. The intensity of stained cells correlates with the amount of DNA within the cells. HeLa, SiHa, and ME-180 cervical cancer cells were stained with PI and analysed using flow cytometry. Treatment of SiHa cells with the butanol and hexane extracts led to the accumulation of cells in the cell death phase (sub-G0 phase), without cell cycle arrest. When compared to the S-phase and G2/M phase of untreated cells, exposure of HeLa cells to the Cannabis sativa butanol extract resulted in the accumulation of cells in the S-phase of the cell cycle and slight cell death induction, which, according to [3], signals DNA synthesis and cell proliferation.
A decrease in the S-phase and an increase in the G2/M phase of HeLa cells following treatment with the hexane extract suggests a blockage of mitosis and an induction of cell cycle arrest. Interestingly, treatment of ME-180 cells with both extracts led to an increase of cells in the S-phase population, which favours replication and duplication of DNA. This was not the case following treatment of cells with cannabidiol. Cannabidiol resulted in the accumulation of cells in the cell death phase of the cell cycle: SiHa, HeLa, and ME-180 cells were all committed to the cell death phase. In summary, Cannabis sativa induces cell death with or without cell cycle arrest, while cannabidiol induces cell death without cell cycle arrest. Apoptosis plays a major role in determining cell survival. Annexin V/FITC and PI were used to stain the cells in order to distinguish between viable, apoptotic and necrotic cells. Annexin V/FITC can only bind to phosphatidylserine residues exposed on the surface of the cell membrane, while PI intercalates into the nucleus and binds to fragmented DNA. Viable cells cannot take up either dye, owing to the presence of an intact cell membrane. Since treatment caused the accumulation of cells in the sub-G0 phase, also known as the cell death phase, and a severe depletion of ATP levels by cannabidiol, we further conducted an apoptosis assay. Treatment of all three cell lines with camptothecin and the IC50 of Cannabis sativa and cannabidiol showed that the type of cell death induced was apoptosis. Sharma et al. [25] also showed a similar pattern of cell death, whereby treatment of prostate cancer cell lines with Cannabis sativa resulted in the induction of apoptosis. Apoptosis is characterized by morphological changes and biochemical features which include condensation of chromatin, convolution of nuclear and cellular outlines, nuclear fragmentation, formation of apoptotic blebs within the plasma membrane, cell shrinkage due to the leakage of organelles in the cytoplasm, as well as the presence of green-stained cells at either late or early apoptosis [5, 17, 28]. Annexin V/FITC and DAPI were used to visualize the cells under a fluorescence confocal microscope. According to [18], an uptake of Annexin V/FITC suggests the induction of apoptosis, since it can only bind to externalized PS residues. This also proves that during cell growth analysis, SiHa and HeLa cells were undergoing cell death while still attached to the surface of the flask. Apoptosis is known to occur via two pathways, the death receptor pathway and the mitochondrial pathway [30]. Cannabis sativa isolates including cannabidiol have been implicated in apoptosis induction via the death receptor pathway, by binding to the Fas receptor or through activation of Bax triggered by the synthesis of ceramide in the cells [4]. However, not much has been reported on the induction of apoptosis via activation of p53 by Cannabis sativa. Our focus in this study was also to identify the downstream molecular effects of the extracts. One such important gene is p53, which acts as a transcription factor for a number of target genes [29]. Under normal conditions, p53 levels are maintained through constant degradation by MDM2 and its monomers [29]. RBBP6 is one of the monomers that helps degrade p53, due to the presence of a RING finger domain that promotes the interaction of both proteins [14].
In response to stress stimuli such as DNA damage, hypoxia, UV light, and radiation, p53 becomes activated and causes MDM2 expression to decrease [10]. Mutation of p53, implicated in about 50 % of all human cancers, promotes tumorigenesis. Bax and Bcl-2 form part of the proteins that regulate apoptosis via the mitochondria [21]. Following activation, p53 translocates into the cytosol and triggers the oligomerization of Bcl-2 with BAD, resulting in the inhibition of Bcl-2 activity [17]. This in turn allows Bax protein to be translocated to the mitochondria and participate in the release of cytochrome c through poration of the outer mitochondrial membrane [9, 17]. An imbalance between Bax and Bcl-2 has been linked to the development and progression of tumours through resistance to apoptosis [17]. It is therefore crucial to design drugs that would effectively target these genes involved in the execution of apoptosis via the mitochondrial pathway. Camptothecin, the hexane extract, and cannabidiol effectively up-modulated the expression of p53 in all three cell lines, leading to a decrease in RBBP6 protein expression. Unlike in SiHa and HeLa cells, the butanol extract failed to up-modulate p53 in ME-180 cells. Interestingly, the butanol extract nevertheless reduced the expression of RBBP6 protein in ME-180 cells. The mechanism behind the failure of butanol to up-modulate p53 while down-modulating RBBP6 is unclear. However, we concluded that butanol induces apoptosis independently of p53. We further demonstrated that Cannabis sativa extracts, cannabidiol, and camptothecin were able to down-modulate the expression of Bcl-2 protein and up-modulate Bax expression. Caspases play an effective role in the execution of apoptosis either through the extrinsic or the intrinsic pathway [9]. In this study, we wanted to validate whether caspase-9 and caspase-3 were involved in the initiation and execution of apoptosis. We demonstrated the ability of Cannabis sativa to initiate apoptosis by activating caspase-9. However, execution of apoptosis occurred either with or without the presence of caspase-3, depending on the cell line. Western blot revealed that the Cannabis sativa hexane extract induced apoptosis via the activation of caspase-9 and caspase-3 when compared to untreated cells in all three cell lines. Similar results were obtained during treatment of all three cell lines with camptothecin. This was not the case with butanol. Butanol extracts up-modulated caspase-9 and caspase-3 in SiHa and HeLa cells only; caspase-3 was not up-modulated in ME-180 cells. The caspase 3/7 activity assay revealed the up-modulation of caspase 3/7 following treatment of cervical cancer cells. However, on the basis of the Western blot results, wherein the butanol extract failed to up-modulate caspase-3, we can conclude that caspase-7 was responsible for the reported activity. Cannabidiol effectively up-modulated caspase-9 and caspase-3 in all three cell lines, when compared to the untreated cells and the Cannabis sativa extracts. From these results we can conclude that apoptosis induction was caspase-dependent. The aim of this study was to evaluate the anti-growth effects of Cannabis sativa extracts and to determine the mode of cell death following treatment. The activity of Cannabis sativa extracts was compared to that of cannabidiol, in order to verify whether the reported results were due to the presence of the compound. The study showed that the activity of one of the extracts might have been due to the presence of cannabidiol.
It further demonstrated the ability of Cannabis sativa to induce apoptosis with or without cell cycle arrest and via the mitochondrial pathway. More research needs to be done to elucidate the mechanism linking the active ingredients to the molecular targets involved in the regulation of the cell cycle. Abbreviations: BAD: Bcl-2-associated death promoter; Bak-1: Bcl2-antagonist/killer 1; Bax: Bcl2-associated X protein; Bcl-2: B-cell lymphoma 2; BH: Bcl-2 homology domain; Bid: BH3 interacting-domain; Bik: Bcl-2-interacting killer; FITC: Fluorescein isothiocyanate; HPLC: High Performance Liquid Chromatography; p53: tumour suppressor protein 53; RBBP6: Retinoblastoma binding protein 6 Alexander A, Smith PF, Rosengren RJ. Cannabinoids in the treatment of cancer. Cancer Lett. 2009;285:6–12. Arbyn M, Castellsague X, de Sanjose S, Bruni L, Saraiya M, Bray F, Ferlay J. Worldwide burden of cervical cancer in 2008. Ann Oncol. 2011;22:2675–86. Armania N, Yazan LS, Ismail IS, Foo JB, Tor YS, Ishak N, Ismail N, Ismail M. Dillenia suffruticosa extract inhibits proliferation of human breast cancer cell lines (MCF-7 and MDA-MB-231) via induction of G2/M arrest and apoptosis. Molecules. 2013;18(11):13320–39. Blázquez C, Galve-Roperh I, Guzmán M. De novo-synthesized ceramide signals apoptosis in astrocytes via extracellular signal-regulated kinase. FASEB J. 2000;14:2315–22. Bortner CD, Oldenburg NB, Cidlowski JA. The role of DNA fragmentation in apoptosis. Trends Cell Biol. 1995;5:21–6. Caffarel MM, Andradas C, Perez-Gomez E, Guzman M, Sanchez C. Cannabinoids: A new hope for breast cancer therapy? Cancer Treat Rev. 2012;38:911–8. Chen P, Yu J, Chalmers B, Drisko J, Yang J, Li B, Chen Q. Pharmacologic ascorbate induces cytotoxicity in prostate cancer cells through ATP depletion and the induction of autophagy. Anticancer Drugs Preclinical Rep. 2011;23:437–44. Choene M, Motadi L. Validation of the Antiproliferative Effects of Euphorbia tirucalli extracts in Breast Cancer Cell Lines. Mol Biol. 2016;50(1):115–28. Chipuk TE, Kuwana T, Bouchier-Hayes L, Droin MN, Newmeyer DD, Schuler M, Green DR. Direct activation of Bax by p53 mediates Mitochondrial Membrane Permeabilization and Apoptosis. Sci J. 2004;303:1010. de Bruin EC, Medema JP. Apoptosis and non-apoptosis deaths in cancer development and treatment response. Cancer Treat Rev. 2008;34:737–49. Flemming R, Muntendam T, Steup T, Kayser O. Chemistry and biological activity of tetrahydrocannabinol and its derivatives. Top Heterocycl Chem. 2007;10:1–42. GLOBOCAN 2012 v1.0, Cancer Incidence and Mortality Worldwide: IARC Cancer Base No. 11 [Internet]. Lyon, France: International Agency for Research on Cancer. Available from http://globocan.iarc.fr Accessed 25 July 2014. Gumulec J, Balvan J, Sztalmachova M, Raudeska M, et al. Cisplatin-resistant prostate cancer model: differences in antioxidant system, apoptosis, and cell cycle. Int J Oncol. doi:10.3892/ijo.2013.2223. Happyana N, Agnolet S, Muntendam R, Van Dam A, Schneider B, Kayser O. Analysis of cannabinoids in laser-micro dissected trichomes of medicinal Cannabis sativa using LCMS and cryogenic NMR. Phytochemistry. 2013;87:51–9. Lemasters JJ, Nieminen A, Qian T, Frost LC, et al. The mitochondrial permeability transition in cell death: a common mechanism in necrosis, apoptosis, and autophagy. Biochim Biophys Acta. 1998;1366:177–96. Ligresti A, Moriello AS, Matias I, et al. Anti-tumor activity of plant cannabinoids with the emphasis on the effect of cannabidiol on human breast cancer. J Pharmacol Exp Ther. 2006;318(3):1375–87. Li-Weber M.
Targeting apoptosis pathways in cancer by Chinese medicine. Cancer Lett. 2013;332:304–12. Lozano I. The therapeutic use of Cannabis sativa in Arabic medicine. J Cannabis Ther. 2001;1(1):63-70. Moela P, Choene MS, Motadi LR. Silencing RBBP6 (Retinoblastoma binding protein 6) sensitizes breast cancer cells MCF-7 to camptothecin and staurosporine-induced cell death. Immunology. 2013;219:1–9. Nobili S, Lippi D, Witort E, Donnini M, et al. Natural compounds for cancer treatment and prevention. Pharmacol Res. 2009;59(6):365–78. O'Brien MA, Kirby R. Apoptosis: a review of pro-apoptotic and anti-apoptotic pathways and dysregulation in disease. J Vet Emerg Crit Care. 2008;18(6):572–85. Rao GV, Kumar S, Islam M, Mansour ES. Folk medicines for anticancer therapy-a current status. Cancer Ther. 2008;6:913–22. Romano B, Borrelli F, Pagano E, Cascio MG, Pertwee RG, Izzo AA. Inhibition of colon carcinogenesis by a standardized Cannabis sativa extract with high content of cannabidiol. Phytomedicine. 2014;21(5):631–9. Safaraz S, Adhami VM, Syed DN, Afaq, Mukhtar H. Cannabinoids for cancer treatment: Progress and promise. Cancer Res. 2008;68(2):339–44. Sharma M, Hudson JB, Adomat H, Guns E, Cox ME. In Vitro Anticancer Activity of Plant-Derived Cannabidiol on Prostate Cancer Cell Line. Pharmacol Pharm. 2014;5:806–20. Shrivastava A, Kuzontkoski PM, Groopman JE, Prasad A. Cannabidiol induces programmed cell death by coordinating the cross-talk between apoptosis and autophagy. Mol Cancer Ther. 2011;10(7):1161–72. Street RA, Prinsloo G. "Commercially Important Medicinal Plants of South Africa: A Review.". J Chem. 2013:1–16. doi:10.1155/2013/205048. Thafeni M, Sayed Y, Motadi L. Euphorbia mauritanica and Kedrostis hirtella extracts induces cell death in lung cancer cells. J Mol Biol. 2012;39(12):10785–94. Turner CE, Hadley KW, Holley HJ, Billets S, Mole Jr LM. Constituents of Cannabis sativa L. VIII: Possible biological application of a new method to separate cannabidiol and cannabichromene. J Pharm Sci. 1975;64(5):810–4. Yamaori S, Kushihara M, Yamamoto I, Watanabe K. Characterization of major phytocannabinoids, cannabidiol and cannabinol, as isoform-selective and potent inhibitors of human CYP1 enzymes. Biochem Pharmacol. 2010;79:1691–8. Our gratitude goes to South African MRC for funding assistance. The work was funded by MRC. The datasets supporting the conclusions of this article are included within the article and its additional files. STL was responsible for the experimental design and LRM prepared the manuscript. Both authors read and approved the final manuscript. The authors give consent for the journal to be published. This study was approved by the Human Research Ethics Committee (Medical): M140801. Department of Biochemistry, North-west University (Mafikeng campus), Private Bag X1290, Potchefstroom, 2520, South Africa Sindiswa T. Lukhele & Lesetja R. Motadi Search for Sindiswa T. Lukhele in: Search for Lesetja R. Motadi in: Correspondence to Lesetja R. Motadi. Lukhele, S.T., Motadi, L.R. Cannabidiol rather than Cannabis sativa extracts inhibit cell growth and induce apoptosis in cervical cancer cells. BMC Complement Altern Med 16, 335 (2016) doi:10.1186/s12906-016-1280-0
Wiener–Lévy theorem

The Wiener–Lévy theorem is a theorem in Fourier analysis which states that, under suitable conditions, an analytic function of a function with an absolutely convergent Fourier series again has an absolutely convergent Fourier series. The theorem is named after Norbert Wiener and Paul Lévy.

Norbert Wiener first proved Wiener's 1/f theorem;[1] see Wiener's theorem. It states that if f has an absolutely convergent Fourier series and is never zero, then its inverse 1/f also has an absolutely convergent Fourier series.

Wiener–Lévy theorem

Paul Lévy generalized Wiener's result,[2] showing the following. Let $F(\theta )=\sum \limits _{k=-\infty }^{\infty }c_{k}e^{ik\theta },\quad \theta \in [0,2\pi ]$ be an absolutely convergent Fourier series with $\|F\|=\sum \limits _{k=-\infty }^{\infty }|c_{k}|<\infty .$ Suppose the values of $F(\theta )$ lie on a curve $C$, and $H(t)$ is an analytic (not necessarily single-valued) function of a complex variable which is regular at every point of $C$. Then $H[F(\theta )]$ has an absolutely convergent Fourier series. The proof can be found in Zygmund's classic book Trigonometric Series.[3]

Example

Let $H(t)=\ln(t)$ and let $F(\theta )=\sum \limits _{k=0}^{\infty }p_{k}e^{ik\theta }$, with $\sum \limits _{k=0}^{\infty }p_{k}=1$, be the characteristic function of a discrete probability distribution. Then $F(\theta )$ is an absolutely convergent Fourier series. If $F(\theta )$ has no zeros, then we have $H[F(\theta )]=\ln \left(\sum \limits _{k=0}^{\infty }p_{k}e^{ik\theta }\right)=\sum _{k=0}^{\infty }c_{k}e^{ik\theta },$ where $\|H\|=\sum \limits _{k=0}^{\infty }|c_{k}|<\infty .$ The statistical application of this example can be found in the discrete pseudo compound Poisson distribution[4] and in zero-inflated models. If a discrete r.v. $X$ with $\Pr(X=i)=P_{i}$, $i\in \mathbb {N} $, has a probability generating function of the form $P(z)=\sum \limits _{i=0}^{\infty }P_{i}z^{i}=\exp \left\{\sum \limits _{i=1}^{\infty }\alpha _{i}\lambda (z^{i}-1)\right\},\quad z=e^{i\theta },$ where $\sum \limits _{i=1}^{\infty }\alpha _{i}=1$, $\sum \limits _{i=1}^{\infty }\left|\alpha _{i}\right|<\infty $, $\alpha _{i}\in \mathbb {R} $, and $\lambda >0$, then $X$ is said to have the discrete pseudo compound Poisson distribution, abbreviated DPCP. We denote it as $X\sim DPCP({\alpha _{1}}\lambda ,{\alpha _{2}}\lambda ,\cdots )$.

See also
• Wiener's theorem (disambiguation)

References
1. Wiener, N. (1932). "Tauberian Theorems". Annals of Mathematics. 33 (1): 1–100. doi:10.2307/1968102. JSTOR 1968102.
2. Lévy, P. (1935). "Sur la convergence absolue des séries de Fourier". Compositio Mathematica. 1: 1–14.
3. Zygmund, A. (2002). Trigonometric Series. Cambridge: Cambridge University Press. p. 245.
4. Huiming, Zhang; Li, Bo; G. Jay Kerns (2017). "A characterization of signed discrete infinitely divisible distributions". Studia Scientiarum Mathematicarum Hungarica. 54: 446–470. arXiv:1701.03892. doi:10.1556/012.2017.54.4.1377.
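As a quick numerical illustration of the example above (this is not part of the original article; the geometric distribution and all parameter values below are arbitrary choices for demonstration), one can sample $F$ on the unit circle and approximate the Fourier coefficients of $\ln F$ with a discrete Fourier transform; their absolute sum remains bounded, as the theorem predicts. A minimal Python sketch:

import numpy as np

# Geometric distribution p_k = (1 - a) a^k. Its generating function on the unit circle,
# F(theta) = (1 - a) / (1 - a e^{i theta}), is never zero, so by the Wiener-Levy theorem
# ln F(theta) again has an absolutely convergent Fourier series.
a = 0.4
N = 4096
theta = 2 * np.pi * np.arange(N) / N
F = (1 - a) / (1 - a * np.exp(1j * theta))
c = np.fft.fft(np.log(F)) / N          # approximate Fourier coefficients of ln F
print(np.sum(np.abs(c)))               # stays finite; exact coefficients are ln(1-a) at k = 0 and a^k / k for k >= 1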
\begin{document} \title{A matrix variant of the Erd\H{o}s-Falconer distance problems over finite field} \author{Hieu T. Ngo} \address{Institute of Mathematics, Vietnam Academy of Science and Technology, Hanoi, Vietnam} \email{[email protected]} \subjclass[2010]{11T24, 52C10} \keywords{Erd\H{o}s-Falconer distance problems, discrete Fourier analysis, quadratic Gauss sums} \thanks{} \begin{abstract} We study a matrix analog of the Erd\H{o}s-Falconer distance problems in vector spaces over finite fields. There arises an interesting analysis of certain quadratic matrix Gauss sums. \end{abstract} \maketitle \section{Introduction} \subsection{Problem formulation}\label{subsect:formulation} Let $\mathbb{F}_q$ be a finite field of odd cardinality $q$. On the vector space $\mathbb{F}_q^r$, consider the norm-like function $$ \|\mathbf{x}\|= \sum_{i=1}^r x_i^2 \in \mathbb{F}_q $$ where $\mathbf{x}=(x_1,\dots,x_r)\in\mathbb{F}_q^r$. On $\mathbb{F}_q^r \times \mathbb{F}_q^r$, define the distance-like function $$ d(\mathbf{x},\mathbf{y})=\|\mathbf{x}-\mathbf{y}\| \quad (\mathbf{x},\mathbf{y} \in \mathbb{F}_q^r). $$ The finite field Erd\H{o}s distance problem seeks, for a subset $E\subseteq\mathbb{F}_q^r$, the smallest possible cardinality of the `distance set' $$ \Delta(E)=\{d(\mathbf{x},\mathbf{y}): \mathbf{x},\mathbf{y} \in E\}. $$ Throughout this paper, the cardinality $q$ of $\mathbb{F}_q$ is an asymptotic parameter that can be arbitrarily large. For two functions $f$ and $g$ of odd prime powers $q$ which take values in the complex numbers, we adopt the Vinogradov notation $f\gg g$ (resp.~$f\ll g$) to mean that there exists a positive constant $c>0$ such that $|f(q)|>c|g(q)|$ (resp.~$|f(q)|< c|g(q)|$) for all such $q$; the Bachmann-Landau notations $f=O(g)$ and $g=\Omega(f)$ have the same meaning as $f\ll g$. If both asymptotics $f\ll g$ and $g\ll f$ are satisfied, one writes $f\asymp g$ or, equivalently, $f=\Theta(g)$. In addition, by $f\sim g$ we mean $\frac{f(q)}{g(q)}$ tends to $1$ as $q$ approaches infinity. The following finite field Falconer distance conjecture was proposed by Iosevich and Rudnev \cite[Conjecture 1.1]{IR07}. \begin{conj}[Iosevich-Rudnev]\label{conj:IR} Suppose that $r\geq 2$ and that $E\subseteq \mathbb{F}_q^r$ has cardinality at least $cq^{\frac{r}{2}}$ with $c$ sufficiently large. Then $\#\Delta(E) \gg q$. \end{conj} The lower bound exponent $\frac{r+1}{2}$ was the first result toward this conjecture \cite[Theorem 1.3]{IR07}. \begin{thm}[Iosevich-Rudnev]\label{thm:IR} Suppose that $r\geq 2$ and that $E\subseteq \mathbb{F}_q^r$ has cardinality at least $cq^{\frac{r+1}{2}}$ where $c>0$ is sufficiently large. Then $\Delta(E) = \mathbb{F}_q$. \end{thm} An alternative proof, which bypassed Kloosterman sums, of Theorem \ref{thm:IR} was presented in \cite{AMM17}. The Erd\H{o}s-Falconer distance problems in vector spaces over finite fields exhibit the important philosophy of finite field models in arithmetic combinatorics \cite{Green05,Wolf15}. The work \cite{IR07} has stimulated many extensions and generalizations, albeit still in the context of vector spaces over finite fields (see, for instance, \cite{IK08,Vinh11EF,KS12,Dietmann13}). There is a further direction that is worth inquiring: the generalization from commutativity to noncommutativity. To our best knowledge, no noncommutative version of the Erd\H{o}s-Falconer distance problems has been formulated in the literature. 
In this paper, we propose a matrix analog of the finite field Erd\H{o}s-Falconer distance problems as follows. Let $n$ and $r$ be positive integers. Let ${\rm M}_n={\rm M}_n(\mathbb{F}_q)$ denote the (noncommutative) algebra of $n\times n$ matrices over $\mathbb{F}_q$. On the free ${\rm M}_n$-module ${\rm M}_n^r$, consider the norm-like function $$ \|\mathbf{X}\|= \sum_{i=1}^r X_i^2 \in {\rm M}_n $$ where $\mathbf{X}=(X_1,\dots,X_r)\in{\rm M}_n^r$. On ${\rm M}_n^r \times {\rm M}_n^r$, define the distance-like function $$ d(\mathbf{X},\mathbf{Y})=\|\mathbf{X}-\mathbf{Y}\| \quad (\mathbf{X},\mathbf{Y} \in {\rm M}_n^r). $$ The matrix Erd\H{o}s distance problem seeks, for a subset $E\subseteq{\rm M}_n^r$, the smallest possible cardinality of the `distance set' $$ \Delta(E)=\{d(\mathbf{X},\mathbf{Y}): \mathbf{X},\mathbf{Y} \in E\}. $$ The matrix Falconer distance problem is to determine the smallest exponent $e=e(n,r)$ such that, if $r\geq 2$ and $E\subseteq {\rm M}_n^r$ has cardinality at least $cq^{e}$ with $c$ sufficiently large, then $\#\Delta(E) \gg q^{n^2}$. \subsection{New results}\label{subsect:new-results} Our goal is to give a nontrivial lower bound for the cardinality of a subset such that its distance set is the whole matrix algebra. Our first two results achieve this goal when the matrix algebra has small rank $2$ or $3$. \begin{thm}\label{thm:strong-Falconer-rank2} Let $\mathbb{F}_q$ be a finite field of odd cardinality $q$. Let $E$ be a subset of $\left({\rm M}_2(\mathbb{F}_q)\right)^r$ with $r\geq 4$. For every $\epsilon>0$, there exists a constant $c=c(\epsilon)>0$ such that $$\Delta(E)={\rm M}_2(\mathbb{F}_q)$$ whenever $(\# E) > cq^{3r+3+\epsilon}$. \end{thm} \begin{thm}\label{thm:strong-Falconer-rank3} Let $\mathbb{F}_q$ be a finite field of odd cardinality $q$. Let $E$ be a subset of $\left({\rm M}_3(\mathbb{F}_q)\right)^r$ with $r\geq 3$. For every $\epsilon>0$, there exists a constant $c=c(\epsilon)>0$ such that $$\Delta(E)={\rm M}_3(\mathbb{F}_q)$$ whenever $(\# E) > cq^{7r+4+\epsilon}$. \end{thm} Our next result concerns a matrix algebra of general high rank. \begin{thm}\label{thm:strong-Falconer-general-rank} Let $\mathbb{F}_q$ be a finite field of odd cardinality $q$. Let $E$ be a subset of $\left({\rm M}_n(\mathbb{F}_q)\right)^r$ with $n\geq 3$ and $r\geq 3$. For every $\epsilon>0$, there exists a constant $c=c(\epsilon)>0$ such that $$\Delta(E)={\rm M}_n(\mathbb{F}_q)$$ whenever $(\# E) > cq^{rn^2-(r-2)(n-1)+\epsilon}$. \end{thm} \begin{rem} Theorem \ref{thm:strong-Falconer-rank3} can be derived as a corollary of Theorem \ref{thm:strong-Falconer-general-rank}. However, we will prove Theorem \ref{thm:strong-Falconer-rank3} before Theorem \ref{thm:strong-Falconer-general-rank}. We believe that the proof of the former helps one familiarize with various ingredients, such as invariants of quadratic cycle types, that go into the proof of the latter. \end{rem} \subsection{}\label{subsect:organization} The paper is organized as follows. In Section \ref{sect:prelim}, we collect relevant notions and results concerning discrete Fourier analysis and similarity classes of matrices over a finite field. Section \ref{sect:Gauss-sums} studies a matrix analog of the classical quadratic Gauss sums. We then analyze `matrix spheres' over a finite field, thereby proving Theorems \ref{thm:strong-Falconer-rank2} and \ref{thm:strong-Falconer-rank3} in Section \ref{sect:proofs-ranks-23} and Theorem \ref{thm:strong-Falconer-general-rank} in Section \ref{sect:proofs-general-rank}. 
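For readers who wish to experiment with these definitions, the following brute-force computation (an illustration only, with arbitrarily chosen small parameters $q=3$, $n=2$, $r=2$ and a randomly sampled $E$; it plays no role in the proofs) computes the distance set $\Delta(E)$ of a random subset $E\subseteq \left({\rm M}_2(\mathbb{F}_3)\right)^2$.
\begin{verbatim}
import numpy as np

q, n, r = 3, 2, 2                      # small parameters, for illustration only

def random_point():
    # a point of M_n(F_q)^r: an r-tuple of n x n matrices over F_q
    return tuple(tuple(map(tuple, np.random.randint(0, q, (n, n))))
                 for _ in range(r))

def dist(X, Y):
    # d(X, Y) = sum_i (X_i - Y_i)^2, computed in M_n(F_q)
    D = np.zeros((n, n), dtype=int)
    for Xi, Yi in zip(X, Y):
        A = (np.array(Xi) - np.array(Yi)) % q
        D = (D + A @ A) % q
    return tuple(map(tuple, D))

E = {random_point() for _ in range(500)}
Delta = {dist(X, Y) for X in E for Y in E}
print(len(Delta), "distances attained out of", q ** (n * n), "possible")
\end{verbatim}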
\section{Preliminaries}\label{sect:prelim} \subsection{Discrete Fourier analysis on matrices}\label{subsect:2-Fourier} In this section we recall basic notions of discrete Fourier analysis on matrix spaces over a finite field. Let $\mathbb{F}_q$ be a finite field with $q=p^g$ elements. Let ${\rm M}_n:={\rm M}_n(\mathbb{F}_q)$ be the algebra of $n\times n$ matrices over $\mathbb{F}_q$. Denote by ${\rm Tr}_{\mathbb{F}_q/\mathbb{F}_p}$ the standard trace map. The set ${\rm M}_n$ with matrix addition is a finite abelian group and has a nontrivial additive character $$ \psi(X)=\exp\(\frac{2\pi i}{p} \, {\rm Tr}_{\mathbb{F}_q/\mathbb{F}_p}\({\rm Tr} \, X\)\) \quad (X\in {\rm M}_n) $$ where ${\rm Tr}\,X$ is the matrix trace of $X$. Let $\widehat{\rm M}_n$ denote the Pontryagin dual of ${\rm M}_n$, i.e.~the group of all additive characters of ${\rm M}_n$. The map $$ {\rm M}_n \to \widehat{\rm M}_n, A\mapsto\psi_A $$ where $\psi_A(X)=\psi(AX)\, (X\in{\rm M}_n)$, is a group isomorphism and gives a parametrization of $\widehat{\rm M}_n$. One has the orthogonal relation $$ \frac{1}{q^{n^2}}\sum_{A\in{\rm M}_n} \psi(AX) = \delta_0(X) \quad (X\in{\rm M}_n) $$ where $\delta_0(X)$ equals $1$ if $X=0$ and equals $0$ otherwise. Let $r$ be a positive integer. The free ${\rm M}_n$-module ${\rm M}_n^r$ is equipped with the dot product $$ \mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^r A_iB_i $$ where $\mathbf{A}=(A_1,\dots,A_r) \in {\rm M}_n^r$ and $\mathbf{B}=(B_1,\dots,B_r)\in {\rm M}_n^r$. The Fourier transform of a complex-valued function $f$ on ${\rm M}_n^r$ is given by $$ \widehat{f}(\mathbf{M}) = \frac{1}{q^{rn^2}} \sum_{\mathbf{A}\in{\rm M}_n^r} f(\mathbf{A}) \overline{\psi}(\mathbf{M}\cdot \mathbf{A}) \quad (\mathbf{M}\in{\rm M}_n^r). $$ The Fourier inversion formula expands $f$ in terms of $\widehat{f}$: $$ f(\mathbf{A})= \sum_{\mathbf{M}\in{\rm M}_n^r} \widehat{f}(\mathbf{M}) \psi(\mathbf{M}\cdot \mathbf{A}). $$ The Plancherel formula relates the $L^2$-norms of $f$ and $\widehat{f}$: $$ \frac{1}{q^{rn^2}} \sum_{\mathbf{A}\in{\rm M}_n^r} |f(\mathbf{A})|^2 = \sum_{\mathbf{M}\in{\rm M}_n^r} |\widehat{f}(\mathbf{M})|^2. $$ \subsection{Similarity of matrices} \label{subsect:2-sim} In this section, let $k$ be an arbitrary field and consider the conjugate action of ${\rm GL}_n(k)$ on ${\rm M}_n(k)$. Two matrices in the same orbit are said to be \emph{similar}; each orbit is called a \emph{similarity class}. \subsubsection*{Rational canonical forms} \label{sssect:2-RCF} The theory of rational canonical forms asserts that a square matrix over $k$ is classified up to conjugation by a sequence of monic polynomials $$f_1(T),\dots,f_s(T) \in k[T]$$ satisfying $f_{i+1}(T)|f_i(T)$ for all $1\leq i<s$. More precisely, for a monic polynomial $f(T)\in k[T]$ define the companion matrix $C_f$ as follows. If $f(T)=T+c$, set $C_f=(-c)$. If $f(T)=T^r+c_{r-1}T^{r-1}+\cdots+c_1T+c_0$, let $C_f$ be the $r\times r$ matrix $$ C_f=\begin{pmatrix} 0 & 0 & \cdots & 0 & -c_0 \\ 1 & 0 & \cdots & 0 & -c_1 \\ 0 & 1 & \cdots & 0 & -c_2 \\ \cdots \\ 0 & 0 & \cdots & 1 & -c_{r-1} \\ \end{pmatrix}; $$ this is the matrix corresponding to the `multiplication by $T$' linear transformation on the vector space $\frac{k[T]}{(f(T))}$. Any matrix $A\in {\rm M}_n(k)$ is conjugate (by a matrix in ${\rm GL}_n(k)$) to a block diagonal matrix $$ R_A = \begin{pmatrix} C_{f_1} & 0 & \cdots & 0 \\ 0 & C_{f_2} & \cdots & 0 \\ \cdots \\ 0 & 0 & \cdots & C_{f_{s}} \end{pmatrix} $$ where $f_{i+1}(T)|f_i(T)$ for all $1\leq i<s$. 
The monic polynomials $\{f_i(T): 1\leq i\leq s\}$ are the \emph{invariant factors} of $A$; the matrix $R_A$ is the \emph{rational canonical form} of $A$. Two matrices in ${\rm M}_n(k)$ are similar if and only if they have the same invariant factors. We refer the reader to \cite[Section 9.2]{Rotman10} for a thorough and clear exposition of the theory of rational canonical forms. \subsubsection*{Cycle types} \label{sssect:2-cycle-type} Similarity classes of matrices over a field can also be described in terms of cycle types as follows. Let ${\rm Irr}(k[T])$ denote the set of monic irreducible polynomials in $k[T]$. For any matrix $A\in {\rm M}_n(k)$, one defines a $k[T]$-module structure on $k^n$ by $T\cdot v = Av$ for $v\in k^n$; this is called the \emph{operator module} associated to $A$ and denoted by $M_A$. The operator modules $M_A$ and $M_B$ are isomorphic if and only if the matrices $A$ and $B$ are similar. For $\pi \in {\rm Irr}(k[T])$, the $\pi$-primary part of $M_A$ is $$ M_{A,\pi}=\{v\in M_A: \pi^rv=0 \,\, \text{for some} \,\, r\in\mathbb{N}\}. $$ There is a decomposition $$ M_{A,\pi} \cong \bigoplus_i \frac{k[T]}{(\pi^{\lambda_{\pi,i}})} $$ where the $\lambda_{\pi,i}$ are positive integers. Let $\Lambda$ denote the set of all partitions. The \emph{cycle type} of $A$ is the map $$ \nu_A: {\rm Irr}(k[T]) \to \Lambda $$ given by $\nu_A(\pi)=\lambda_{\pi}=(\lambda_{\pi,i})_i$. Put $|\lambda_{\pi}|:=\sum_i \lambda_{\pi,i}$. One can express the cycle type of $A$ as a formal product $$ \nu_A = \prod_{\pi} \pi^{\lambda_{\pi}}. $$ Define the \emph{degree of the formal product} $\prod_{\pi} \pi^{\lambda_{\pi}}$ to be $\sum_{\pi} \deg(\pi)|\lambda_{\pi}|$; note that the degree of the cycle type $\nu_A$ is nothing but $n$. The cycle type $A\mapsto \nu_A$ gives a bijection between similarity classes in ${\rm M}_n(k)$ and the set of all maps from ${\rm Irr}(k[T])$ to $\Lambda$ which have degree $n$. \subsubsection*{Class types and centralizers} \label{sssect:2-class-type} Over a finite field $k=\mathbb{F}_q$, Green \cite{Green55} introduced the notion of (similarity) class type. If $A\in{\rm M}_n(\mathbb{F}_q)$ has cycle type $\nu_A = \pi_1^{\lambda_1} \cdots \pi_s^{\lambda_s}$ and for each $i$ the polynomial $\pi_i$ has degree $d_i$, the \emph{(similarity) class type} of $A$ is the formal product $$ \tau_A = d_1^{\lambda_1} \cdots d_s^{\lambda_s}. $$ Green \cite[Lemma 2.1]{Green55} discovered that the class type of a matrix determines its centralizer up to isomorphism. Furthermore, Britnell and Wildon \cite[Theorem 2.7]{BW11} showed that two matrices have the same class type if and only if their centralizers are conjugate (by an element of ${\rm GL}_n(\mathbb{F}_q)$). Both of these results made essential use of the finite field assumption; see \cite{BW14} for a generalization to an arbitrary field. From the class type of a matrix, one can compute the size of its centralizer, thanks to the classical works of Kung \cite{Kung81}, Stong \cite{Stong88}, and Fulman \cite{Fulman99}. Let $A\in{\rm M}_n(\mathbb{F}_q)$ have class type $$ \tau_A = d_1^{\lambda_1} \cdots d_s^{\lambda_s}. $$ Dropping the subscript, we write $d^{\lambda}$ for an arbitrary component $d_i^{\lambda_i}$ of $\tau_A$. For a partition $\lambda$ of a positive integer into parts $\lambda_{1}\geq\lambda_2\geq \cdots$, set \begin{align*} |\lambda| &= \sum_i \lambda_i, \\ |\lambda|_2 &= \sum_i \lambda_i^2. \end{align*} Let $m_j(\lambda)$ be the number of parts in the partition $\lambda$ which are equal to $j$. 
The conjugate partition $\lambda'$ of $\lambda$ has its $j^{\rm th}$ part given by $$ \lambda'_j = m_j(\lambda) + m_{j+1}(\lambda) + \cdots . $$ Write $$ (1/x)_r=\prod_{j=1}^r\(1-1/x^j\). $$ Define (cf.~\cite[Section 2]{Fulman99}, \cite[Theorem 9.5]{PSS15}) \begin{align} c\(d^\lambda\) &= q^{d |\lambda'|_2} \prod_j (1/q^d)_{m_j(\lambda)}. \label{eq:cent-size-1} \\ &= q^{d \sum_{j}(\lambda'_j)^2} \prod_j (1/q^d)_{m_j(\lambda)}. \nonumber \end{align} Then the centralizer in ${\rm GL}_n(\mathbb{F}_q)$ of $A$ has size \begin{equation}\label{eq:cent-size-2} c(\tau_A)= c\(d_1^{\lambda_1} \cdots d_s^{\lambda_s}\) = \prod_{i=1}^s c\(d_i^{\lambda_i}\). \end{equation} \section{Quadratic matrix Gauss sums}\label{sect:Gauss-sums} Let $p$ be an odd prime and $\mathbb{F}_q$ be a finite field with $q=p^g$ elements. Write ${\rm M}_n={\rm M}_n(\mathbb{F}_q)$ for the algebra of $n\times n$ matrices over $\mathbb{F}_q$. For $A,B\in{\rm M}_n$, define the quadratic matrix Gauss sum \begin{equation}\label{eq:quadratic-matrix-Gauss-sum} G(A,B) = \sum_{X\in{\rm M}_n} \psi(AX^2+BX). \end{equation} \subsection{The trace quadratic form}\label{subsect:31} For $A\in{\rm M}_n$, the trace form ${\rm Tr}(AX^2)$ defines a quadratic space $(V_A,Q_A)$ of dimension $n^2$ over $\mathbb{F}_q$. In a series of papers \cite{Kuroda97,Kuroda99,Kuroda04}, Kuroda studied this `trace quadratic form' and applied it to compute $G(A,B)$ in the special case $B=0$. Suppose that $A$ is conjugate by an element of ${\rm GL}_n\(\overline{\mathbb{F}}_q\)$ to a Jordan canonical form $$ \begin{pmatrix} A_{1} & 0 & \cdots & 0 \\ 0 & A_{2} & \cdots & 0 \\ \cdots \\ 0 & 0 & \cdots & A_{r} \end{pmatrix} $$ where $A_i$ is the $n_i \times n_i$ Jordan block with eigenvalue $\alpha_i$. The radical of $V_A$, denoted ${\rm rad}\,A$, has dimension \cite[Lemma 3]{Kuroda04} \begin{equation}\label{eq:radical-invariant} \rho_A = \sum_{\substack{1\leq i,j \leq r \\ \alpha_i+\alpha_j=0}} \min\(n_i,n_j\). \end{equation} Let us call $\rho_A$ the \emph{radical invariant} of the matrix $A$. \begin{prop}\label{prop:Kuroda}\cite[Lemma 4]{Kuroda04} Suppose that the characteristic polynomial of $A$ has a factorization into irreducible polynomials $$ P_A(T) = T^{c} \cdot \prod_{i=1}^{r} \pi_i\(T^2\)^{a_i} \cdot \prod_{j=1}^{s}\zeta_j(T)^{b_j} $$ where for each $1\leq j\leq s$ the polynomial $\zeta_j(T)$ is not a polynomial in $T^2$. Then one has an orthogonal decomposition $$ V_A \cong W_{\zeta_1}^{b_1} \perp \cdots \perp W_{\zeta_s}^{b_s} \perp H \perp {\rm rad} \,A $$ where $W_{\zeta_j}$ is a regular subspace of $V_A$ depending on $\zeta_j(T)$ with dimension $$ \dim \, W_{\zeta_j} = \deg\, \zeta_j(T) \quad (1\leq j \leq s), $$ and $H$ is a hyperbolic space. \end{prop} \subsection{Estimating quadratic matrix Gauss sums}\label{subsect:32} Recall the following beautiful exponential sum estimate of Deligne \cite[Théorèm 8.4]{Deligne74} (see also \cite{Katz99}). \begin{thm}[Deligne]\label{thm:Deligne} Let $f(x_1,\dots,x_n)\in \mathbb{F}_q[x_1,\dots,x_n]$ be a polynomial of degree $d$ written as $f=F_d+F_{d-1}+\cdots+F_0$ with $F_i$ homogeneous of degree $i$. Suppose that the degree $d$ is relatively prime to $q$, and that the locus $F_d=0$ is a nonsingular hypersurface in $\mathbb{P}^{n-1}$. Then $$ \left| \sum_{x \in \mathbb{F}_q^n} \psi(f(x)) \right| \leq (d-1)^n q^{\frac{n}{2}} . $$ \end{thm} We are in a position to deduce an upper bound for $G(A,B)$. \begin{prop}\label{prop:matrix-Gauss} Let $\rho_A$ denote the dimension of the radical ${\rm rad}\,A$ of $(V_A,Q_A)$. 
One has $$ G(A,B) \ll q^{\frac{n^2+\rho_A}{2}}. $$ \end{prop} \begin{proof} By Proposition \ref{prop:Kuroda}, the quadratic space $V_A$ has an orthogonal decomposition \begin{equation}\label{eq-pf:quad-decomp} V_A \cong W_A \perp H \perp {\rm rad}\,A \end{equation} where $W_A$ is regular of dimension $w_A$, $H$ is a hyperbolic space of dimension $2h_A$, and the radical ${\rm rad}\,A$ has dimension $\rho_A$. Hence $$ n^2=w_A+2h_A+\rho_A. $$ It follows from the decomposition \eqref{eq-pf:quad-decomp} that, after a change of variables, there exist: \begin{itemize} \item a nonsingular homogeneous quadratic polynomial $$ f_2(\mathbf{x}) \in \mathbb{F}_q[\mathbf{x}] $$ in $w_A$ variables $\mathbf{x}=(x_1,\dots,x_{w_A})$, \item a linear form $f_1(\mathbf{x}) \in \mathbb{F}_q[\mathbf{x}]$, \item the quadratic polynomial $$ g_2(\mathbf{y},\mathbf{z}) = \sum_{i=1}^{h_A} y_iz_i $$ where $\mathbf{y}=(y_1,\dots,y_{h_A})$ and $\mathbf{z}=(z_1,\dots,z_{h_A})$, \item a linear form $g_1(\mathbf{y},\mathbf{z}) \in \mathbb{F}_q[\mathbf{y},\mathbf{z}]$, \item a linear form $r_1(\mathbf{u})\in \mathbb{F}_q[\mathbf{u}]$ in $\rho_A$ variables $\mathbf{u}=(u_1,\dots,u_{\rho_A})$, \end{itemize} such that $$ G(A,B) = \( \sum_{\mathbf{x}\in \mathbb{F}_q^{w_A}} \psi\( f_2(\mathbf{x}) + f_1(\mathbf{x}) \) \) \( \sum_{\mathbf{y},\mathbf{z}\in \mathbb{F}_q^{h_A}} \psi\( g_2(\mathbf{y},\mathbf{z}) + g_1(\mathbf{y},\mathbf{z}) \) \) \( \sum_{\mathbf{u}\in \mathbb{F}_q^{\rho_A}} \psi\( r_1(\mathbf{u}) \) \). $$ By virtue of Theorem \ref{thm:Deligne}, the first factor is $O(q^{\frac{w_A}{2}})$. It is plain that the second factor is $O(q^{h_A})$, and that the third factor is $O(q^{\rho_A})$. Thus $$ G(A,B) \ll q^{\frac{w_A}{2}+h_A+\rho_A} = q^{\frac{n^2+\rho_A}{2}}. $$ The proposition is proved. \end{proof} \begin{rem} Proposition \ref{prop:matrix-Gauss} reduces the estimation of $G(A,B)$ to that of the radical invariant $\rho_A$. By \cite[Theorem 2]{Kuroda04}, we have $|G(A,0)|=q^{\frac{n^2+\rho_A}{2}}$ . An interesting problem is to evaluate $G(A,B)$ for all $A,B\in{\rm M}_n$. \end{rem} \subsection{Quadratic cycle types and quadratic class types}\label{subsect:3-quad-types} In this section, we introduce the notions of quadratic cycle type and quadratic class type, aiming to extract information about radical invariants most succinctly. Let ${\rm M}_n:={\rm M}_n(\mathbb{F}_q)$ and $I:={\rm Irr}(\mathbb{F}_q[T])$. Recall that the cycle type, or more generally the class type, of a matrix $A\in{\rm M}_n$ encodes the size of its similarity class. The cycle type of $A$ can be expressed more refinedly as a four-part formal product \begin{equation}\label{eq:quad-cyc-type} \nu_A = T^\alpha \cdot \prod_\zeta \left( \zeta(T)^{\beta_\zeta} \zeta(-T)^{\gamma_\zeta} \right) \cdot \prod_\pi \pi(T^2)^{\lambda_\pi} \cdot \prod_\eta \eta(T)^{\kappa_\eta} . \end{equation} Here $\zeta$ varies in $I$ such that each $\zeta$ is not a polynomial in $T^2$, $\pi$ varies in $I$, and $\eta$ varies in $I$ such that any two (not necessarily distinct) irreducibles $\eta_1$ and $\eta_2$ satisfy $\eta_1(-T) \neq \eta_2(T)$. Correspondingly, the exponents $\alpha,\beta_\zeta,\gamma_\zeta,\lambda_\pi,\kappa_\eta$ are partitions. We shall refer to \eqref{eq:quad-cyc-type} as the \emph{quadratic cycle type} of the matrix $A$. 
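The following brute-force Python check illustrates the equality $|G(A,0)|=q^{\frac{n^2+\rho_A}{2}}$ quoted in the remark above; it is restricted to $2\times 2$ matrices over the prime field $\mathbb{F}_5$ (so that the trace map ${\rm Tr}_{\mathbb{F}_q/\mathbb{F}_p}$ is the identity), and the matrices $A$ below are hand-picked examples whose radical invariants are computed from \eqref{eq:radical-invariant}.
\begin{verbatim}
import itertools, cmath
import numpy as np

p, n = 5, 2                                   # prime field F_p and matrix size
psi = lambda t: cmath.exp(2j * cmath.pi * (t % p) / p)   # additive character of F_p

def G(A, B):
    # G(A,B) = sum over X in M_n(F_p) of psi(Tr(A X^2 + B X))
    total = 0.0
    for entries in itertools.product(range(p), repeat=n * n):
        X = np.array(entries).reshape(n, n)
        total += psi(int(np.trace(A @ X @ X + B @ X)))
    return total

Z = np.zeros((n, n), dtype=int)
tests = [Z,                                   # zero matrix: rho = n^2
         np.eye(n, dtype=int),                # eigenvalues 1,1: rho = 0
         np.diag([1, p - 1]),                 # eigenvalues 1,-1: rho = 2
         np.array([[0, 1], [0, 0]])]          # nilpotent Jordan block: rho = 2
for A in tests:
    print(round(np.log(abs(G(A, Z))) / np.log(p), 3))    # equals (n^2 + rho_A)/2
\end{verbatim}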
Forgetting the irreducible polynomials and retaining only their degrees, one arrives at the formal product \begin{equation}\label{eq:quad-class-type} \tau_A = 0^\alpha \cdot \prod_\zeta \left( \deg(\zeta)^{\beta_\zeta} \deg(\zeta)^{\gamma_\zeta} \right)_{(+1)} \cdot \prod_\pi \left( \deg(\pi)^{\lambda_\pi} \right)_{(2)} \cdot \prod_\eta \left( \deg(\eta)^{\kappa_\eta} \right)_{(-1)}, \end{equation} called the \emph{quadratic class type} of the matrix $A$. \subsection{Incidence functions}\label{subsect:incidence-functions} We follow the argument of Iosevich and Rudnev \cite{IR07}. Let ${\rm M}_n:={\rm M}_n(\mathbb{F}_q)$ and consider $E\subseteq{\rm M}_n^r$. Define the \emph{incidence function} $$ \nu(T)=\frac{1}{(\# E)^2} \#\{(\mathbf{X},\mathbf{Y}) \in E \times E: d(\mathbf{X},\mathbf{Y})=T\} \quad (T\in{\rm M}_n). $$ For $T\in{\rm M}_n$, the \emph{`matrix sphere' of `radius' $T$} is $$ \sigma_T=\{\mathbf{X}\in{\rm M}_n^r:\|\mathbf{X}\|=T\} . $$ On applying the Fourier inversion formula as in \cite[Section 2.2]{IR07}, we deduce that \begin{equation}\label{eq:IR:incidence} \nu(T) = \frac{(\# \sigma_T)}{q^{rn^2}} + \frac{q^{2rn^2}}{(\# E)^2} \sum_{\mathbf{M}\neq 0} \left| \widehat{E}(\mathbf{M}) \right|^2 \widehat{\sigma}_T(\mathbf{M}). \end{equation} \begin{lem}\label{lem:matrix-sphere} If $T\in{\rm M}_n$ and $\mathbf{M}\in {\rm M}_n^r\setminus\{0\}$, then \begin{equation} \# \sigma_T = q^{n^2(r-1)} + \frac{1}{q^{n^2}} \sum_{S\in{\rm M}_n\setminus\{0\}} \psi\(-ST\) G(S,0)^r \label{eq:IR:sphere-size} \end{equation} and \begin{equation} \widehat{\sigma}_T(\mathbf{M}) = \frac{1}{q^{(r+1)n^2}} \sum_{S\in{\rm M}_n\setminus\{0\}} \psi(-ST) \prod_{i=1}^r G(S,-M_i). \label{eq:IR:FT-sphere-nonzero-phase} \end{equation} \end{lem} \begin{proof} We compute the size of a matrix sphere: \begin{align*} \# \sigma_T &= \frac{1}{q^{n^2}} \sum_{\mathbf{X}\in{\rm M}_n^r} \sum_{S\in{\rm M}_n} \psi\(S\(\|\mathbf{X}\|-T\)\) \\ &= q^{n^2(r-1)} + \frac{1}{q^{n^2}} \sum_{\mathbf{X}=(X_1,\dots,X_r)\in{\rm M}_n^r} \sum_{S\in{\rm M}_n\setminus\{0\}} \psi\(S\(\|\mathbf{X}\|-T\)\) \\ &= q^{n^2(r-1)} + \frac{1}{q^{n^2}} \sum_{S\in{\rm M}_n\setminus\{0\}} \psi\(-ST\) G(S,0)^r. \end{align*} We compute the Fourier transform of a matrix sphere at $\mathbf{M}=(M_1,\dots,M_r)\in{\rm M}_n^r$: \begin{align*} \widehat{\sigma}_T(\mathbf{M}) &= \frac{1}{q^{rn^2}} \sum_{\mathbf{X}\in{\rm M}_n^r} \sigma_T(\mathbf{X})\overline{\psi}(\mathbf{M}\cdot\mathbf{X}) \\ &= \frac{1}{q^{(r+1)n^2}} \sum_{\mathbf{X}=(X_1,\dots,X_r)\in{\rm M}_n^r} \sum_{S\in{\rm M}_n} \overline{\psi}(\mathbf{M}\cdot\mathbf{X}) \psi\(S\(\|\mathbf{X}\|-T\)\) \\ &= \frac{1}{q^{(r+1)n^2}} \sum_{S\in{\rm M}_n} \psi(-ST) \prod_{i=1}^r G(S,-M_i). \end{align*} If $\mathbf{M}\neq {\bf 0}$, we further have \begin{equation*} \widehat{\sigma}_T(\mathbf{M}) = \frac{1}{q^{(r+1)n^2}} \sum_{S\in{\rm M}_n\setminus\{0\}} \psi(-ST) \prod_{i=1}^r G(S,-M_i). \end{equation*} The lemma is proved. \end{proof} \section{Rank 2 and rank 3 matrices}\label{sect:proofs-ranks-23} \subsection{Matrix spheres in \texorpdfstring{${\rm M}_2(\mathbb{F}_q)$}{TEXT}}\label{subsect:4-rank2-matrices} Write ${\rm M}_2 = {\rm M}_2(\mathbb{F}_q)$. \subsubsection*{Quadratic types} Set $I:={\rm Irr}(\mathbb{F}_q[T])$.
A matrix in ${\rm M}_2$ has precisely one of the following quadratic cycle types: \begin{enumerate} \item[${\rm (i)}$] $T^{(1,1)}$ \item[${\rm (ii)}$] $T^{(2)}$ \item[${\rm (iii)}$] $T^{(1)}\cdot (T-\lambda)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times)$ \item[${\rm (iv)}$] $(T+\lambda)^{(1)} (T-\lambda)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times)$ \item[${\rm (v)}$] $\pi(T^2)^{(1)} \quad (\pi\in I, \deg(\pi)=1)$ \item[${\rm (vi)}$] $(T-\lambda)^{(1)} (T-\lambda')^{(1)} \quad (\lambda,\lambda'\in \mathbb{F}_q^\times, \lambda'\neq -\lambda)$ \item[${\rm (vii)}$] $\eta(T)^{(1)} \quad (\eta\in I, \deg(\eta)=2,\eta(T)\neq\eta(-T))$. \end{enumerate} These quadratic cycle types give rise to the following quadratic class types: \begin{enumerate} \item[${\rm (i)}$] $0^{(1,1)}$ \item[${\rm (ii)}$] $0^{(2)}$ \item[${\rm (iii)}$] $0^{(1)}\cdot \left( 1^{(1)} \right)_{(-1)}$ \item[${\rm (iv)}$] $\left( 1^{(1)} 1^{(1)} \right)_{(+1)}$ \item[${\rm (v)}$] $\left( 1^{(1)} \right)_{(2)}$ \item[${\rm (vi)}$] $\left( 1^{(1)} 1^{(1)} \right)_{(-1)}$ \item[${\rm (vii)}$] $\left( 2^{(1)} \right)_{(-1)}$. \end{enumerate} \begin{table}[!h] \begin{tabular}{|c|l|l|l|l|l|} \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Quadratic \\ class type\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Radical \\ invariant\end{tabular} & \begin{tabular}[c]{@{}l@{}}Centralizer \\ size\end{tabular} & \begin{tabular}[c]{@{}l@{}}Similarity class \\ size\end{tabular} & \begin{tabular}[c]{@{}l@{}}No.~quadratic \\ cycle types\end{tabular} & \begin{tabular}[c]{@{}l@{}}Incidence\\ contribution\end{tabular} \\ \hline ${\rm (i)}$ & $4$ & $\sim q^4$ & $\sim 1$ & $1$ & $q^{4r}$ \\ \hline ${\rm (ii)}$ & $2$ & $\sim q^2$ & $\sim q^2$ & $\asymp 1$ & $O(q^{3r+2})$ \\ \hline ${\rm (iii)}$ & $1$ & $\sim q^2$ & $\sim q^2$ & $\asymp q$ & $O(q^{\frac{5r}{2}+3})$ \\ \hline ${\rm (iv)}$ & $2$ & $\sim q^2$ & $\sim q^2$ & $\asymp q$ & $O(q^{3r+3})$ \\ \hline ${\rm (v)}$ & $2$ & $\sim q^2$ & $\sim q^2$ & $\asymp q$ & $O(q^{3r+3})$ \\ \hline ${\rm (vi)}$ & $0$ & $\sim q^2$ & $\sim q^2$ & $\asymp q^2$ & $O(q^{2r+4})$ \\ \hline ${\rm (vii)}$ & $0$ & $\sim q^2$ & $\sim q^2$ & $\asymp q^2$ & $O(q^{2r+4})$ \\ \hline \end{tabular} \caption{\label{rank-2-table} Quadratic class types in ${\rm M}_2(\mathbb{F}_q)$ and their invariants.} \end{table} \subsubsection*{Quadratic Gauss sums and matrix spheres of $2\times 2$ matrices} Table \ref{rank-2-table} computes several invariants of quadratic class types in ${\rm M}_2(\mathbb{F}_q)$: the radical invariant, the centralizer size, the similarity class size, and the number of quadratic cycle types which have a particular quadratic class type. In the last column of Table \ref{rank-2-table}, we estimate the contribution of each quadratic class type to the sums over $S$ in \eqref{eq:IR:sphere-size} and \eqref{eq:IR:FT-sphere-nonzero-phase}. \begin{prop}\label{prop:matrix-sphere-rank2} Suppose that $r\geq 4$. If $T\in {\rm M}_2$ and $\mathbf{M}\in {\rm M}_2^r\setminus\{{\bf 0}\}$, then \begin{equation}\label{eq:IR:sphere-size-rank2} \# \sigma_T \sim q^{4r-4} \end{equation} and \begin{equation} \widehat{\sigma}_T(\mathbf{M}) = O(q^{-r-1}). \label{eq:IR:FT-sphere-nonzero-phase-rank2} \end{equation} \end{prop} \begin{proof} We first compute the size of the matrix sphere $\sigma_T$. By \eqref{eq:IR:sphere-size}, we have $$ \# \sigma_T = q^{4(r-1)} + \frac{1}{q^{4}} \sum_{S\in {\rm M}_2\setminus\{0\}} \psi\(-ST\) G(S,0)^r .
$$ By the column `Incidence contribution' of Table \ref{rank-2-table}, the sum over $S$ is $O(q^{3r+3})$. Since $r\geq 4$, we conclude that $\# \sigma_T \sim q^{4r-4}$. We then compute Fourier transforms of the matrix sphere $\sigma_T$. For $\mathbf{M}\in {\rm M}_2^r\setminus\{{\bf 0}\}$, by \eqref{eq:IR:FT-sphere-nonzero-phase} we have $$ \widehat{\sigma}_T(\mathbf{M}) = \frac{1}{q^{4(r+1)}} \sum_{S\in{\rm M}_2\setminus\{0\}} \psi(-ST) \prod_{i=1}^r G(S,-M_i). $$ By the column `Incidence contribution' of Table \ref{rank-2-table}, the sum over $S$ is $O(q^{3r+3})$. Therefore $\widehat{\sigma}_T(\mathbf{M}) = O(q^{-r-1})$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:strong-Falconer-rank2}] It follows from \eqref{eq:IR:incidence} that $$ \nu(T) = \frac{(\# \sigma_T)}{q^{4r}} + \frac{q^{8r}}{(\# E)^2} \sum_{\mathbf{M}\neq {\bf 0}} \left| \widehat{E}(\mathbf{M}) \right|^2 \widehat{\sigma}_T(\mathbf{M}). $$ By the Plancherel formula we have $$ \sum_{\mathbf{M}\neq {\bf 0}} \left| \widehat{E}(\mathbf{M}) \right|^2 \leq \sum_{\mathbf{M}\in {\rm M}_2^r} \left| \widehat{E}(\mathbf{M}) \right|^2 = \frac{(\# E)}{q^{4r}}. $$ Therefore, by \eqref{eq:IR:FT-sphere-nonzero-phase-rank2}, $$ \nu(T) = \frac{(\# \sigma_T)}{q^{4r}} + O\left( \frac{q^{3r-1}}{(\# E)} \right) . $$ In view of \eqref{eq:IR:sphere-size-rank2}, $\nu(T)>0$ provided that $$ (\# E) \gg_\epsilon q^{3r+3+\epsilon}. $$ \end{proof} \subsection{Matrix spheres in \texorpdfstring{${\rm M}_3(\mathbb{F}_q)$}{TEXT}}\label{subsect:4-rank3-matrices} Write ${\rm M}_3 = {\rm M}_3(\mathbb{F}_q)$. \subsubsection*{Quadratic types} Set $I:={\rm Irr}(\mathbb{F}_q[T])$. A matrix in ${\rm M}_3$ has precisely one of the following quadratic cycle types: \begin{enumerate} \item[${\rm (i)}$] $T^{(1,1,1)}$ \item[${\rm (ii)}$] $T^{(2,1)}$ \item[${\rm (iii)}$] $T^{(3)}$ \item[${\rm (iv)}$] $T^{(1,1)}\cdot (T-\lambda)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times)$ \item[${\rm (v)}$] $T^{(2)}\cdot (T-\lambda)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times)$ \item[${\rm (vi)}$] $T^{(1)} \cdot (T+\lambda)^{(1)} (T-\lambda)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times)$ \item[${\rm (vii)}$] $T^{(1)} \cdot \pi(T^2)^{(1)} \quad (\pi\in I, \deg(\pi)=1)$ \item[${\rm (viii)}$] $T^{(1)} \cdot (T-\lambda)^{(1)} (T-\lambda')^{(1)} \quad (\lambda,\lambda'\in \mathbb{F}_q^\times, \lambda'\neq -\lambda)$ \item[${\rm (ix)}$] $T^{(1)} \cdot \eta(T)^{(1)} \quad (\eta\in I, \deg(\eta)=2,\eta(T)\neq\eta(-T))$ \item[${\rm (x)}$] $\eta(T)^{(1)} \quad (\eta\in I, \deg(\eta)=3)$ \item[${\rm (xi)}$] $(T-\lambda)^{(1)} \cdot \pi(T^2)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times, \pi\in I, \deg(\pi)=1)$ \item[${\rm (xii)}$] $(T-\lambda)^{(1)} \eta(T)^{(1)} \quad (\lambda\in \mathbb{F}_q^\times, \eta\in I, \deg(\eta)=2,\eta(T)\neq\eta(-T))$ \item[${\rm (xiii)}$] $(T-\lambda_1)^{(1)} (T-\lambda_2)^{(1)} (T-\lambda_3)^{(1)} \quad (\lambda_i \in \mathbb{F}_q^\times, \lambda_i\neq -\lambda_j)$ \item[${\rm (xiv)}$] $(T-\lambda)^{(1)} (T+\lambda)^{(1)} \cdot (T-\lambda')^{(1)} \quad (\lambda,\lambda'\in \mathbb{F}_q^\times, \lambda'\neq \pm \lambda)$ \item[${\rm (xv)}$] $(T-\lambda)^{(2)} (T+\lambda)^{(1)} \quad (\lambda \in \mathbb{F}_q^\times)$ \item[${\rm (xvi)}$] $(T-\lambda)^{(1,1)} (T+\lambda)^{(1)} \quad (\lambda \in \mathbb{F}_q^\times)$.
\end{enumerate} These quadratic cycle types give rise to the following quadratic class types: \begin{enumerate} \item[${\rm (i)}$] $0^{(1,1,1)}$ \item[${\rm (ii)}$] $0^{(2,1)}$ \item[${\rm (iii)}$] $0^{(3)}$ \item[${\rm (iv)}$] $0^{(1,1)}\cdot \left( 1^{(1)} \right)_{(-1)}$ \item[${\rm (v)}$] $0^{(2)}\cdot \left( 1^{(1)} \right)_{(-1)}$ \item[${\rm (vi)}$] $0^{(1)}\cdot \left( 1^{(1)} 1^{(1)} \right)_{(+1)}$ \item[${\rm (vii)}$] $0^{(1)}\cdot \left( 1^{(1)} \right)_{(2)}$ \item[${\rm (viii)}$] $0^{(1)}\cdot \left( 1^{(1)} 1^{(1)} \right)_{(-1)}$ \item[${\rm (ix)}$] $0^{(1)}\cdot \left( 2^{(1)} \right)_{(-1)}$ \item[${\rm (x)}$] $\left( 3^{(1)} \right)_{(-1)}$ \item[${\rm (xi)}$] $\left( 1^{(1)} \right)_{(2)} \cdot \left(1^{(1)}\right)_{(-1)}$ \item[${\rm (xii)}$] $\left( 1^{(1)} 2^{(1)} \right)_{(-1)}$ \item[${\rm (xiii)}$] $\left( 1^{(1)} 1^{(1)} 1^{(1)} \right)_{(-1)}$ \item[${\rm (xiv)}$] $\left( 1^{(1)} 1^{(1)} \right)_{(+1)} \cdot \left( 1^{(1)} \right)_{(-1)}$ \item[${\rm (xv)}$] $\left( 1^{(2)} 1^{(1)} \right)_{(+1)}$ \item[${\rm (xvi)}$] $\left( 1^{(1,1)} 1^{(1)} \right)_{(+1)}$. \end{enumerate} \subsubsection*{Quadratic Gauss sums and matrix spheres of $3\times 3$ matrices} Table \ref{rank-3-table} computes invariants of quadratic class types in ${\rm M}_3(\mathbb{F}_q)$. The last column of Table \ref{rank-3-table} estimates the contribution of each quadratic class type to the sums over $S$ in \eqref{eq:IR:sphere-size} and \eqref{eq:IR:FT-sphere-nonzero-phase}. \begin{prop}\label{prop:matrix-sphere-rank3} Suppose that $r\geq 3$. If $T\in {\rm M}_3$ and $\mathbf{M}\in {\rm M}_3^r\setminus\{{\bf 0}\}$, then \begin{equation} \# \sigma_T \sim q^{9r-9} \label{eq:IR:sphere-size-rank3} \end{equation} and \begin{equation} \widehat{\sigma}_T(\mathbf{M}) = O(q^{-2r-5}). \label{eq:IR:FT-sphere-nonzero-phase-rank3} \end{equation} \end{prop} \begin{proof} We first compute the size of the matrix sphere $\sigma_T$. By \eqref{eq:IR:sphere-size}, we have $$ \# \sigma_T = q^{9(r-1)} + \frac{1}{q^{9}} \sum_{S\in {\rm M}_3\setminus\{0\}} \psi\(-ST\) G(S,0)^r . $$ By the column `Incidence contribution' of Table \ref{rank-3-table}, the sum over $S$ is $O(q^{7r+4})$. Since $r\geq 3$, we conclude that $\# \sigma_T \sim q^{9r-9}$. We then compute Fourier transforms of the matrix sphere $\sigma_T$. For $\mathbf{M}\in {\rm M}_3^r\setminus\{{\bf 0}\}$, by \eqref{eq:IR:FT-sphere-nonzero-phase} we have $$ \widehat{\sigma}_T(\mathbf{M}) = \frac{1}{q^{9(r+1)}} \sum_{S\in{\rm M}_3\setminus\{0\}} \psi(-ST) \prod_{i=1}^r G(S,-M_i). $$ By the column `Incidence contribution' of Table \ref{rank-3-table}, the sum over $S$ is $O(q^{7r+4})$. Therefore $\widehat{\sigma}_T(\mathbf{M}) = O(q^{-2r-5})$.
\end{proof} \begin{table}[H] \begin{tabular}{|c|l|l|l|l|l|} \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Quadratic\\ class type\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Radical\\ invariant\end{tabular} & \begin{tabular}[c]{@{}l@{}}Centralizer\\ size\end{tabular} & \begin{tabular}[c]{@{}l@{}}Similarity class\\ size\end{tabular} & \begin{tabular}[c]{@{}l@{}}No.~quadratic\\ cycle types\end{tabular} & \begin{tabular}[c]{@{}l@{}}Incidence\\ contribution\end{tabular} \\ \hline ${\rm (i)}$ & $9$ & $\sim q^9$ & $\sim 1$ & $1$ & $q^{9r}$ \\ \hline ${\rm (ii)}$ & $5$ & $\sim q^5$ & $\sim q^4$ & $\asymp 1$ & $O(q^{7r+4})$ \\ \hline ${\rm (iii)}$ & $3$ & $\sim q^3$ & $\sim q^6$ & $\asymp 1$ & $O(q^{6r+6})$ \\ \hline ${\rm (iv)}$ & $4$ & $\sim q^5$ & $\sim q^4$ & $\asymp q$ & $O(q^{\frac{13r}{2}+5})$ \\ \hline ${\rm (v)}$ & $2$ & $\sim q^3$ & $\sim q^6$ & $\asymp q$ & $O(q^{\frac{11r}{2}+7})$ \\ \hline ${\rm (vi)}$ & $3$ & $\sim q^3$ & $\sim q^6$ & $\asymp q$ & $O(q^{6r+7})$ \\ \hline ${\rm (vii)}$ & $3$ & $\sim q^3$ & $\sim q^6$ & $\asymp q$ & $O(q^{6r+7})$ \\ \hline ${\rm (viii)}$ & $1$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^2$ & $O(q^{5r+8})$ \\ \hline ${\rm (ix)}$ & $1$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^2$ & $O(q^{5r+8})$ \\ \hline ${\rm (x)}$ & $0$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^3$ & $O(q^{\frac{9r}{2}+9})$ \\ \hline ${\rm (xi)}$ & $2$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^2$ & $O(q^{\frac{11r}{2}+8})$ \\ \hline ${\rm (xii)}$ & $0$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^3$ & $O(q^{\frac{9r}{2}+9})$ \\ \hline ${\rm (xiii)}$ & $0$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^3$ & $O(q^{\frac{9r}{2}+9})$ \\ \hline ${\rm (xiv)}$ & $2$ & $\sim q^3$ & $\sim q^6$ & $\asymp q^2$ & $O(q^{\frac{11r}{2}+8})$ \\ \hline ${\rm (xv)}$ & $2$ & $\sim q^3$ & $\sim q^6$ & $\asymp q$ & $O(q^{\frac{11r}{2}+7})$ \\ \hline ${\rm (xvi)}$ & $4$ & $\sim q^5$ & $\sim q^4$ & $\asymp q$ & $O(q^{\frac{13r}{2}+5})$ \\ \hline \end{tabular} \caption{\label{rank-3-table} Quadratic class types in ${\rm M}_3(\mathbb{F}_q)$ and their invariants.} \end{table} \begin{proof}[Proof of Theorem \ref{thm:strong-Falconer-rank3}] It follows from \eqref{eq:IR:incidence} that $$ \nu(T) = \frac{(\# \sigma_T)}{q^{9r}} + \frac{q^{18r}}{(\# E)^2} \sum_{\mathbf{M}\neq {\bf 0}} \left| \widehat{E}(\mathbf{M}) \right|^2 \widehat{\sigma}_T(\mathbf{M}). $$ By the Plancherel formula we have $$ \sum_{\mathbf{M}\neq {\bf 0}} \left| \widehat{E}(\mathbf{M}) \right|^2 \leq \sum_{\mathbf{M}\in {\rm M}_3^r} \left| \widehat{E}(\mathbf{M}) \right|^2 = \frac{(\# E)}{q^{9r}}. $$ Therefore, by \eqref{eq:IR:FT-sphere-nonzero-phase-rank3}, $$ \nu(T) = \frac{(\# \sigma_T)}{q^{9r}} + O\left( \frac{q^{7r-5}}{(\# E)} \right) . $$ In view of \eqref{eq:IR:sphere-size-rank3}, $\nu(T)>0$ provided that $$ (\# E) \gg_\epsilon q^{7r+4+\epsilon}. $$ \end{proof} \section{Matrices of rank at least 3}\label{sect:proofs-general-rank} Throughout this section, we write ${\rm M}_n = {\rm M}_n(\mathbb{F}_q)$ and assume that $n\geq 3$. Let $\tau$ be a given quadratic class type in ${\rm M}_n(\mathbb{F}_q)$. For any matrix $A \in {\rm M}_n(\mathbb{F}_q)$ which has quadratic class type $\tau_A=\tau$, we want to estimate the following quantities: \begin{itemize} \item the radical invariant $\rho=\rho_\tau=\rho_A$, \item the size $c=c_\tau=c_A$ of the centralizer in ${\rm GL}_n(\mathbb{F}_q)$ of $A$, \item the size $s=s_\tau=s_A$ of the similarity class of $A$, \item and the number $y=y_\tau=y_A$ of quadratic cycle types which have quadratic class type $\tau$. 
\end{itemize} \subsection{Quadratic class type \texorpdfstring{$0^\alpha$}{TEXT}}\label{subsect:5-cycle-0} Consider the quadratic class type $\tau=0^\alpha$ where $$ \alpha = 1^{(e_1)} 2^{(e_2)} \dots k^{(e_k)} \quad (k \geq 1) $$ is the partition of size $|\alpha|=\sum_{i=1}^k ie_i=n$ with $e_1$ parts equal to $1$, $e_2$ parts equal to $2$, \ldots, $e_k$ parts equal to $k$. When $k=1$, the invariants of the quadratic class type $\tau$ can be computed easily. \begin{lem}\label{lem:quad-type-0-triv} Let $\alpha = 1^{(n)}$ and $A\in{\rm M}_n$ be a matrix with quadratic class type $\tau=0^\alpha$. Then $A$ is the zero matrix and we have \begin{enumerate} \item[{\rm (i)}] $\rho_\tau=n^2$; \item[{\rm (ii)}] $c_\tau\sim q^{n^2}$ and $s_\tau\sim 1$; \item[{\rm (iii)}] $y_\tau=1$; \item[{\rm (iv)}] if $r\in\mathbb{N}$ and $\mathcal{T}=q^{\frac{r\rho_\tau}{2}}y_\tau s_\tau$, then $\mathcal{T} \sim q^{\frac{rn^2}{2}}$. \end{enumerate} \end{lem} \begin{proof} It is plain that $A$ is the zero matrix. Parts {\rm (i)}--{\rm (iii)} follow immediately from \eqref{eq:radical-invariant}, \eqref{eq:cent-size-2} and the definitions of quadratic cycle type and quadratic class type. Part {\rm (iv)} can be readily derived from parts {\rm (i)}--{\rm (iii)}. \end{proof} \begin{lem}\label{lem:quad-type-0-nontriv} Let $$ \alpha = 1^{(e_1)} 2^{(e_2)} \dots k^{(e_k)} $$ with $k\geq 2$ and $\tau=0^\alpha$. We have: \begin{enumerate} \item[{\rm (i)}] $\rho_\tau \leq n^2-2n+2$; \item[{\rm (ii)}] $c_\tau\sim q^{\rho_\tau}$, $s_\tau\sim q^{n^2-\rho_\tau}$; \item[{\rm (iii)}] $y_\tau=1$; \item[{\rm (iv)}] if $r\in\mathbb{N}$ satisfies $r\geq 2$ and $\mathcal{T}=q^{\frac{r\rho_\tau}{2}}y_\tau s_\tau$, then $$ \mathcal{T} = O\(q^{\frac{rn^2}{2}-(n-1)(r-2)}\). $$ \end{enumerate} \end{lem} \begin{proof} For simplicity, abbreviate $\rho=\rho_\tau, c=c_\tau,s=s_\tau,y=y_\tau$. Statement {\rm (iii)} is trivial. First note that $n=\sum_{i=1}^k ie_i$. It follows from \eqref{eq:radical-invariant} that \begin{equation}\label{eq-pf:rad-inv-form} \rho = \sum_{i=1}^k ie_i^2 + 2\sum_{1\leq i<j\leq k} ie_ie_j. \end{equation} Also note that the identity \eqref{eq:cent-size-2} yields $c\sim q^d$ with $$ d = \sum_{i=1}^{k} \(\sum_{j=i}^k e_j \)^2 = \rho . $$ Hence {\rm (ii)} follows. We claim the following stronger inequality \begin{equation}\label{eq-pf:rad-inv-ineq-main} \rho \leq n^2-(k-1)(2n-k); \end{equation} the bound {\rm (i)} is an immediate consequence of this claim. To show the claim, we observe that \eqref{eq-pf:rad-inv-form} implies \begin{equation}\label{eq-pf:rad-inv-ineq} k\rho \leq n^2 + (k-1)(n-ke_k)^2. \end{equation} In fact, \eqref{eq-pf:rad-inv-ineq} is equivalent to \begin{equation}\label{eq-pf:rad-inv-ineq-1} \sum_{i=1}^k ike_i^2 + \sum_{1\leq i<j\leq k} 2ik e_ie_j \leq \(\sum_{i=1}^k ie_i\)^2 + (k-1)\(\sum_{i=1}^{k-1} ie_i\)^2 . \end{equation} Viewing each $e_i \,\, (1\leq i\leq k)$ as a variable, and hence both sides of \eqref{eq-pf:rad-inv-ineq-1} as quadratic forms, one readily verifies this inequality by expanding and checking that any quadratic form coefficient of the left hand side is at most the corresponding coefficient of the right hand side. Since $e_k\geq 1$, we deduce from \eqref{eq-pf:rad-inv-ineq} that $$ k\rho \leq n^2 + (k-1)(n-k)^2 , $$ from which the claim \eqref{eq-pf:rad-inv-ineq-main} follows. The bound {\rm (i)} is proved. Finally, on combining {\rm (i)}--{\rm (iii)} we conclude {\rm (iv)}.
\end{proof} \subsection{Quadratic class type \texorpdfstring{$\prod_{u=1}^{l} \left( d_u^{\beta_u} d_u^{\gamma_u} \right)_{(+1)}$}{TEXT}}\label{subsect:5-cycle+1} Consider in ${\rm M}_n$ the quadratic class type \begin{equation}\label{eq:quad-type-plus1-comp} \tau = \prod_{u=1}^{l} \left(d_u^{\beta_u} d_u^{\gamma_u} \right)_{(+1)} . \end{equation} In particular, $\sum_{u=1}^{l} d_u(|\beta_u|+ |\gamma_u|)=n$. \begin{lem}\label{lem:quad-type-plus1} In ${\rm M}_n$ let $$ \tau = \prod_{u=1}^{l} \left(d_u^{\beta_u} d_u^{\gamma_u} \right)_{(+1)}. $$ We have: \begin{enumerate} \item[{\rm (i)}] $\rho_\tau \leq \frac{n^2}{4}$; \item[{\rm (ii)}] if $r\in\mathbb{N}$ satisfies $r\geq 2$ and $\mathcal{T}=q^{\frac{r\rho_\tau}{2}}y_\tau s_\tau$, then $$ \mathcal{T} = O\(q^{\frac{rn^2}{8}+n^2}\). $$ \end{enumerate} \end{lem} \begin{proof} Let us write the partitions $\beta_u$ and $\gamma_u$ as \begin{align*} \beta_u &= 1^{\(e_{u,1}\)} 2^{\(e_{u,2}\)} \ldots r_u^{\(e_{u,r_u}\)} , \\ \gamma_u &= 1^{\(f_{u,1}\)} 2^{\(f_{u,2}\)} \ldots s_u^{\(f_{u,s_u}\)} . \end{align*} Without loss of generality, we may assume that $r_u\leq s_u$. In view of \eqref{eq:radical-invariant}, we have $\rho_\tau=\sum_{u=1}^{l} \rho_u$ where $$ \rho_u := \sum_{i=1}^{r_u} i e_{u,i} f_{u,i} + \sum_{\substack{1\leq i\leq r_u \\ 1\leq j\leq s_u \\ i\neq j}} \min(i,j) e_{u,i} f_{u,j} . $$ It follows that \begin{align*} \rho_u &\leq \sum_{i=1}^{r_u} \(i e_{u,i}\) f_{u,i} + \sum_{\substack{1\leq i\leq r_u \\ 1\leq j\leq s_u \\ i\neq j}} \(i e_{u,i}\) f_{u,j} \\ &\leq \(\sum_{i=1}^{r_u} i e_{u,i}\) \( \sum_{j=1}^{s_u} f_{u,j} \) \leq \(\sum_{i=1}^{r_u} i e_{u,i}\) \( \sum_{j=1}^{s_u} j f_{u,j} \) \\ &\leq \frac{1}{4} \(\sum_{i=1}^{r_u} i e_{u,i} + \sum_{j=1}^{s_u} j f_{u,j} \)^2 = \frac{1}{4} \( |\beta_u|+ |\gamma_u| \)^2. \end{align*} Therefore $$ \rho_\tau=\sum_{u=1}^{l} \rho_u \leq \frac{1}{4} \sum_{u=1}^{l} \( |\beta_u|+ |\gamma_u| \)^2 \leq \frac{1}{4} \( \sum_{u=1}^{l} d_u \( |\beta_u|+ |\gamma_u| \) \)^2 = \frac{n^2}{4}. $$ Thus {\rm (i)} follows. Because the cardinality of ${\rm M}_n(\mathbb{F}_q)$ is $q^{n^2}$, it is evident that $s_\tau y_\tau \leq q^{n^2}$. Finally, we combine this bound with {\rm (i)} to conclude {\rm (ii)}. \end{proof} \subsection{Quadratic class type \texorpdfstring{$\prod_{u=1}^{l} \left( d_u^{\lambda_u} \right)_{(2)}$}{TEXT}}\label{subsect:5-cycle+2} Consider in ${\rm M}_n$ the quadratic class type \begin{equation}\label{eq:quad-type-2-comp} \tau = \prod_{u=1}^{l} \left( d_u^{\lambda_u} \right)_{(2)}. \end{equation} In particular, $2\sum_{u=1}^{l} d_u |\lambda_u| =n$. \begin{lem}\label{lem:quad-type-2} In ${\rm M}_n$ let $$ \tau = \prod_{u=1}^{l} \left( d_u^{\lambda_u} \right)_{(2)}. $$ We have: \begin{enumerate} \item[{\rm (i)}] $\rho_\tau \leq \frac{n^2}{2}$; \item[{\rm (ii)}] if $r\in\mathbb{N}$ satisfies $r\geq 2$ and $\mathcal{T}=q^{\frac{r\rho_\tau}{2}}y_\tau s_\tau$, then $$ \mathcal{T} = O\(q^{\frac{rn^2}{4}+n^2}\). $$ \end{enumerate} \end{lem} \begin{proof} For a partition $\lambda$, write $\rho(0^\lambda)=\rho_{0^\lambda}$ for the radical invariant of any matrix with the quadratic class type $0^\lambda$. By virtue of \eqref{eq:radical-invariant}, we have $$ \rho_\tau = 2\sum_{u=1}^{l} d_u \rho(0^{\lambda_u}) . $$ It follows that \begin{equation*} \rho_\tau \leq 2\sum_{u=1}^{l} d_u |\lambda_u|^2 \leq 2 \( \sum_{u=1}^{l} d_u |\lambda_u| \)^2 = \frac{n^2}{2}. \end{equation*} Thus {\rm (i)} follows. 
Because the cardinality of ${\rm M}_n(\mathbb{F}_q)$ is $q^{n^2}$, it is evident that $s_\tau y_\tau \leq q^{n^2}$. We then combine this bound with {\rm (i)} to conclude {\rm (ii)}. \end{proof} \subsection{Quadratic class type \texorpdfstring{$\prod_{u=1}^{l} \left( d_u^{\kappa_u} \right)_{(-1)}$}{TEXT}}\label{subsect:5-cycle-1} Consider in ${\rm M}_n$ the quadratic class type \begin{equation}\label{eq:quad-type-neg1-comp} \tau = \prod_{u=1}^{l} \left( d_u^{\kappa_u} \right)_{(-1)}. \end{equation} In particular, $\sum_{u=1}^{l} d_u |\kappa_u| =n$. \begin{lem}\label{lem:quad-type-neg1} In ${\rm M}_n$ let $$ \tau = \prod_{u=1}^{l} \left( d_u^{\kappa_u} \right)_{(-1)}. $$ We have: \begin{enumerate} \item[{\rm (i)}] $\rho_\tau = 0$; \item[{\rm (ii)}] if $r\in\mathbb{N}$ and $\mathcal{T}=q^{\frac{r\rho_\tau}{2}}y_\tau s_\tau$, then $$ \mathcal{T} \leq q^{n^2}. $$ \end{enumerate} \end{lem} \begin{proof} It follows from \eqref{eq:radical-invariant} that $\rho_\tau = 0$. Because the cardinality of ${\rm M}_n(\mathbb{F}_q)$ is $q^{n^2}$, it is evident that $s_\tau y_\tau \leq q^{n^2}$, whence {\rm (ii)}. \end{proof} \subsection{Matrix spheres in \texorpdfstring{${\rm M}_n(\mathbb{F}_q), \, n\geq 3$}{TEXT}}\label{subsect:4-general-rank-matrices} \begin{lem}\label{lem:gauss-sum-general-rank} Let $\tau$ be a nontrivial quadratic class type in ${\rm M}_n$ with $n\geq 3$; in other words, $\tau \neq 0^{1^{(n)}}$. Suppose that $r\in\mathbb{N}$ satisfies $r\geq 2$ and let $\mathcal{T}=q^{\frac{r\rho_\tau}{2}}y_\tau s_\tau$. Then $$ \mathcal{T} = O\(q^{\frac{rn^2}{2}-(n-1)(r-2)}\). $$ \end{lem} \begin{proof} Let us decompose $\tau$ as $$ \tau = 0^\alpha \cdot \prod_{u=1}^{l_1} \left( d_u^{\beta_u} d_u^{\gamma_u}\right)_{(+1)} \cdot \prod_{u=1}^{l_2} \left( d_u^{\lambda_u} \right)_{(2)} \cdot \prod_{u=1}^{l_3} \left( d_u^{\kappa_u} \right)_{(-1)}. $$ If $l_1=l_2=l_3=0$, then the lemma is nothing but part {\rm (iv)} of Lemma \ref{lem:quad-type-0-nontriv}. We may now assume that $l_1+l_2+l_3>0$. Put \begin{align*} n_0 &= |\alpha| \\ n_1 &= \sum_{u=1}^{l_1} d_u \( |\beta_u| + |\gamma_u| \) \\ n_2 &= 2 \sum_{u=1}^{l_2} d_u |\lambda_u| \\ n_3 &= \sum_{u=1}^{l_3} d_u |\kappa_u|, \end{align*} so that $n=n_0+n_1+n_2+n_3$. On combining the decomposition of $\tau$ with the last parts of Lemmas \ref{lem:quad-type-0-triv}, \ref{lem:quad-type-0-nontriv}, \ref{lem:quad-type-plus1}, \ref{lem:quad-type-2}, and \ref{lem:quad-type-neg1}, we infer that $\mathcal{T}=\prod_{i=0}^{3}\mathcal{T}_i$ where \begin{align*} \mathcal{T}_0 &= O\( q^{\frac{rn_0^2}{2}} \) \\ \mathcal{T}_1 &= O\( q^{\frac{rn_1^2}{8}+n_1^2} \) \\ \mathcal{T}_2 &= O\( q^{\frac{rn_2^2}{4}+n_2^2} \) \\ \mathcal{T}_3 &= O\( q^{n_3^2} \) . \end{align*} It remains to show that \begin{equation}\label{eq-pf:form-reduced} \frac{rn_0^2}{2} + \frac{rn_1^2}{8}+n_1^2 + \frac{rn_2^2}{4}+n_2^2 + n_3^2 \leq \frac{rn^2}{2} - (n-1)(r-2). \end{equation} Denote by $\mathcal{L}$ (resp.~$\mathcal{R}$) the left hand side (resp.~the right hand side) of \eqref{eq-pf:form-reduced}. We have \begin{equation}\label{eq-pf:form-reduced-1} \mathcal{L} \leq \frac{rn_0^2}{2} + \frac{r(n-n_0)^2}{4}+(n-n_0)^2. \end{equation} The inequality \eqref{eq-pf:form-reduced} then follows from \eqref{eq-pf:form-reduced-1} and the following two inequalities, which can be verified easily: $$ \frac{rn_0^2}{2} + \frac{r(n-n_0)^2}{4}+(n-n_0)^2 \leq \frac{r}{2}\( n^2 - 2n + \frac{3}{2}\) + 1 $$ and $$ \frac{r}{2}\( n^2 - 2n + \frac{3}{2}\) + 1 \leq \frac{rn^2}{2} - (n-1)(r-2). $$ The proof is complete.
\end{proof} \begin{prop}\label{prop:matrix-sphere-general-rank} Suppose that $n\geq 3$ and $r\geq 3$. If $T\in {\rm M}_n$ and $\mathbf{M}\in {\rm M}_n^r\setminus\{{\bf 0}\}$, then \begin{equation}\label{eq:IR:sphere-size-general-rank} \# \sigma_T \sim q^{n^2(r-1)} \end{equation} and \begin{equation}\label{eq:IR:FT-sphere-nonzero-phase-general-rank} \widehat{\sigma}_T(\mathbf{M}) = O(q^{-(r-2)(n-1)-n^2}). \end{equation} \end{prop} \begin{proof} We first compute the size of the matrix sphere $\sigma_T$. By \eqref{eq:IR:sphere-size}, we have \begin{align*} \# \sigma_T &= q^{n^2(r-1)} + \frac{1}{q^{n^2}} \sum_{S\in {\rm M}_n\setminus\{0\}} \psi\(-ST\) G(S,0)^r \\ &=: q^{n^2(r-1)} + \frac{\mathcal{S}}{q^{n^2}} . \end{align*} By virtue of Proposition \ref{prop:matrix-Gauss}, we have $$ G(S,0) \ll q^{\frac{n^2+\rho_S}{2}}. $$ To upper bound $\mathcal{S}$, we partition the sum over $S$ into quadratic class types and apply Lemma \ref{lem:gauss-sum-general-rank}. We have $$ \mathcal{S} \ll q^{rn^2-(n-1)(r-2)}. $$ Thus \eqref{eq:IR:sphere-size-general-rank} follows. We then compute Fourier transforms of the matrix sphere $\sigma_T$. For $\mathbf{M}\in {\rm M}_n^r\setminus\{0\}$, by \eqref{eq:IR:FT-sphere-nonzero-phase} we have \begin{equation*} \widehat{\sigma}_T(\mathbf{M}) = \frac{1}{q^{n^2(r+1)}} \sum_{S\in{\rm M}_n\setminus\{0\}} \psi(-ST) \prod_{i=1}^r G(S,-M_i) =: \frac{\mathcal{S}'}{q^{n^2(r+1)}} . \end{equation*} To upper bound $\mathcal{S}'$, we partition the sum over $S$ into quadratic class types and apply Lemma \ref{lem:gauss-sum-general-rank}. We have $$ \mathcal{S}' \ll q^{rn^2-(n-1)(r-2)}. $$ Thus \eqref{eq:IR:FT-sphere-nonzero-phase-general-rank} follows. \end{proof} We are in a position to prove the main result on the incidence function for a matrix algebra of rank at least 3. \begin{proof}[Proof of Theorem \ref{thm:strong-Falconer-general-rank}] It follows from \eqref{eq:IR:incidence} that $$ \nu(T) = \frac{(\# \sigma_T)}{q^{rn^2}} + \frac{q^{2rn^2}}{(\# E)^2} \sum_{\mathbf{M}\neq {\bf 0}} \left| \widehat{E}(\mathbf{M}) \right|^2 \widehat{\sigma}_T(\mathbf{M}). $$ By the Plancherel formula we have $$ \sum_{\mathbf{M}\neq {\bf 0}} \left| \widehat{E}(\mathbf{M}) \right|^2 \leq \sum_{\mathbf{M}\in {\rm M}_n^r} \left| \widehat{E}(\mathbf{M}) \right|^2 = \frac{(\# E)}{q^{rn^2}}. $$ Therefore, by \eqref{eq:IR:FT-sphere-nonzero-phase-general-rank}, $$ \nu(T) = \frac{(\# \sigma_T)}{q^{rn^2}} + O\left( \frac{q^{rn^2-(r-2)(n-1)-n^2}}{(\# E)} \right) . $$ In view of \eqref{eq:IR:sphere-size-general-rank}, $\nu(T)>0$ provided that $$ (\# E) \gg_\epsilon q^{rn^2-(r-2)(n-1)+\epsilon}. $$ \end{proof} \end{document}
arXiv
\begin{document} \title{A note on the consistency of the Narain-Horvitz-Thompson estimator} \author{Guillaume Chauvet \thanks{ENSAI (CREST), Campus de Ker Lann, Bruz - France, [email protected]}} \maketitle \begin{abstract} For the Narain-Horvitz-Thompson estimator to have usual asymptotic properties such as consistency, some conditions on the sampling design and on the variable of interest are needed. \citet{car:cha:gog:lab:2010} give some sufficient conditions for the mean square consistency, but one of them is usually difficult to prove or does not hold for some unequal probability sampling designs. We propose alternative conditions for the mean square consistency of the Narain-Horvitz-Thompson estimator. A specific result is also proved in the case when a martingale sampling algorithm is used, which implies consistency under a fast algorithm for the cube method. \end{abstract} \noindent{\small{{\it Keywords:} Cube method; Martingale algorithm; Mean-square consistency; Multinomial sampling; Sen-Yates-Grundy conditions.}} \section{Introduction} \label{sec:1} \noindent When a random sample $S$ is selected inside a finite population $U$, the Narain (1951)-Horvitz-Thompson (1952) \nocite{nar:1951,hor:tho:1952} estimator $\hat{t}_{y\pi}$ is often used for the total $t_y=\sum_{k \in U} y_k$ of some variable of interest. For the Narain-Horvitz-Thompson estimator to have usual asymptotic properties, such as asymptotic normality or consistency, some conditions on the sampling design and on the variable of interest are needed. Following the approach in \citet{rob:sar:1983} and \citet{bre:ops:2000}, \citet{car:cha:gog:lab:2010} give sufficient conditions for the mean square consistency. However, one of these conditions is related to the second-order inclusion probabilities and is usually difficult to prove for unequal probability sampling designs. \\ \noindent In this note, we propose alternative conditions for the mean square consistency of the Narain-Horvitz-Thompson estimator, i.e. under which \begin{eqnarray} \label{mean:square:cons} E\left\{N^{-1}(\hat{t}_{y\pi}-t_y)\right\}^2 & = & O(n^{-1}) \end{eqnarray} with $N$ the population size. The proposed conditions are usually easier to prove, and are known to hold for several sampling designs with unequal probabilities. We also give conditions under which the Narain-Horvitz-Thompson estimator is consistent in mean square under a martingale sampling algorithm, which implies consistency under a fast algorithm for the cube method \citep{dev:til:2004}. Our asymptotic framework is that described in \citet{isa:ful:1982}. We assume that the population $U$ belongs to a nested sequence $\{U_t\}$ of finite populations with increasing sizes $N_t$, and that the population vector of values $y_{Ut}=(y_{1t},\ldots,y_{Nt})^{\top}$ belongs to a sequence $\{y_{Ut}\}$ of $N_t$-vectors. For simplicity, the index $t$ will be suppressed in what follows and all limiting processes will be taken as $t \to \infty$. \section{Finite population framework} \label{sec:2} \noindent We note $\pi=(\pi_1,\ldots,\pi_N)^{\top}$ an $N$-vector of probabilities. Let $p(\cdot)$ denote a sampling design in $U$ with parameter $\pi$, that is, such that the expected number of draws for unit $k$ in the sample equals $\pi_k>0$. Let $n=\sum_{k \in U} \pi_k$ denote the integer average sample size. A random sample $S$, with or without repetitions, is selected in $U$ by means of the sampling design $p(\cdot)$.
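As a small numerical illustration of this setting, the following Python sketch (the population values and inclusion probabilities below are arbitrary choices) draws repeated Poisson samples with parameter $\pi$ and checks by simulation that the estimator $\sum_{k \in S} y_k/\pi_k$ introduced below is unbiased for $t_y$, with empirical variance close to the Poisson-sampling variance formula recalled below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 1000
y = rng.gamma(shape=2.0, scale=10.0, size=N)          # arbitrary variable of interest
pi = 0.1 + 0.8 * rng.random(N)                        # inclusion probabilities in [0.1, 0.9]
t_y = y.sum()

estimates = []
for _ in range(2000):
    I = rng.random(N) < pi                            # Poisson sampling: independent Bernoulli(pi_k) trials
    estimates.append(np.sum(y[I] / pi[I]))
estimates = np.array(estimates)

print("true total           :", round(t_y, 1))
print("mean of estimates    :", round(estimates.mean(), 1))               # unbiasedness
print("empirical variance   :", round(estimates.var(), 1))
print("theoretical variance :", round(np.sum(y ** 2 * (1 - pi) / pi), 1)) # Poisson-sampling formula
\end{verbatim}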
The total $t_y$ is unbiasedly estimated by \begin{eqnarray} \label{est:ht:hh} \hat{t}_{y} & = & \sum_{k \in U} \frac{y_k}{\pi_k}~I_k, \end{eqnarray} with $I=(I_1,\ldots,I_N)^{\top}$ and $I_k$ the number of times that unit $k$ is selected in the sample. The variance of $\hat{t}_{y}$ is \begin{eqnarray} \label{var:ht:hh} V \left(\hat{t}_{y}\right) & = & \sum_{k,l \in U} \frac{y_k}{\pi_k} \frac{y_l}{\pi_l}~Cov(I_k,I_l). \end{eqnarray} \noindent If $p(\cdot)$ is a without-replacement sampling design, a same unit $k$ may appear only once in the sample and $I_k$ is a sample membership indicator. Formula (\ref{est:ht:hh}) yields the Narain-Horvitz-Thompson estimator $\hat{t}_{y\pi}$ whose variance is \begin{eqnarray} \label{var:ht} V \left(\hat{t}_{y\pi}\right) & = & \sum_{k \in U} \left(\frac{y_k}{\pi_k}\right)^2 \pi_k(1-\pi_k) + \sum_{k \neq l \in U} \frac{y_k}{\pi_k} \frac{y_l}{\pi_l} (\pi_{kl}-\pi_k \pi_l), \end{eqnarray} with $\pi_{kl}$ the probability that units $k$ and $l$ are selected jointly in $S$. Poisson sampling \citep{haj:1964} is a particular without-replacement sampling design, obtained when the vector $I$ of sample membership indicators is obtained from $N$ independent Bernoulli trials. In such case, the variance of the Narain-Horvitz-Thompson estimator is \begin{eqnarray} \label{var:ht:pois} V\left(\hat{t}_{y\pi}^{po}\right) & = & \sum_{k \in U} \left(\frac{y_k}{\pi_k}\right)^2 \pi_k(1-\pi_k), \end{eqnarray} which is the first term of the variance in (\ref{var:ht}) for any without-replacement sampling design. \\ \noindent If $p(\cdot)$ is a with-replacement sampling design, a same unit $k$ may appear several times in the sample and formula (\ref{est:ht:hh}) yields the \citet{han:hur:1953} estimator $\hat{t}_{yHH}$. Multinomial sampling is a particular with-replacement sampling design, obtained when the sample $S$ is obtained from $n$ independent draws, some unit $k$ being selected with probability $n^{-1} \pi_k$ at each draw. In such case, the variance of the Hansen-Hurwitz estimator is \begin{eqnarray} \label{var:ht:mult} V\left(\hat{t}_{yHH}^{mult}\right) & = & \sum_{k \in U} \pi_k \left(\frac{y_k}{\pi_k}-\frac{t_y}{n} \right)^2. \end{eqnarray} With-replacement sampling designs are less common in surveys. We therefore confine our attention to without-replacement sampling designs and to the Narain-Horvitz-Thompson estimator $\hat{t}_{y\pi}$. However, the variance obtained under multinomial sampling will be a useful benchmark to prove the mean square consistency. \section{Sufficient conditions for mean-square consistency} \label{sec:3} \noindent From (\ref{var:ht}), we obtain \begin{eqnarray} \label{ineg:mean:square:1} V\left(\hat{t}_{y\pi}\right) & \leq & N^2 \left(\frac{1}{N~\min_{l \in U} \pi_l}+\frac{\max_{k \neq l \in U} |\pi_{kl}-\pi_k \pi_l|}{(\min_{l \in U} \pi_l)^2} \right) \times \frac{1}{N} \sum_{k \in U} y_k^2. \end{eqnarray} This directly leads to Proposition \ref{prop0} below. \begin{prop} \label{prop0} \citep{car:cha:gog:lab:2010}. Assume that the following conditions hold: \begin{itemize} \item[] H1. We assume that $\lim_{t \rightarrow \infty} \frac{n}{N} = f \in ]0,1[$. \item[] H2. We assume that $\min_{k \in U} \pi_k \geq \lambda_1>0$. \item[] H3. The variable $y$ has a bounded second moment, i.e. there exists some constant $C_1$ such that $N^{-1} \sum_{k \in U} y_k^2 \leq C_1$. \item[] H4: We have $\limsup_{t \rightarrow \infty} n \max_{k \neq l \in U} |\pi_{kl}-\pi_k \pi_l| < \infty$. 
\end{itemize} Then (\ref{mean:square:cons}) holds and the Narain-Horvitz-Thompson estimator is consistent in mean square. \end{prop} \noindent The assumptions in Proposition \ref{prop0} are essentially the same as those in \citet{car:cha:gog:lab:2010}, except for the assumption (H3) which was replaced with \begin{itemize} \item[] H3b. The variable $y$ is bounded, i.e. there exists some constant $C_1$ such that $|y_k| \leq C_1$. \end{itemize} \citet{car:gog:lar:2013} noticed however that (H3b) could be weakened to (H3). As noted by \citet{bre:ops:2000}, the assumption (H4) holds for stratified simple random sampling. This property also holds for rejective sampling \citep{haj:1964,boi:lop:rui:2012} and its Sampford-Durbin modification \citep{haj:1981}. However, this property is rather difficult to prove for other sampling designs with unequal probabilities. \\ \noindent When the variable $y$ has non-negative values, a first proposal is to replace (H4) with \begin{itemize} \item[] H4b: there exists some constant $a \geq 0$ such that for any vector $\pi$ of inclusion probabilities, we have for any $k \neq l \in U$: \begin{eqnarray} \pi_{kl} & \leq & \left(1+\frac{a}{n}\right) \times \pi_k \pi_l. \end{eqnarray} \end{itemize} From (\ref{var:ht}), this leads to \begin{eqnarray} \label{var:ineg:2} V(\hat{t}_{y\pi}) & \leq & N^2 \left(\frac{1}{N~\min_{l \in U} \pi_l}+ \frac{a}{n} \right) \times \frac{1}{N} \sum_{k \in U} y_k^2. \end{eqnarray} \begin{prop} \label{prop1} Assume that (H1)-(H3) and (H4b) hold, and that the variable $y$ has non-negative values. Then (\ref{mean:square:cons}) holds and the Narain-Horvitz-Thompson estimator is consistent in mean square. \end{prop} \noindent The assumption (H4b) will hold in particular with $a=0$ when the sampling design satisfies the so-called Sen (1953)-Yates-Grundy (1953) conditions \nocite{sen:1953,yat:gru:1953}, namely $\pi_{kl} \leq \pi_k \pi_l$ for any $k \neq l \in U$. This property holds for stratified simple random sampling, and for several sampling algorithms with unequal probability such as Poisson sampling; the Midzuno method, the elimination method, Chao's method and the pivotal method \citep{dev:til:1998}; the Sampford design \citep{gab:1981,gab:1984}; the conditional Poisson sampling design \citep{che:dem:liu:1994}. \\ \noindent In the case when the variable of interest may take both negative and non-negative values, we can consider the alternative condition that \begin{itemize} \item[] H4c: for any vector $\pi$ of inclusion probabilities, the variance of the Narain-Horvitz-Thompson estimator under the sampling design $p(\cdot)$ with parameter $\pi$ is no greater than the variance of the Hansen-Hurwitz estimator under multinomial sampling with parameter $\pi$. \end{itemize} Under (H4c), it follows from (\ref{var:ht:mult}) that for any variable $y$ \begin{eqnarray} V\left(\hat{t}_{y}\right) \leq \sum_{k \in U} \pi_k \left(\frac{y_k}{\pi_k}\right)^2 \leq \frac{N}{\min_{l \in U} \pi_l} \times \frac{1}{N} \sum_{k \in U} y_k^2. \label{var:ht:ineg2} \end{eqnarray} \begin{prop} \label{prop2} Assume that (H1)-(H3) and (H4c) hold. Then (\ref{mean:square:cons}) holds and the Narain-Horvitz-Thompson estimator is consistent in mean square.
\end{prop} \noindent The assumption (H4c) holds for simple random sampling, and for several sampling algorithms with unequal probability such as the Sampford design \citep{gab:1981,gab:1984}, the conditional Poisson sampling design \citep{qua:2008}, Chao's method \citep{sen:1989}, the elimination method \citep{dev:til:1998} and pivotal sampling \citep{cha:rui:2014}. Note that in case of pivotal sampling, numerous second-order inclusion probabilities are usually equal to zero \citep{dev:til:1998}, so that assumption (H4) does not hold while (H4b) and (H4c) are respected. \section{Consistency for a martingale sampling algorithm} \label{sec:4} \noindent A martingale sampling algorithm proceeds in steps $i=0,\ldots,T$ from $\pi(0)=\pi$ the vector of inclusion probabilities to $\pi(T)=I$ the final vector of sample membership indicators, such that the sequence $\{\pi(i)\}_{i=0,\ldots,T}$ is a discrete-time martingale with $\pi(i) \in [0,1]^N$ for any $i=0,\ldots,T$; see \citet{til:2011} and \citet{bre:cha:2011}. \\ \noindent Under a martingale sampling algorithm, we have $I-\pi=\sum_{i=0}^T \delta(i)$, where $\{\delta(i)\}_{i=0,\ldots,T}$ are the innovations of the martingale. Since these innovations are not correlated, we have \begin{eqnarray} V(I-\pi) = \sum_{i=0}^T V[\delta(i)] = E\left[ \sum_{i=0}^T \delta(i) \delta(i)^{\top} \right]. \label{var:mart:I} \end{eqnarray} We can write $\hat{t}_y-t_y=\check{y}^{\top} (I-\pi)$ where $\check{y}=(\pi_1^{-1} y_1,\ldots,\pi_N^{-1} y_N)^{\top}$. From (\ref{var:mart:I}), we obtain \begin{eqnarray} V(\hat{t}_y-t_y) & = & E\left[\sum_{i=0}^T \sum_{k,l \in U} \frac{y_k}{\pi_k} \frac{y_l}{\pi_l} \delta_k(i) \delta_l(i) \right]. \label{var:mart:tyhat} \end{eqnarray} \noindent We note \begin{eqnarray} \label{Ui} U_i & = & \{k \in U;~\delta_k(i) \neq 0\} \end{eqnarray} the random subset of units in $U$ that are affected by step $i$. Also, we note $\mathcal{C}=\max_{i=0,\ldots,T} \textrm{Card}(U_i)$. From (\ref{var:mart:tyhat}), we obtain \begin{eqnarray} V(\hat{t}_y-t_y) & = & E\left[\sum_{i=0}^T \sum_{k,l \in U_i} \frac{y_k}{\pi_k} \frac{y_l}{\pi_l} \delta_k(i) \delta_l(i) \right] \nonumber \\ & \leq & E\left[\sum_{i=0}^T \sum_{k,l \in U_i} \left|\frac{y_k}{\pi_k}\right| \times \left|\frac{y_l}{\pi_l}\right| \right] \nonumber \\ & \leq & \left(\frac{\max_{k \in U} |y_k|}{\min_{k \in U} \pi_k} \right)^2 \times \mathcal{C}^2 \times E(T). \label{var:mart:tyhat2} \end{eqnarray} \begin{prop} \label{prop3} Assume that assumptions (H1)-(H2) and (H3b) hold. Assume that $\mathcal{C}=O(1)$ and that $E(T)=O(N)$. Then (\ref{mean:square:cons}) holds and the Narain-Horvitz-Thompson estimator is consistent in mean square. \end{prop} \noindent Note that in Proposition \ref{prop3}, the stronger condition (H3b) on the variable $y$ is needed. Proposition \ref{prop3} is in particular useful when the sample $S$ is selected by means of the cube method \citep{dev:til:2004}. Suppose that a $q$-vector $x_k$ of auxiliary variables is known at the design stage for any unit $k \in U$. The $q \times N$ matrix $A = (x_k/\pi_k)_{k \in U}$ is called the matrix of constraints. The cube method enables the selection of samples such that the set of balancing equations \begin{eqnarray} \label{bal:samp} \sum_{k \in S} \frac{x_k}{\pi_k} & = & t_x \end{eqnarray} is respected, at least approximately. A fast procedure for balanced sampling proposed by \citet{cha:til:2006,cha:til:2007} is described in Algorithm \ref{algo:1}.
At any step $i$, $U_i \subset \{1,\ldots,N\}$ denotes the set of the $q+1$ first columns of $A$ (i.e., units $k$) such that $\pi_k(i)$ is not an integer. This is also the set of the $q+1$ first units in the population $U$ that are still neither selected nor rejected at step $i$. Also, $A_i$ denotes the sub-matrix of $A$ containing the columns in $U_i$. From the definition of $u(i)$ and $\delta(i)$ in Algorithm \ref{algo:1}, we have $\mathcal{C} \leq q+1$. Also, it is easily shown that $[(q+1)^{-1}N] \leq T \leq N$, with $[(q+1)^{-1}N]$ the largest integer smaller than $(q+1)^{-1}N$. Proposition \ref{prop4} below is thus an immediate consequence of Proposition \ref{prop3}. \begin{algorithm}[htb!] First initialize at $\pi(0)=\pi$. Next, at time $i=0,\cdots,T$, repeat the following steps: \begin{enumerate} \item If there exists some vector $v(i) \neq 0$ such that $v(i) \in Ker(A_i)$, then: \begin{enumerate} \item Take any such vector $v(i)$ (random or not), and take $u(i)$ such that \begin{eqnarray*} u_k(i) = \left\{\begin{array}{ll} v_k(i) & \textrm{ if } k \in U_i, \\ 0 & \textrm{ otherwise.} \end{array} \right. \end{eqnarray*} Compute $\lambda_1^*(i)$ and $\lambda_2^*(i)$, the largest values of $\lambda_1(i)$ and $\lambda_2(i)$ such that \begin{eqnarray*} 0 \le \pi(i)+\lambda_1(i) u(i) \le 1 & \textrm{ and } & 0 \le \pi(i)-\lambda_2(i) u(i) \le 1. \end{eqnarray*} \item Take $\pi(i+1)=\pi(i)+\delta(i)$, where \begin{eqnarray*} \delta(i) = \left\{\begin{array}{ll} \lambda_1^*(i) u(i) & \textrm{ with probability } \lambda_2^*(i)/\lbrace \lambda_1^*(i)+\lambda_2^*(i) \rbrace, \\ -\lambda_2^*(i) u(i) & \textrm{ with probability } \lambda_1^*(i)/\lbrace \lambda_1^*(i)+\lambda_2^*(i) \rbrace. \end{array} \right. \end{eqnarray*} \end{enumerate} \item Otherwise, drop the last column from the matrix $A_i$ and go back to Step 1. \end{enumerate} \caption{A fast procedure for the cube method} \label{algo:1} \end{algorithm} \begin{prop} \label{prop4} Assume that assumptions (H1)-(H2) and (H3b) hold. Assume that the sample $S$ is selected by means of Algorithm \ref{algo:1}, and that $q=O(1)$. Then $V\left\{N^{-1}(\hat{t}_{y}-t_y)\right\}=O(n^{-1})$ and the Narain-Horvitz-Thompson estimator is consistent in mean square. \end{prop} \noindent Other implementations of the cube method are possible, for which Proposition \ref{prop3} may not be suitable for obtaining the mean-square consistency. For the general balanced procedure described in Algorithm 8.3 in \citet{til:2011}, we have $U_i=\left\{k \in U;~\pi_k(i-1) \notin \{0,1\}\right\}$ which means that all the units that are still neither selected nor definitely rejected at step $i-1$ are possibly affected at step $i$. This leads to $\mathcal{C}=N$, so that the assumptions for Proposition \ref{prop3} are not fulfilled. \end{document}
arXiv
Eilenberg–Ganea theorem In mathematics, particularly in homological algebra and algebraic topology, the Eilenberg–Ganea theorem states that for every finitely generated group G with certain conditions on its cohomological dimension (namely $3\leq \operatorname {cd} (G)\leq n$), one can construct an aspherical CW complex X of dimension n whose fundamental group is G. The theorem is named after Polish mathematician Samuel Eilenberg and Romanian mathematician Tudor Ganea. The theorem was first published in a short paper in 1957 in the Annals of Mathematics.[1] Definitions Group cohomology: Let $G$ be a group and let $X=K(G,1)$ be the corresponding Eilenberg–MacLane space. Then we have the following singular chain complex which is a free resolution of $\mathbb {Z} $ over the group ring $\mathbb {Z} [G]$ (where $\mathbb {Z} $ is a trivial $\mathbb {Z} [G]$-module): $\cdots \xrightarrow {\delta _{n+1}} C_{n}(E)\xrightarrow {\delta _{n}} C_{n-1}(E)\rightarrow \cdots \rightarrow C_{1}(E)\xrightarrow {\delta _{1}} C_{0}(E)\xrightarrow {\varepsilon } \mathbb {Z} \rightarrow 0,$ where $E$ is the universal cover of $X$ and $C_{k}(E)$ is the free abelian group generated by the singular $k$-chains on $E$. The group cohomology of the group $G$ with coefficients in a $\mathbb {Z} [G]$-module $M$ is the cohomology of this chain complex with coefficients in $M$, and is denoted by $H^{*}(G,M)$. Cohomological dimension: A group $G$ has cohomological dimension $n$ with coefficients in $\mathbb {Z} $ (denoted by $\operatorname {cd} _{\mathbb {Z} }(G)$) if $n=\sup\{k:{\text{There exists a }}\mathbb {Z} [G]{\text{ module }}M{\text{ with }}H^{k}(G,M)\neq 0\}.$ Fact: $G$ has a projective resolution of length at most $n$, i.e., $\mathbb {Z} $ as a trivial $\mathbb {Z} [G]$-module has a projective resolution of length at most $n$, if and only if $H^{i}(G,M)=0$ for all $\mathbb {Z} [G]$-modules $M$ and for all $i>n$. Therefore, we have an alternative definition of cohomological dimension as follows: the cohomological dimension of G with coefficients in $\mathbb {Z} $ is the smallest n (possibly infinity) such that G has a projective resolution of length n, i.e., $\mathbb {Z} $ has a projective resolution of length n as a trivial $\mathbb {Z} [G]$ module. Eilenberg–Ganea theorem Let $G$ be a finitely presented group and $n\geq 3$ be an integer. Suppose the cohomological dimension of $G$ with coefficients in $\mathbb {Z} $ is at most $n$, i.e., $\operatorname {cd} _{\mathbb {Z} }(G)\leq n$. Then there exists an $n$-dimensional aspherical CW complex $X$ such that the fundamental group of $X$ is $G$, i.e., $\pi _{1}(X)=G$. Converse The converse of this theorem is a consequence of cellular homology and the fact that every free module is projective. Theorem: If X is an aspherical n-dimensional CW complex with π1(X) = G, then cdZ(G) ≤ n. Related results and conjectures For n = 1 the result is one of the consequences of Stallings theorem about ends of groups.[2] Theorem: Every finitely generated group of cohomological dimension one is free. For $n=2$ the statement is known as the Eilenberg–Ganea conjecture. Eilenberg–Ganea Conjecture: If a group G has cohomological dimension 2 then there is a 2-dimensional aspherical CW complex X with $\pi _{1}(X)=G$. It is known that given a group G with $\operatorname {cd} _{\mathbb {Z} }(G)=2$, there exists a 3-dimensional aspherical CW complex X with $\pi _{1}(X)=G$.
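Examples For $G=\mathbb {Z} ^{n}$ with $n\geq 3$, one has $\operatorname {cd} _{\mathbb {Z} }(G)=n$, and the $n$-dimensional torus $T^{n}$, which is a $K(\mathbb {Z} ^{n},1)$, is an aspherical $n$-dimensional CW complex with fundamental group $\mathbb {Z} ^{n}$; it realizes the dimension bound of the theorem directly. Similarly, a free group $F_{k}$ has cohomological dimension one, and a wedge of $k$ circles is an aspherical 1-dimensional CW complex with fundamental group $F_{k}$, which illustrates the case $n=1$ covered by the theorem of Stallings quoted above.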
See also • Eilenberg–Ganea conjecture • Group cohomology • Cohomological dimension • Stallings theorem about ends of groups References • Eilenberg, Samuel; Ganea, Tudor (1957). "On the Lusternik–Schnirelmann category of abstract groups". Annals of Mathematics. 2nd Ser. 65 (3): 517–518. doi:10.2307/1970062. JSTOR 1970062. MR 0085510. • Stallings, John R. (1968). "On torsion-free groups with infinitely many ends". Annals of Mathematics. 88: 312–334. MR 0228573. • Bestvina, Mladen; Brady, Noel (1997). "Morse theory and finiteness properties of groups". Inventiones Mathematicae. 129 (3): 445–470. Bibcode:1997InMat.129..445B. doi:10.1007/s002220050168. MR 1465330. S2CID 120422255. • Brown, Kenneth S. (1994). Cohomology of Groups. Graduate Texts in Mathematics. 87 (corrected reprint of the 1982 original). New York: Springer-Verlag. ISBN 0-387-90688-6. MR 1324339.
Wikipedia
Double-attention mechanism of sequence-to-sequence deep neural networks for automatic speech recognition Dongsuk Yook1∗ Dan Lim2 In-Chul Yoo1 1Artificial Intelligence Laboratory 2Department of Computer Science and Engineering, Korea University ∗Corresponding Author Sequence-to-sequence deep neural networks with attention mechanisms have shown superior performance across various domains, where the sizes of the input and the output sequences may differ. However, if the input sequences are much longer than the output sequences, and the characteristic of the input sequence changes within a single output token, the conventional attention mechanisms are inappropriate, because only a single context vector is used for each output token. In this paper, we propose a double-attention mechanism to handle this problem by using two context vectors that cover the left and the right parts of the input focus separately. The effectiveness of the proposed method is evaluated using speech recognition experiments on the TIMIT corpus. Keywords: Sequence-to-sequence, Deep neural network, Automatic speech recognition Recently, sequence-to-sequence Deep Neural Networks (DNN) have been widely used in various tasks such as machine translation,[1,2] image captioning,[3] and speech recognition.[4,5,6,7] Attention mechanisms[8,9,10] are critical in the successful application of the sequence-to-sequence models to those tasks, because the models learn the mapping between differently sized input and output sequences by using the attention mechanisms, thus enabling the models to focus on the relevant portion of the input sequence for each output token. For domains where the size of the input sequence is much larger than that of the output sequence, the attention mechanisms should be capable of handling a large area of the input focus for each output token. Particularly, in speech recognition, the input sequences (speech signals) are much longer than the corresponding output sequences (phoneme or word labels). Furthermore, the characteristic of the input speech signals often changes within a single phoneme segment due to the coarticulation effect, implying that a single context vector computed as a weighted average of the high-level representation of the input sequence is insufficient to cover the wide range of a varying input sequence. In this work, we propose a double-attention mechanism that can handle a large area of the input focus with a varying characteristic caused by the left and the right context in the input. It uses two context vectors that cover the left and the right parts of the input focus separately. The experimental results of speech recognition on the TIMIT corpus show that the proposed method is effective in enhancing the speech recognition performance. A sequence-to-sequence model uses the input vector sequence x1:T=(x1,x2,⋯,xT) and produces the output token sequence y1:N=(y1,y2,⋯,yN), whose length may differ from that of the input sequence.
As shown in Fig. 1, a sequence-to-sequence model typically consists of four modules: encoder, decoder, attender, and generator. A sequence-to-sequence deep neural network consisting of encoder, decoder, attender, and generator. The encoder uses the input vector sequence x1:T and transforms the input into a high-level representation h1:T=(h1,h2,⋯,hT) as follows: $$h_{1:T}=\mathrm{Encode}(x_{1:T}).$$ (1) Long Short-Term Memory (LSTM)[11,12] networks or Convolutional Neural Networks (CNNs)[13,14] are typically used for the encoder. Depending on the architecture of the encoder, the length of h1:T may be shorter than that of x1:T. The decoder computes its output si at each output token time step i by using the previous decoder's output si-1, the previous context vector ci-1, and the previous output token yi-1 as follows: $$s_i=\mathrm{Decode}(s_{i-1},c_{i-1},y_{i-1}).$$ (2) In this work, we used LSTM networks for the decoder. The attender creates the context vector ci from the encoder's output h1:T and the decoder's output si as follows: $$c_i=\mathrm{Attend}(h_{1:T},s_i).$$ (3) The context vector can be considered a relevant summary of h1:T that is useful in the prediction of the output token at each time step. Additionally, the attender may utilize the previous alignment vector, which will be explained in the following sections in detail. The context vector ci and the decoder's output si are fed into the generator to produce the conditional probability distribution of the output token yi as follows: $$P(y_i \mid x_{1:T},y_{1:i-1})=\mathrm{Generate}(s_i,c_i).$$ (4) The generator is typically implemented as a MultiLayer Perceptron (MLP) network with a softmax output layer. The context vector ci at the output token time step i is defined as a weighted sum of the encoder's outputs ht's as follows: $$c_i=\sum_t^{}a_{i,t}h_t,$$ (5) where the alignment vector ai is the probability distribution of the weights over h1:T. That is, ai,t represents the contribution of ht to the context vector ci, which is computed as follows: $$e_{i,t}=\mathrm{Score}(s_i,h_t),$$ (6) $$a_{i,t}=\exp(e_{i,t})/\sum_{t'}^{}\exp(e_{i,t'}).$$ (7) Typical choices for the score function, Score(), include an additive score function and a multiplicative score function defined as follows: $$e_{i,t}=w^T\tanh(\phi(s_i)+\psi(h_t)),\;(\mathrm{additive})$$ (8) $$e_{i,t}=\phi(s_i)\bullet\psi(h_t),\;(\mathrm{multiplicative})$$ (9) where ∙ denotes the dot product. ϕ() and ψ() are typically implemented using MLPs. These score functions are said to be content-based because each element, ei,t, is computed from the content of the decoder's output si and the encoder's output ht. The hybrid attention mechanism suggested by Chorowski et al.[8] extends the content-based attention mechanism by utilizing the previous alignment vector ai-1. At each output token time step i, the location-aware feature fi is calculated from the previous alignment vector ai-1 as follows: $$f_i=F\otimes\;a_{i-1},$$ (10) where F is a learnable convolution matrix, and ⊗ is a convolution operator. The location-aware feature is expected to help the sequence-to-sequence model learn the alignment better, as the current alignment between the speech signal and its corresponding text is dependent on the previous alignment information. It can be fed into the additive score function as follows: $$e_{i,t}=w^T\tanh(\phi(s_i)+\psi(h_t)+\theta(f_{i,t})),$$ (11) where θ() is another MLP.
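As a concrete illustration, the following minimal NumPy sketch implements one step of the location-aware additive attention of Eqs. (5)-(7), (10), and (11). It is not the implementation used in this paper: the single-channel convolution filter F, the purely linear projections standing in for ϕ(), ψ(), and θ(), and all array shapes are simplifying assumptions.

import numpy as np

def softmax(e):
    e = e - e.max()
    return np.exp(e) / np.exp(e).sum()

def location_aware_additive_attention(s_i, h, a_prev, W_s, W_h, W_f, w, F):
    """One attention step following Eqs. (5)-(7), (10), and (11).

    s_i    : decoder output at token step i, shape (d_s,)
    h      : encoder outputs h_{1:T}, shape (T, d_h)
    a_prev : previous alignment vector a_{i-1}, shape (T,)
    W_s, W_h, W_f : linear projections standing in for phi(), psi(), theta()
    w      : scoring vector, shape (d_a,)
    F      : 1-D convolution filter used to build the location-aware feature
    """
    # Eq. (10): location-aware feature f_i = F (convolved with) a_{i-1}
    f_i = np.convolve(a_prev, F, mode="same")          # shape (T,)

    # Eq. (11): e_{i,t} = w^T tanh(phi(s_i) + psi(h_t) + theta(f_{i,t}))
    e = np.array([
        w @ np.tanh(W_s @ s_i + W_h @ h_t + W_f @ np.array([f_t]))
        for h_t, f_t in zip(h, f_i)
    ])

    # Eq. (7): alignment vector a_i via softmax over the time axis
    a_i = softmax(e)

    # Eq. (5): context vector as a weighted sum of encoder outputs
    c_i = a_i @ h                                      # shape (d_h,)
    return a_i, c_i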
In speech recognition, the size of the encoder's output sequence is much larger than that of its corresponding output token sequence. Furthermore, the left and the right speech contexts may affect the current phoneme to be pronounced, owing to the coarticulation effect as shown in Fig. 2. Therefore, a single context vector may not be sufficient to capture the relevant information in detail, because the single context vector represents the entire relevant information for each output token as a weighted sum of the long encoded sequence. A method to mitigate this problem is to use two separate context vectors computed by two attenders: one for the left context and the other for the right context,[15] as shown in Fig. 3. A sample spectrogram of a waveform. The left and the right portions of a phoneme are affected by the previous and the next speech sounds, respectively. A sequence-to-sequence model with a double-attention mechanism. It uses two attenders: one for the left context and the other for the right context. The first context vector ci1 focusing on the left part of a phoneme is obtained using the attention vector ai1 which is computed from si, h1:T, and fi1 as follows: $$f_i^1=F^1\otimes\;a_{i-1}^1,$$ (12) $$e_{i,t}^1=w^T\tanh(\phi^1(s_i)+\psi^1(h_t)+\theta^1(f_{i,t}^1)),$$ (13) $$a_{i,t}^1=\exp(e_{i,t}^1)/\sum_{t'}^{}\exp(e_{i,t'}^1),$$ (14) $$c_i^1=\sum_t^{}a_{i,t}^1h_t.$$ (15) Similarly, the second attention vector ai2 and the second context vector ci2 are computed as follows to focus on the right part of the phoneme: $$f_i^2=F^2\otimes\;a_i^1,$$ (16) $$e_{i,t}^2=v^T\tanh(\phi^2(c_i^1)+\psi^2(h_t)+\theta^2(f_{i,t}^2)),$$ (17) $$a_{i,t}^2=\exp(e_{i,t}^2)/\sum_{t'}^{}\exp(e_{i,t'}^2),$$ (18) $$c_i^2=\sum_t^{}a_{i,t}^2h_t.$$ (19) It is worth noting that fi2 utilizes ai1 instead of ai-11, and ci1 is used instead of si in computing ei,t2. Hence, the second attention vector ai2 is expected to attend to different parts of the input sequence in contrast to the first attention vector ai1, because it considers the first context vector ci1 and its alignment information ai1. As the multiplicative score function generally exhibits better performance than the additive score function, further improvements can be achieved by modifying Eqs. (13) and (17) to use the multiplicative score functions that utilize the location-aware feature as follows: $$e_{i,t}^1=\phi^1(s_i)\bullet\;\psi^1(h_t)+w^T\tanh(\theta^1(f_{i,t}^1)),$$ (20) $$e_{i,t}^2=\phi^2(c_i^1)\bullet\;\psi^2(h_t)+v^T\tanh(\theta^2(f_{i,t}^2)).$$ (21) Finally, the generator is modified to use both ci1 and ci2 as well as si to compute the posterior probability as follows: $$P(y_i \mid x_{1:T},y_{1:i-1})=\mathrm{Generate}(s_i,c_i^1,c_i^2).$$ (22) The multi-head attention method[10] is similar to the proposed double-attention method in that it avoids relying on a single averaged context vector. In the multi-head attention method, the input is linearly projected into several subspaces, and the attention is computed in each subspace differently to form multiple attentions. However, it does not control which positions are attended to, nor the order of the multiple attentions. The proposed double-attention mechanism is designed to specifically attend to different parts with an order, i.e., the left and the right parts of a phoneme, to cope with the coarticulation effect. The first context vector is computed using the information available at the previous time (ai-11 and si), while the second context vector is computed using the information available at the current time (ai1 and ci1). Therefore, the two context vectors are expected to attend to different parts of the phoneme.
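Continuing the sketch above, the double-attention step of Eqs. (12)-(19) can be outlined as follows. Again, this is an illustrative NumPy sketch with simplified linear projections, not the authors' code; the dictionaries holding the per-attender parameters are a hypothetical convenience. The point it mirrors is that the second attender is driven by ai1 and ci1 (current-step information) rather than ai-11 and si.

import numpy as np

def softmax(e):
    e = e - e.max()
    return np.exp(e) / np.exp(e).sum()

def attend(query, h, a_prev, W_q, W_h, W_f, w, F):
    """Location-aware additive attention; returns (alignment, context)."""
    f = np.convolve(a_prev, F, mode="same")                          # Eq. (12)/(16)
    e = np.array([w @ np.tanh(W_q @ query + W_h @ h_t + W_f @ np.array([f_t]))
                  for h_t, f_t in zip(h, f)])                        # Eq. (13)/(17)
    a = softmax(e)                                                   # Eq. (14)/(18)
    return a, a @ h                                                  # Eq. (15)/(19)

def double_attention_step(s_i, h, a1_prev, left, right):
    """left/right are dicts with keys W_q, W_h, W_f, w, F for each attender."""
    # First attender: left part of the phoneme, driven by a_{i-1}^1 and s_i.
    a1, c1 = attend(s_i, h, a1_prev, **left)
    # Second attender: right part, driven by a_i^1 and c_i^1 instead.
    a2, c2 = attend(c1, h, a1, **right)
    # The generator of Eq. (22) would then consume (s_i, c1, c2).
    return a1, c1, a2, c2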
To evaluate the effectiveness of the proposed method, speech recognition experiments were conducted using the TIMIT corpus. 40-dimensional Mel-scaled log filter banks with their first and second order temporal derivatives were used as the input feature vectors. The decoder was a single-layer unidirectional LSTM network, whereas the encoder was a deep architecture consisting of seven CNN layers followed by a fully connected layer and three bidirectional LSTM layers as shown in Fig. 4. The first convolutional layer used 256 filters, and the rest of the convolutional layers used 128 filters. The CNN layers used residual connections and batch normalization.[16] The fully connected layer had 1,024 nodes. Each bidirectional LSTM layer had 256 nodes for each direction. To train the model, the Adam algorithm with a learning rate of 10^-3, a batch size of 32, and gradient clipping of 1 was used. A dropout with a probability of 0.5 was used across every neural network layer. The structure of the encoder network used in the experiments. Fig. 5 shows example attention vectors for the conventional single-attention mechanism and the proposed double-attention mechanism. As shown in the figure, the area of the single-attention focus (solid lines) is split into two regions (dotted lines for the left context and dashed lines for the right context) using the proposed double-attention mechanism. Alignment examples of four phonemes "oy", "l", "q" (glottal stop), and "ao" in the middle of words "broil or". Solid lines (ai) represent the alignment vector produced by the conventional single-attention mechanism. Dotted lines (ai1) and dashed lines (ai2) represent the left and the right alignment vectors produced by the proposed double-attention mechanism. Table 1 shows the Phone Error Rate (PER) of the speech recognition systems using various attention mechanisms of the sequence-to-sequence models. As shown in the table, the proposed double-attention mechanism with the multiplicative score function reduces the PER by 4 % relatively compared to the baseline system, which uses the conventional single-attention mechanism. When the attention mechanism is replaced with the multi-head attention method,[10] the PER rises to 17.3 %.
Performances of various attention mechanisms and score functions for the TIMIT phone recognition task.
Attention          Score function          PER (%)
Single-attention   Eq. (11)                16.8
Double-attention   Eqs. (13) and (17)      16.5
We herein proposed a double-attention mechanism for the sequence-to-sequence deep neural networks to handle a large input focus with changing characteristics. Furthermore, the multiplicative score function with the location-aware feature was proposed to better utilize the left and the right contexts of the input. The experimental results of the speech recognition task on the TIMIT corpus verified that the proposed double-attention mechanism indeed attended to the left and the right contexts of the input focus and reduced the speech recognition error rates. Conventional speech recognition systems with Hidden Markov Models (HMM) typically use three states for a phone model. This corresponds to a triple-attention mechanism in the sequence-to-sequence deep neural networks, which will be our future research direction.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2017R1E1A1 A01078157). Also, it was partly supported by the MSIT (Ministry of Science and ICT) under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and IITP grant funded by the Korean government (MSIT) (No. 2018-0-00269, A research on safe and convenient big data processing methods). I. Sutskever, O. Vinyals, and Q. Le, "Sequence to sequence learning with neural networks," Proc. Int. Conf. NIPS. 3104-3112 (2014). D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv:1409.0473 (2014). K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, "Show, attend and tell: neural image caption generation with visual attention," Proc. ICML. 2048-2057 (2015). S. Watanabe, T. Hori, S. Kim, J. Hershey, and T. Hayashi, "Hybrid CTC/attention architecture for end- to-end speech recognition," IEEE J. Selected Topics in Signal Processing, 11, 1240-1253 (2017). 10.1109/JSTSP.2017.2763455 H. Soltau, H. Liao, and H. Sak, "Neural speech recognizer: acoustic-to-word LSTM model for large vocabulary speech recognition," Proc. Interspeech, 3707- 3711 (2017). 10.21437/Interspeech.2017-1566 K. Audhkhasi, B. Kingsbury, B. Ramabhadran, G. Saon, and M. Picheny, "Building competitive direct acoustics-to-word models for English conversational speech recognition," Proc. IEEE ICASSP. 4759-4763 (2018). 10.1109/ICASSP.2018.8461935 C. Chiu, T. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani, "State-of-the-art speech recognition with sequence-to-sequence models," Proc. IEEE ICASSP. 4774-4778 (2018). J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, "Attention-based models for speech recognition," Proc. Int. Conf. NIPS. 577-585 (2015). W. Chan, N. Jaitly, Q. Le, and O. Vinyals, "Listen, attend and spell: a neural network for large vocabulary conversational speech recognition," Proc. IEEE ICASSP. 4960-4964 (2016). A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaizer, and I. Polosukhin, "Attention is all you need," Proc. Int. Conf. NIPS. 5998-6008 (2017). S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, 9, 1735-1780 (1997). 10.1162/neco.1997.9.8.17359377276 K. Greff, R. Srivastava, J. Koutnik, B. Steunebrink, and J. Schmidhuber, "LSTM: a search space odyssey," IEEE Trans. on Neural Networks and Learning Systems, 28, 2222-2232 (2017). 10.1109/TNNLS.2016.258292427411231 Y. LeCun and Y. Bengio, "Convolutional networks for images, speech, and time-series," in Handbook of Brain Theory and Neural Networks, edited by M. A. Arbib (MIT Press, 1995). O. Abdel-Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, "Convolutional neural networks for speech recognition," IEEE/ACM Trans. on Audio, Speech, and Language Processing, 22, 1533-1545 (2014). 10.1109/TASLP.2014.2339736 D. Lim, Improving seq2seq by revising attention mechanism for speech recognition, (Dissertation, Korea University, 2018). Y. Zhang, W. Chan, and N. Jaitly, "Very deep convolutional networks for end-to-end speech recognition," Proc. IEEE ICASSP. 4845-4849 (2017). 
10.1109/ICASSP.2017.7953077, PMC 5568090
CommonCrawl
A note on higher regularity boundary Harnack inequality Full characterization of optimal transport plans for concave costs December 2015, 35(12): 6133-6153. doi: 10.3934/dcds.2015.35.6133 Complexity and regularity of maximal energy domains for the wave equation with fixed initial data Yannick Privat 1, , Emmanuel Trélat 2, and Enrique Zuazua 3, CNRS, Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France Université Pierre et Marie Curie (Univ. Paris 6) and Institut Universitaire de France and Team GECO Inria Saclay, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris BCAM - Basque Center for Applied Mathematics, Mazarredo, 14, E-48009 Bilbao-Basque Country Received September 2013 Revised January 2014 Published May 2015 We consider the homogeneous wave equation on a bounded open connected subset $\Omega$ of $\mathbb{R}^n$. Some initial data being specified, we consider the problem of determining a measurable subset $\omega$ of $\Omega$ maximizing the $L^2$-norm of the restriction of the corresponding solution to $\omega$ over a time interval $[0,T]$, over all possible subsets of $\Omega$ having a certain prescribed measure. We prove that this problem always has at least one solution and that, if the initial data satisfy some analyticity assumptions, then the optimal set is unique and moreover has a finite number of connected components. In contrast, we construct smooth but not analytic initial conditions for which the optimal set is of Cantor type and in particular has an infinite number of connected components. Keywords: Wave equation, Fourier series, optimal domain, Cantor set, calculus of variations.. Mathematics Subject Classification: 93B07, 49K20, 49Q1. Citation: Yannick Privat, Emmanuel Trélat, Enrique Zuazua. Complexity and regularity of maximal energy domains for the wave equation with fixed initial data. Discrete & Continuous Dynamical Systems - A, 2015, 35 (12) : 6133-6153. doi: 10.3934/dcds.2015.35.6133 R. A. Adams, Sobolev Spaces,, Pure and Applied Mathematics, (1975). Google Scholar C. Bardos, G. Lebeau and J. Rauch, Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary,, SIAM J. Control Optim., 30 (1992), 1024. doi: 10.1137/0330055. Google Scholar N. Burq and P. Gérard, Condition nécessaire et suffisante pour la contrôlabilité exacte des ondes (French) [A necessary and sufficient condition for the exact controllability of the wave equation],, C. R. Acad. Sci. Paris Sér. I Math., 325 (1997), 749. doi: 10.1016/S0764-4442(97)80053-5. Google Scholar R. M. Hardt, Stratification of real analytic mappings and images,, Invent. Math., 28 (1975), 193. doi: 10.1007/BF01436073. Google Scholar P. Hébrard and A. Henrot, A spillover phenomenon in the optimal location of actuators,, SIAM J. Control Optim., 44 (2005), 349. doi: 10.1137/S0363012903436247. Google Scholar A. Henrot, Extremum Problems for Eigenvalues of Elliptic Operators,, Frontiers in Mathematics, (2006). Google Scholar A. Henrot and M. Pierre, Variation et Optimisation de Formes (French) [Shape Variation and Optimization] Une Analyse Géométrique [A Geometric Analysis],, Math. & Appl., (2005). doi: 10.1007/3-540-37689-5. Google Scholar H. Hironaka, Subanalytic sets,, in Number Theory, (1973), 453. Google Scholar B. Kawohl, Rearrangements and Convexity of Level Sets in PDE,, Lecture Notes in Math., (1150). Google Scholar C. S. Kubrusly and H. 
Malebranche, Sensors and controllers location in distributed systems - a survey,, Automatica, 21 (1985), 117. doi: 10.1016/0005-1098(85)90107-4. Google Scholar S. Kumar and J. H. Seinfeld, Optimal location of measurements for distributed parameter estimation,, IEEE Trans. Autom. Contr., 23 (1978), 690. doi: 10.1109/TAC.1978.1101803. Google Scholar J.-L. Lions, Contrôlabilité Exacte, Perturbations et Stabilisation de Systèmes Distribués, Tome 1,, Recherches en Mathématiques Appliquées [Research in Applied Mathematics], (1988). Google Scholar K. Morris, Linear-quadratic optimal actuator location,, IEEE Trans. Automat. Control, 56 (2011), 113. doi: 10.1109/TAC.2010.2052151. Google Scholar A. Münch, Optimal location of the support of the control for the 1-D wave equation: numerical investigations,, Comput. Optim. Appl., 42 (2009), 443. doi: 10.1007/s10589-007-9133-x. Google Scholar E. Nelson, Analytic vectors,, Ann. Math., 70 (1959), 572. doi: 10.2307/1970331. Google Scholar F. Periago, Optimal shape and position of the support for the internal exact control of a string,, Syst. Cont. Letters, 58 (2009), 136. doi: 10.1016/j.sysconle.2008.08.007. Google Scholar Y. Privat, E. Trélat and E. Zuazua, Optimal observation of the one-dimensional wave equation,, J. Fourier Anal. Appl., 19 (2013), 514. doi: 10.1007/s00041-013-9267-4. Google Scholar Y. Privat, E. Trélat and E. Zuazua, Optimal observability of the multi-dimensional wave and Schrödinger equations in quantum ergodic domains,, to appear in J. Europ. Math. Soc. (JEMS), (2013). Google Scholar Y. Privat, E. Trélat and E. Zuazua, Optimal location of controllers for the one-dimensional wave equation,, Ann. Inst. H. Poincaré Anal. Non Linéaire, 30 (2013), 1097. doi: 10.1016/j.anihpc.2012.11.005. Google Scholar Y. Privat, E. Trélat and E. Zuazua, Optimal shape and location of sensors for parabolic equations with random initial data,, Arch. Ration. Mech. Anal., 216 (2015), 921. doi: 10.1007/s00205-014-0823-0. Google Scholar J.-M. Rakotoson, Réarrangement Relatif,, Math. & Appl. (Berlin) [Mathematics & Applications], (2008). doi: 10.1007/978-3-540-69118-1. Google Scholar M. Tucsnak and G. Weiss, Observation and Control for Operator Semigroups,, Birkhäuser Advanced Texts: Basler Lehrbücher, (2009). doi: 10.1007/978-3-7643-8994-9. Google Scholar D. Ucinski and M. Patan, Sensor network design fo the estimation of spatially distributed processes,, Int. J. Appl. Math. Comput. Sci., 20 (2010), 459. doi: 10.2478/v10006-010-0034-2. Google Scholar Bernard Dacorogna, Giovanni Pisante, Ana Margarida Ribeiro. On non quasiconvex problems of the calculus of variations. Discrete & Continuous Dynamical Systems - A, 2005, 13 (4) : 961-983. doi: 10.3934/dcds.2005.13.961 Daniel Faraco, Jan Kristensen. Compactness versus regularity in the calculus of variations. Discrete & Continuous Dynamical Systems - B, 2012, 17 (2) : 473-485. doi: 10.3934/dcdsb.2012.17.473 Hans Josef Pesch. Carathéodory's royal road of the calculus of variations: Missed exits to the maximum principle of optimal control theory. Numerical Algebra, Control & Optimization, 2013, 3 (1) : 161-173. doi: 10.3934/naco.2013.3.161 G. Gentile, V. Mastropietro. Convergence of Lindstedt series for the non linear wave equation. Communications on Pure & Applied Analysis, 2004, 3 (3) : 509-514. doi: 10.3934/cpaa.2004.3.509 Ivar Ekeland. From Frank Ramsey to René Thom: A classical problem in the calculus of variations leading to an implicit differential equation. 
Discrete & Continuous Dynamical Systems - A, 2010, 28 (3) : 1101-1119. doi: 10.3934/dcds.2010.28.1101 James W. Cannon, Mark H. Meilstrup, Andreas Zastrow. The period set of a map from the Cantor set to itself. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 2667-2679. doi: 10.3934/dcds.2013.33.2667 Hakima Bessaih, Yalchin Efendiev, Florin Maris. Homogenization of the evolution Stokes equation in a perforated domain with a stochastic Fourier boundary condition. Networks & Heterogeneous Media, 2015, 10 (2) : 343-367. doi: 10.3934/nhm.2015.10.343 Felix Sadyrbaev. Nonlinear boundary value problems of the calculus of variations. Conference Publications, 2003, 2003 (Special) : 760-770. doi: 10.3934/proc.2003.2003.760 Kim Dang Phung. Boundary stabilization for the wave equation in a bounded cylindrical domain. Discrete & Continuous Dynamical Systems - A, 2008, 20 (4) : 1057-1093. doi: 10.3934/dcds.2008.20.1057 Kazuhiro Ishige, Michinori Ishiwata. Global solutions for a semilinear heat equation in the exterior domain of a compact set. Discrete & Continuous Dynamical Systems - A, 2012, 32 (3) : 847-865. doi: 10.3934/dcds.2012.32.847 Agnieszka B. Malinowska, Delfim F. M. Torres. Euler-Lagrange equations for composition functionals in calculus of variations on time scales. Discrete & Continuous Dynamical Systems - A, 2011, 29 (2) : 577-593. doi: 10.3934/dcds.2011.29.577 Delfim F. M. Torres. Proper extensions of Noether's symmetry theorem for nonsmooth extremals of the calculus of variations. Communications on Pure & Applied Analysis, 2004, 3 (3) : 491-500. doi: 10.3934/cpaa.2004.3.491 Nuno R. O. Bastos, Rui A. C. Ferreira, Delfim F. M. Torres. Necessary optimality conditions for fractional difference problems of the calculus of variations. Discrete & Continuous Dynamical Systems - A, 2011, 29 (2) : 417-437. doi: 10.3934/dcds.2011.29.417 Michel Potier-Ferry, Foudil Mohri, Fan Xu, Noureddine Damil, Bouazza Braikat, Khadija Mhada, Heng Hu, Qun Huang, Saeid Nezamabadi. Cellular instabilities analyzed by multi-scale Fourier series: A review. Discrete & Continuous Dynamical Systems - S, 2016, 9 (2) : 585-597. doi: 10.3934/dcdss.2016013 Moez Daoulatli. Energy decay rates for solutions of the wave equation with linear damping in exterior domain. Evolution Equations & Control Theory, 2016, 5 (1) : 37-59. doi: 10.3934/eect.2016.5.37 Laurent Bourgeois, Dmitry Ponomarev, Jérémi Dardé. An inverse obstacle problem for the wave equation in a finite time domain. Inverse Problems & Imaging, 2019, 13 (2) : 377-400. doi: 10.3934/ipi.2019019 Yannick Privat, Emmanuel Trélat. Optimal design of sensors for a damped wave equation. Conference Publications, 2015, 2015 (special) : 936-944. doi: 10.3934/proc.2015.0936 Nikos Katzourakis. Nonuniqueness in vector-valued calculus of variations in $L^\infty$ and some Linear elliptic systems. Communications on Pure & Applied Analysis, 2015, 14 (1) : 313-327. doi: 10.3934/cpaa.2015.14.313 Gisella Croce, Nikos Katzourakis, Giovanni Pisante. $\mathcal{D}$-solutions to the system of vectorial Calculus of Variations in $L^∞$ via the singular value problem. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6165-6181. doi: 10.3934/dcds.2017266 Ioan Bucataru, Matias F. Dahl. Semi-basic 1-forms and Helmholtz conditions for the inverse problem of the calculus of variations. Journal of Geometric Mechanics, 2009, 1 (2) : 159-180. doi: 10.3934/jgm.2009.1.159 Yannick Privat Emmanuel Trélat Enrique Zuazua
CommonCrawl
Probiotic Yoghurt Made from Milk of Ewes Fed a Diet Supplemented with Spirulina platensis or Fish Oil Ahmed B. Shazly1, Mostafa S. A. Khattab ORCID: orcid.org/0000-0002-4688-45801, Mohamed T. Fouad1, Ahmed M. Abd El Tawab1, Eltaher M. Saudi2 & Mahmoud Abd El-Aziz1 Annals of Microbiology volume 72, Article number: 29 (2022) Yoghurt is a widely consumed dairy product around the world. It has healing properties and characteristics that are important for human health. Our goal was to see how using ewes' milk fed Spirulina platensis (SP) or fish oil (FO)-supplemented diets affected the chemical, physical, and nutritional properties of yoghurt, as well as the activity and survival of starter and probiotic bacteria during storage. The collected milk from each ewe group was preheated to 65 °C and homogenized in a laboratory homogenizer, then heated to 90 °C for 5 min, cooled to 42 °C, and divided into two equal portions. The first portion was inoculated with 2.0% mixed starter culture (Lactobacillus bulgaricus and Streptococcus thermophilus, 1:1), whereas the second was inoculated with 2% mixed starter culture and 1% Bifidobacterium longum as probiotic bacteria. SP yoghurt had the highest levels of short chain-FA, medium chain-FA, mostly C10:0, and long chain-FA, namely C16:0, C18:2 and the lowest levels of C18:0 and C18:1, followed by FO yoghurt. The addition of SP or FO to ewes' diets resulted in yoghurt with higher viable counts of L. bulgaricus and S. thermophilus, which were still >10^7 cfu/g at the end of storage, as well as a higher level of acetaldehyde content (P<0.05) as a flavor compound, than the control (C) yoghurt. The viscosity of SP yoghurt was higher than that of FO and C yoghurt; the difference was not significant. The addition of B. longum, a probiotic bacterium, to all yoghurt samples improved antioxidant activities, particularly against ABTS• radicals, but reduced SP yoghurt viscosity. When B. longum was added, acetaldehyde content increased from 39.91, 90.47, and 129.31 μmol/100g in C, FO, and SP yoghurts to 46.67, 135.55, and 144.1 μmol/100g in probiotic C, FO, and SP yoghurts, respectively. There was no significant difference in sensory qualities among all the yoghurt samples during all storage periods. Supplementing the ewes' diets with Spirulina platensis or fish oil can change the fatty acid composition of the resulting yoghurt. The starter culture's activity, flavor compounds, and some chemical, physical, and antioxidant properties of milk produced from these diets can all be improved, particularly in yoghurt treated with probiotic bacteria (B. longum). There has been a surge in interest and funding for research into innovative functional foods in recent years. As a result, global consumer demand for natural and healthful foods, such as milk and dairy products, is constantly expanding. Modification of animal diets with bioactive feed additives such as algae and fish oil (Madhusudan et al. 2011; Abo El-Nor and Khattab 2012; Khattab et al. 2022) or microalgae is one technique for creating such foods (Christaki et al. 2012; Hussein et al. 2020). Such products are enriched with a low saturated fatty acid (SFA) content and high-quality polyunsaturated fatty acids (PUFA), which humans and animals cannot synthesize and which can protect against diseases including cardiovascular disease, diabetes, atherosclerosis, skin diseases, and arthritis (Gouveia et al. 2008; Christaki et al. 2012).
Incorporating marine supplements or plant oils rich in 18:2n-6 into a ruminant's diet is an effective nutritional strategy for altering milk fat and increasing polyunsaturated fatty acids (PUFA) such as cis-9,trans-11 conjugated linoleic acid (CLA) and 22:6n-3 in bovine milk (Toral et al. 2012). Spirulina, Arthrospira platensis, is an edible blue-green microalga that is high in proteins (up to 70%) and contains a variety of minerals and vitamins, including vitamins B12, B1, B2, B, and vitamin E, as well as carbohydrate contents (15-20%) composed of glucose and glycogen, lipids (up to 7%), and essential fatty acids such as linoleic acid and γ-linolenic acid (Wells et al. 2017). Spirulina is a valuable source of natural antioxidants, such as phycocyanin pigments, carotenoids, and phenolic compounds (Soni Arora and Rana 2017; Wells et al. 2017). The findings suggest that Spirulina could be used as a feed source for various animal species. It has been associated with improved animal development and nutritional product quality (Bichi et al. 2013). Cows given dietary Spirulina exhibited a 21% increase in milk production, according to Kulpys et al. (2009). Furthermore, Simkus et al. (2008) found that cows receiving Spirulina had higher milk fat (by 17.6 to 25.0%), milk protein (up by 9.7%), and lactose (up by 11.7%) than cows receiving no Spirulina. Because fish oil is one of the greatest dietary sources of long-chain PUFA, such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), its insufficient consumption might have a significant impact on health (Freitas and Campos 2019). Omega-3 PUFAs are known to protect against cardiovascular disease, reduce the incidence of some types of cancer and autoimmune illnesses, and are necessary for the healthy development of brain and retina functions (Sokoła-Wysoczańska et al. 2018). Ruminant products such as milk, dairy products, and beef have been criticized for their high levels of SFA and low levels of ω-3 PUFA (Kliem and Shingfield 2016; Rodriguez-Herrera et al. 2017). Fish oil and microalgae, as the main dietary sources of ω-3 PUFA, have been used in most research that has attempted to improve the health characteristics of milk and dairy products (Givens and Gibbs 2006). Probiotics are live microorganisms that benefit the host when given in sufficient amounts. The health benefits of functional foods can be further boosted by supplementing them with specific lactic acid bacteria, which are the most commonly utilized probiotic cultures in dairy products and beverages (El-Kholy et al. 2016; El-Shenawy et al. 2019). Probiotic bacteria provide a variety of health benefits; the most important requirements are viability, i.e., the ability to survive in the gastrointestinal tract in sufficient numbers, the capacity to improve the microbial balance of the digestive system, and the ability to survive in a variety of environments. Yoghurt fortified with probiotics has been shown to have various health benefits, including anti-diabetic effects in diabetic rats (Abbas et al. 2017; Terpou et al. 2019). Furthermore, adding probiotics to fermented dairy products improves their functional qualities, so that they can be considered "functional foods" (El-Shenawy et al. 2016; Arowolo and He 2018). Hence, probiotics can be combined with milk modified in this way and used as dietary supplements in dairy products to achieve high efficiency in increasing the growth of probiotics.
The mixture of modified milk and probiotics can be considered highly nutritious and cost-effective owing to the high count of probiotic bacteria it supports (Markowiak and Śliżewska 2017). As a result, it is reasonable to predict that a diet supplemented with fish oil and Spirulina alga, both of which are high in unsaturated fatty acids and secondary fatty components, will improve the fatty acid profile and also the physical and nutritional qualities of milk. Also, incorporating these supplements into yoghurt milk promotes the activity of probiotic bacteria and improves the nutritional and physicochemical qualities of the product, making it a functional dairy food. Therefore, this work aimed to determine how using milk from ewes fed Spirulina platensis- or fish oil-supplemented diets affected the chemical, physical, and health aspects of yoghurt, as well as the activity and survival of starter cultures (L. bulgaricus and S. thermophilus) and probiotic bacteria (B. longum) during storage. The Spirulina alga, Spirulina platensis, was obtained from the Marine Toxins Laboratory, National Research Centre, Egypt. Fish oil was purchased from the local market, Cairo, Egypt. Starter culture (Lactobacillus delbrueckii ssp. bulgaricus and Streptococcus salivarius ssp. thermophilus) and probiotic bacteria (Bifidobacterium longum ssp. longum 35624 ATCC) were obtained from stock cultures of Dairy Microbiology Lab., National Research Centre, Cairo, Egypt. Sigma-Aldrich, USA, provided 2,2-diphenyl-1-picrylhydrazyl (DPPH•) and 2,2'-azinobis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•). All chemicals and reagents were analytical grade and came from various sources. Experimental design–animals and feeding In a completely randomized design, thirty lactating Barki ewes weighing 40±2.3 kg were randomly assigned to three experimental groups 7 days post-parturition and followed for 60 days. Ewes were separately housed in pens (1.5 m2/ewe) with free access to water and the experimental diet (ad libitum), formulated according to NRC recommendations to meet their nutritional needs. The diets consisted of concentrate feed mixture, clover hay, and bean straw at a ratio of 60:10:30, respectively. Table 1 shows the chemical composition of the ingredients and diet. The experimental diets included a control diet with no additions (C), a control diet supplemented with 10 mL fish oil/kg DM, and a control diet supplemented with 5 g Spirulina platensis/kg DM. All diets were given twice daily at 07:00 and 17:00 hr, and milk samples were taken from each animal in the morning and evening. Each animal group's sample was a composite of the morning and evening yields mixed in a fixed proportion. Table 1 The chemical composition of the ingredients and experimental diet (g/kg DM) Yoghurt making The collected milk from each ewe group was preheated to 65 °C and homogenized in a laboratory homogenizer (Euro Turrax T20b, IKA Labortechnik, 27,000 min-1). The homogenized milk was heated to 90 °C for 5 min, cooled to 42 °C, and divided into two equal portions. The first portion was inoculated with 2.0% mixed starter culture (L. bulgaricus and S. thermophilus, 1:1), whereas the second was inoculated with 2% mixed starter culture and 1% B. longum as probiotic bacteria (2:1). All treatments were poured into 150 mL plastic cups and incubated at 42 °C until homogenous coagulation was achieved (Hassan et al. 2015). The yoghurt samples were stored at 5±2 °C for 15 days. Total solids, fat, total nitrogen, and ash content of yoghurt were determined using AOAC (2007).
The protein content was obtained by multiplying the percentage of TN by 6.38. A laboratory pH meter with a glass electrode was used to measure the changes in pH in the yoghurt samples during storage (HANNA Instruments, Portugal). A water soluble nitrogen/total nitrogen ratio (WSN/TN ratio) was used to determine the level of proteolysis in the yoghurt samples during storage, according to Innocente (1997). The concentration of acetaldehyde in the yoghurt samples was measured using a spectrophotometer (Shimadzu, 240-UV–Vis, Japan) as described by Lees and Jago (1970). Fatty acids profile The fatty acid methyl ester of yoghurt samples was prepared according to the method of AOAC (2007). Fatty acid methyl esters were injected into an HP 6890 series GC apparatus fitted with a DB-23 column (60 m x 0.32 mm x 25 μm). The carrier gas was N2 at a flow rate of 2.2 mL/min, with a splitting ratio of 1:50. The injector temperature was 250 °C and that of the Flame Ionization Detector (FID) was 300 °C. The oven temperature was programmed from 50 °C to 210 °C and then held at 210 °C for 25 min. Peaks were identified by comparing the retention times obtained with those of standard methyl esters. Antioxidant activities of yoghurt Antiradical activities of yoghurt samples were estimated in yoghurt supernatant using stable DPPH radicals (DPPH•) and stable ABTS radicals (ABTS•) assays according to Brand-Williams et al. (1995) and Re et al. (1999), respectively. Briefly, 20 g of yoghurt were centrifuged at 4000 g for 5 min before being filtered through Whatman filter paper No. 1. Then, 100 μL of yoghurt supernatant was added to 3.9 mL of DPPH working solution (25 mg DPPH/L methanol) or ABTS working solution (7 mM ABTS solution with 2.45 mM K2S2O8). After incubation for 30 min in the dark at room temperature (25±2 °C), the degree of decolorization was measured in a spectrophotometer (Shimadzu spectrophotometer, UV–Vis. 1201, Japan) at 517 nm for the DPPH• and 700 nm for the ABTS• radical-scavenging activity assays. Control solutions, DPPH and ABTS solutions without yoghurt supernatant, were prepared in the same manner as the assay mixture. The following formula was used to determine both ABTS• and DPPH• scavenging activities: $$\mathrm{Yoghurt}\ \mathrm{antiradical}\ \mathrm{activity}\ \left(\%\right)=\left[\left({\mathrm{A}}_0-{\mathrm{A}}_1\right)/{\mathrm{A}}_0\right]\times 100$$ A0 is the absorbance of the control (DPPH or ABTS solution), and A1 is the absorbance of the sample. Bacteriological analysis Yoghurt samples were diluted and subsequently plated in duplicate onto selective media. The MRS agar medium was used to enumerate L. bulgaricus (Mohamed et al. 2017), while S. thermophilus was enumerated on M17 agar (Hussein et al. 2017). B. longum was determined according to Blanchette et al. (1996) using modified MRS agar supplemented with 0.05% L-cysteine-HCl. All plates were incubated anaerobically at 37 °C for 4 days. Water holding capacity (WHC) The WHC of yoghurt samples was determined by the centrifuge method according to Akalın et al. (2012) with some modifications. Twenty grams of yoghurt (10 °C) were centrifuged for 10 min at 5,000×g. The pellet was weighed after the separated whey was removed.
The following equation was used to determine the WHC: $$\mathrm{WHC}\ \left(\%\right)=\left[1-\left(\mathrm{Pellet}\ \mathrm{weight}/\mathrm{Initial}\ \mathrm{sample}\ \mathrm{weight}\right)\right]\times 100$$ Structure viscosity Before viscosity measurement, the yoghurt sample was gently swirled 5 times in a clockwise direction with a plastic spoon. A Brookfield digital viscometer (Model DV-II, Canada) fitted with spindle 4 was used to determine structure viscosity at 7 °C. The yoghurt sample was subjected to selected rotational speeds ranging from 3.0 to 30 rpm for an upward curve. Structure viscosity was expressed in pascal-seconds (Pa·s). Sensory evaluation According to Mohebbi and Ghoddusi (2008), experienced judges selected from staff members of the Dairy Department, National Research Center, Egypt, evaluated the yoghurt samples for sensory attributes (appearance, body & texture, flavor) on a 9-point hedonic scale (9 excellent, 1 unacceptable). Yoghurt samples were presented in three-digit coded white plastic containers and tasted 15 min after leaving the refrigerator. Statistical analysis was performed using the GLM procedure with SAS (2004) software. Analysis of variance (ANOVA) and Duncan's multiple comparison procedure were used to compare the means. A probability of P<0.05 was used to establish statistical significance. Composition of yoghurt The chemical composition of probiotic and non-probiotic yoghurts made from milk produced by ewes fed a diet supplemented with Spirulina platensis (SP) or fish oil (FO) was not significantly different (Table 2). Total solids, proteins, fat, lactose, and ash content ranged from 14.22 to 14.68, 4.28 to 4.42, 3.93 to 4.15, 4.86 to 5.20, and 0.95 to 0.98%, respectively. This means that feeding on SP or FO had no significant effect (P>0.05) on the percentages of the milk constituents. Similar observations were found in milk produced from cows fed a diet supplemented with flaxseed and soybean oil (Hassan et al. 2020) or Spirulina platensis microalgae (Lamminen et al. 2019). Table 2 Chemical composition of probiotic and non-probiotic yoghurts made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil Fatty acids composition of yoghurt Table 3 shows the fatty acid content of probiotic and non-probiotic yoghurt made from milk of ewes on a diet supplemented with Spirulina platensis or fish oil. In general, Spirulina platensis (SP) and fish oil (FO) supplements had a significant effect on the fatty acid composition of milk. Short-chain FA (SCFA), medium-chain FA (MCFA), and saturated FA (SFA) content were the highest in SP yoghurt, while unsaturated FA (USFA) and long-chain FA content (LCFA) were the lowest. The SCFA, MCFA, and SFA increased by 77.67, 31.03, and 14.63%, while USFA and LCFA decreased by 19.66 and 9.73% compared to control (C) yoghurt, respectively. In particular, C4:0 (46.67%), C10:0 (49.49%), C12:0 (43.53%), C16:0 (26.76%), C18:2 (38.23%), and CLA (35.00%) showed the greatest increase, whereas C18:0 (-44.04%) and C18:1 (-27.57%) showed the greatest decline. These findings disagree with Christaki et al. (2012) and Kouřimská et al. (2014) in cow's milk and goat's milk, respectively. Excessive algal supplementation may negatively impact feed intake, ruminal metabolism, milk production, and lipid composition (Altomonte et al. 2018). When ewes were fed fish oil, a comparable but less pronounced change in the fatty acids of FO yoghurt was found.
C18:0 and C18:1 content decreased, whereas C16:0, C18:2, and CLA content increased, with the latter being more pronounced than in SP yoghurts. These findings are consistent with those of Shingfield et al. (2003) in milk fat of cows fed fish oil, but they differ from those in milk fat of Holstein cows fed fish oil or soybean oil (Fatahnia et al. 2008). Except for a modest drop in the level of SCFA (-5.36%), C18:0 (-11.10%) of probiotic SP yoghurt and C18:2 (-9.23%) of probiotic FO yoghurt when probiotic bacteria (B. longum) were added, no apparent changes in fatty acids content were identified when compared to non-probiotics. Table 3 Fatty acids profile of probiotic and non-probiotic yoghurts made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil Viable counts of starters and probiotic The main cultures in yoghurt are L. bulgaricus and S. thermophilus. Bifidobacteria and other probiotic bacteria cultures can be added to yoghurt. B. longum is a multifunctional, clinically effective probiotic with a long history of safe use in the treatment of gastrointestinal, immunological, and infectious diseases in humans (Wong et al. 2019). Fig 1 shows the viable counts (log10 cfu/g) of yoghurt starters (S. thermophilus and L. bulgaricus) and probiotic bacteria (B. longum) during 15 days of storage at 5±2 °C. The counts of L. bulgaricus and S. thermophilus in both SP and FO yoghurts were greater than in C yoghurt; the difference was significant only in counts of L. bulgaricus on days 1, 3, and 5. A slight increase in the viable counts of starter cultures was found when B. longum was added (P>0.05). This is comparable to Sharma et al. (2014), who found that using probiotic bacterial cultures promotes the growth of favored microorganisms while crowding out potentially harmful bacteria. When compared to probiotic C yoghurt, both probiotic SP and probiotic FO yoghurts had a similar, small rise in B. longum counts (P>0.05). During storage, all yoghurt samples exhibited a slight increase in viable counts (P>0.05) until day 3 and a decline thereafter, with the decrease being significant only at day 15 (P<0.05). However, the viable counts of S. thermophilus, L. bulgaricus and B. longum were still >10^7 cfu/g at the end of storage. A similar trend was observed by Mani-López et al. (2013). Sarvari et al. (2014b) mentioned that the decline in viability of bifidobacteria was gradual and steady during storage. B. longum loses viability during storage due to a large amount of acid, hydrogen peroxide, and possibly bacteriocins produced by L. bulgaricus (Sarvari et al. 2014b). However, B. longum in FO yoghurt was more stable during storage than in SP or C yoghurts. This could be due to the presence of bioactive components with high antioxidant activity (Table 3) in FO yoghurt, which absorb molecular oxygen and thereby prevent B. longum (anaerobic bacteria) from dying. Changes in viable counts of starter cultures and probiotic bacteria of yoghurt made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil during storage at 5±2 °C for 15 days (curves: C, SP, FO, probiotic C, probiotic SP, and probiotic FO yoghurts). Biochemical changes The biochemical changes of probiotic and non-probiotic yoghurt made from milk produced by ewes fed SP or FO during storage at 5±2 °C for 15 days are presented in Table 4. The highest acetaldehyde content was found in SP yoghurt (P<0.05), followed by FO yoghurt (P<0.05) and C yoghurt.
The high acetaldehyde content in SP yoghurt may be due to the high levels of SCFA and MCFA (Table 3), which improve acetaldehyde formation. The acetaldehyde content of yoghurt increased when B. longum was added as a probiotic; the increase was significant only in PFO yoghurt compared to FO yoghurt (P<0.05). Tamime and Robinson (2007) reported that the protein composition of yoghurt and the bacteria culture and ratio in mixed strains influence the formation of aroma compounds such as acetaldehyde. Proteolytic activities of yoghurt starter cultures, for example, increased acetaldehyde formation by producing threonine in goat's milk. Table 4 Biochemical changes of probiotic and non-probiotic yoghurt made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil during storage at 5±2 °C for 15 days The FO yoghurt also had the highest WSN/TN ratio and the lowest pH; however, the difference was insignificant (P>0.05). Increased starter activity in FO yoghurt, reflected in the higher viable counts of starters, could explain the high WSN/TN ratio and low pH (Fig 1). Acetaldehyde content dropped (P<0.05) while the WSN/TN ratio increased throughout the storage period of 15 days; the difference was significant only in acetaldehyde content (P<0.05). Acetaldehyde content appears to be positively associated with pH value; acetaldehyde content decreases as the pH value decreases (El-Shenawy et al. 2019). Similar findings were made in yoghurt containing plant polysaccharides (Hussein et al., 2011) and yoghurt containing plant mucilage (Hassan et al. 2015). However, all yoghurt samples showed a significant decrease (P<0.05) in pH at day 5, after which the decrease was not significant (P>0.05). The changes in pH during storage were also similar in synbiotic low-fat yoghurts (Ramchandran & Shah 2010). According to Mani-López et al. (2013), the pH of yoghurts and fermented milks decreased during storage due to residual microbial activity (post-acidification). As shown in Table 5, probiotic yoghurt had stronger antioxidant activity against DPPH• and ABTS• radicals than non-probiotic yoghurt; the difference was significant only for ABTS• radicals (P<0.05). Sah et al. (2014) focused on this strain's ability to produce bioactive peptides with antioxidant activities when co-cultured with yoghurt starters for probiotic yoghurt production. In vitro, hydroxyl radicals and superoxide anion were scavenged by the probiotic supernatant, intact cells, and intracellular cell-free extracts of Bifidobacterium (Shen et al. 2011). Probiotics can improve the antioxidant system and minimize free radical formation, according to Wang et al. (2017). However, the antioxidant activity against the DPPH• radicals was lower than that against the ABTS• radicals. This difference could be related to DPPH's solubility, which is limited to organic solutions. Furthermore, DPPH acts as an oxidizing substrate and a reaction indicator, causing considerable interference (Sah et al. 2014). Table 5 Radical scavenging activities of probiotic and non-probiotic yoghurt made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil during storage at 5±2 °C for 15 days FO yoghurt had greater antioxidant activity against ABTS• radicals (P<0.05) than SP and C yoghurts. However, the DPPH• radical scavenging activity of C, SP, and FO yoghurts did not differ significantly (p>0.05). This suggests that feeding ewes on fish oil can improve the antioxidant activity of the resulting yoghurt, especially against the ABTS• radicals (P<0.05).
Such an effect has been found in soft cheese made from milk of lactating animals fed on flaxseed or soybean oils (Hassan et al. 2020). Throughout the storage period, the antioxidant activity of all yoghurt samples increased at the same rate, with the difference being significant only at day 15 against ABTS• radicals. Protein hydrolysis may be correlated with the increase in antioxidant activity during storage. A high positive correlation (r2 = 0.75) was found between the antioxidant activity of cheese and the degree of proteolysis (Hassan et al. 2020). Shazly et al. (2019) reported that the high antioxidant capacity of casein is related to the degree of hydrolysis. Similarly, Sah et al. (2014) discovered a strong positive correlation between the degree of hydrolysis and ABTS• radical scavenging activity (Table 5). As shown in Fig 2, the SP and FO yoghurts had better water-holding capacity (WHC) than the C yoghurt, but the differences were not significant (P>0.05). This suggests that consuming SP or FO did not influence the yoghurts' WHC. When B. longum was added, all yoghurt samples showed a small increase (P>0.05) in the WHC at the same rate. This is because B. longum produces large amounts of capsular polysaccharides and exopolysaccharides (Tahoun et al. 2017). The production of exopolysaccharides by bifidobacteria is one of the hypothesized mechanisms for their probiotic activities (Yan et al. 2017). Fig 2 Water-holding capacity of probiotic and non-probiotic yoghurt made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil during storage at 5±2 °C for 15 days (C, SP, FO, probiotic C, probiotic SP, and probiotic FO yoghurts). Also, SP yoghurt had a higher viscosity than both FO and C yoghurts at low rpm (up to 6 rpm) (P<0.05), while C yoghurt had a higher viscosity than FO yoghurt; beyond that, the difference was not significant (Fig 3). According to Abd El-Aziz et al. (2015), a high amount of USFA in an emulsion produces small fat droplets, which improve the rheological properties; consequently, the high viscosity of SP yoghurt is due to its high level of SCFA rather than its USFA content. On day 15, all yoghurt samples had a higher viscosity than on day 1 (P<0.05). The linkages between the gel particles are stronger, and their numbers greater, at a lower temperature throughout the storage time; the particles are more swollen and attached over a larger area, increasing viscosity (Walstra et al. 1999). Other researchers have reported such an effect (Doleyres et al. 2005; Hussein et al. 2017). When B. longum was added, the viscosity of SP yoghurt decreased significantly (P<0.05) compared to the non-probiotic yoghurt, while the viscosity of both C and FO yoghurts was not affected. Fig 3 Structure viscosity of probiotic and non-probiotic yoghurt made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil during storage at 5±2 °C for 15 days (C, SP, FO, probiotic C, probiotic SP, and probiotic FO yoghurts). Sensory properties Table 6 displays the sensory scores of ewes' yoghurts during the storage period at 5±2 °C for 15 days.
There was no significant change (P>0.05) in sensory attributes such as appearance, flavor, and body & texture among all ewes' yoghurt samples during the various storage periods. This finding suggests that the addition of probiotic bacteria (B. longum) or the type of animal feed (Spirulina platensis or fish oil) did not affect the yoghurt's sensory characteristics. Soft cheese prepared from the milk of nursing animals fed a diet enriched with soybean or flaxseed oils showed a similar tendency (Hassan et al. 2020). No significant changes (P>0.05) in the sensory characteristic scores of any yoghurt sample were seen until day 10 of storage, after which a significant decrease (P<0.05) was noted. Table 6 Sensory evaluation of probiotic and non-probiotic yoghurt made from ewes' milk fed a diet supplemented with Spirulina platensis or fish oil during storage at 5±2 °C for 15 days It can be concluded that supplementing the ewes' diets with Spirulina platensis or fish oil can change the fatty acid composition of the resulting yoghurt. Short- and medium-chain fatty acids and some unsaturated fatty acids such as linolenic acid and CLA were all increased, but oleic acid was decreased. The starter culture activity, flavor compounds, and some chemical, physical, and antioxidant properties of the yoghurt made from the milk produced on these diets can be improved, particularly in yoghurt treated with probiotic bacteria (B. longum). Generally, Spirulina platensis, rather than fish oil, had a stronger impact on these modifications in ewes' milk. The probiotic bacteria (B. longum), on the other hand, remained more stable in FO yoghurt during storage. All data generated or analyzed during this study are included in this published article. Abbas HM, Shahein NM, Abd-Rabou NS, Fouad MT, Zaky WM (2017) Probiotic-fermented milk supplemented with rice bran oil. Int J Dairy Sci 12:204–2010 Abd El-Aziz M, Farrag AF, Seleet FL, El-Shiekh MM, Sayed AF (2015) Effect of oils with different fatty acids profile on the physical properties of formulated emulsion. Res J Pharm, Biol Chem Sci 6:1048–1058 Abo El-Nor SAH, Khattab MSA (2012) Enrichment of milk with conjugated linoleic acid by supplementing diets with fish and sunflower oil. Pak J Biol Sci 15:690–693 Akalın AS, Unal G, Dinkci N, Hayaloglu AA (2012) Microstructural, textural, and sensory characteristics of probiotic yoghurts fortified with sodium calcium caseinate or whey protein concentrate. J Dairy Sci 95:3617–3628 Altomonte I, Salari F, Licitra R, Martini M (2018) Use of microalgae in ruminant nutrition and implications on milk quality – A review. Livest Sci 214:25–35 AOAC (2007) Official Methods of Analysis. 18th Edition, Association of Official Analytical Chemists, Gaithersburg Arowolo MA, He J (2018) Use of probiotics and botanical extracts to improve ruminant production in the tropics: A review. Animal Nutrition 4:241–249 Bichi E, Frutos P, Toral PG, Keisler D, Hervás G, Loor JJ (2013) Dietary marine algae and its influence on tissue gene network expression during milk fat depression in dairy ewes. Anim Feed Sci Technol 186:36–44 Blanchette L, Roy D, Belanger G, Gauthier FS (1996) Production of cottage cheese using dressing fermented by Bifidobacteria. J Dairy Sci 79:8–11 Brand-Williams W, Cuvelier ME, Berset C (1995) Use of a free radical method to evaluate antioxidant activity. LWT – Food Sci Technol 28:25–30 Christaki E, Karatzia M, Bonos E, Florou-Paneri P, Karatzias C (2012) Effect of dietary Spirulina platensis on milk fatty acid profile of dairy cows.
Asian J Anim Vet Adv 7:597–604 Doleyres Y, Schaub L, Lacroix C (2005) Comparison of the functionality of exopolysaccharides produced in situ or added as bioingredients on yogurt properties. J Dairy Sci 88:4146–4156 El-Kholy W, Abd El-Khalek AB, Mohamed SHS, Fouad MT, Kassem JM (2016) Tallaga cheese as a new functional dairy product. Am J Food Technol 11:182–192 El-Shenawy M, Abd El-Aziz M, Elkholy W, Fouad MT (2016) Probiotic ice cream made with tiger-nut (Cyperusesculentus) extract. Am J Food Technol 11:204–212 El-Shenawy M, Fouad MT, Hassan LK, Seleet FL, Abd El-Aziz M (2019) A probiotic beverage made from tiger-nut extract and milk permeate. Pak J Biol Sci 22:180–187 Fatahnia F, Nikkhah A, Zamiri MJ, Kahrizi D (2008) Effect of dietary fish oil and soybean oil on milk production and composition of holstein cows in early lactation. Asian-Aust J Anim Sci 21:386–391 Freitas R, Campos MM (2019) protective effects of omega-3 fatty acids in cancer-related complications. Nutrients 11:945 Givens DI, Gibbs RA (2006) Very long chain n-3 polyunsaturated fatty acids in the food chain in the UK and the potential of animal derived foods to increase intake. Nutr Bull 31:104–110 Gouveia L, Batista AP, Sousa I, Raymundo A, Bandarra NM (2008) Microalgae in novel food products. In: Papadopoulos KN (ed) Food Chemistry Research Developments. Nova Science Publishers, Inc, Hauppauge Hassan LK, Haggag HF, El-Kalyoubi MH, Abd EL-Aziz M, El-Sayed MM, Sayed AF (2015) Physico-chemical properties of yoghurt containing cress seed mucilage or guar gum. Ann Agricultural Sci 60:21–28 Hassan LK, Shazly AB, Kholif AM, Sayed AF, Abd El-Aziz M (2020) Effect of flaxseed (Linum usitatissimum) and soybean (Glycine max) oils in Egyptian lactating buffalo and cow diets on the milk and soft cheese quality. Acta Sci Anim Sci 42:47200 Hussein AMS, Fouad MT, Abd El-Aziz M, Ashour VE, Mostafa EAM (2017) Evaluation of physico-chemical properties of some date varieties and yoghurt made with its syrups. J Biol Sci 17:213–221 Hussein HA, Fouad MT, Abd El-Razik KA, Abo El-Maaty AM, D'Ambrosio C, Scaloni AA, Gomaa M (2020) Study on prevalence and bacterial etiology of mastitis, and effects of subclinical mastitis and stage of lactation on SCC in dairy goats in Egypt. Trop Anim Health Prod 52:3091–3097 Hussein MM, Hassan FAM, Abdel Daym HH, Salama A, Enab AK, Abd El-Galil AA (2011) Utilization of some plant polysaccharides for improving yoghurt consistency. Ann Agric Sci 56:97–103 Innocente N (1997) Free amino acids and water-soluble nitrogen as ripening indices in Montasio cheese. Lait 77:359–369 Khattab M, Tawab AEL, Ahmed M, Saudi E, Awad A, Saad S (2022) Fatty acids profile and ∆9 desaturase index of milk from barki ewes fed diets supplemented with Spirulina platensis or fish oil. Egypt J Chem 65:231–237 Kliem KE, Shingfield KJ (2016) Manipulation of milk fatty acid composition in lactating cows: Opportunities and challenges. Eur J Lipid Sci Technol 118:1661–1683 Kouřimská L, Vondráčková E, Fantová M, Nový P, Nohejlová L, Michnová K (2014) effect of feeding with algae on fatty acid profile of goat's milk. Sci Agric Bohem 45:162–169 Kulpys J, Paulauskas E, Pilipavicius V, Stankevicius R (2009) Influence of cyanobacteria Arthrospira (Spirulina) platensis biomass additive towards the body condition of lactation cows and biochemical milk indexes. 
Agron Res 7:823–835 Lamminen M, Halmemies-Beauchet-Filleau A, Kokkonen T, Vanhatalo A, Jaakkola S (2019) The effect of partial substitution of rapeseed meal and faba beans by Spirulina platensis microalgae on milk production, nitrogen utilization, and amino acid metabolism of lactating dairy cows. J Dairy Sci 102:7102–7117 Less GJ, Jago GR (1970) Formation of acetaldehyde from -deox-D.S.-phosphate in lactic acid bacteria. J Dairy Res 43:139–144 Madhusudan C, Manoj S, Rahul K, Rishi CM (2011) Seaweeds: A diet with nutritional, medicinal and industrial value. Res J Med Plant 5:153–157 Mani-López E, Palou E, López-Malo A (2013) Probiotic viability and storage stability of yogurts and fermented milks prepared with several mixtures of lactic acid bacteria. J Dairy Sci 97:2578–2590 Markowiak P, Śliżewska K (2017) Effects of probiotics, prebiotics, and symbiotic on human health. Nutrients 9:1021 Mohamed DA, Hassanein MH, El-Messery TM, Fouad MT, El-Said MM, Fouda K, Abdel-Razek AG (2017) Amelioration of type 2 diabetes mellitus by yoghurt supplemented with probiotics and olive pomace extract. J Biol Sci 17:320–333 Mohebbi M, Ghoddusi HB (2008) Rheological and sensory evaluation of yoghurts containing probiotic cultures. J Agric Sci Technol 10:147–156 Ramchandran L, Shah NP (2010) Characterization of functional, biochemical and textural properties of symbiotic low-fat yogurts during refrigerated storage. LWT - Food Sci Technol 43:819–827 Re R, Pellegrini N, Proteggente A (1999) Antioxidant activity applying an improved ABTS radical cation decolorization assay. Free Radic Biol Med 26:1231–1237 Rodriguez-Herrera M, Khatri Y, Marsh SP, Posri W, Sinclair LA (2017) Feeding microalgae at a high level to finishing heifers increases the long-chain n-3 fatty acid composition of beef with only small effects on the sensory quality. Int J Food Sci Technol 53:1405–1413 Sah BNP, Vasiljevic T, McKechnie S, Donkor ON (2014) Effect of probiotics on antioxidant and antimutagenic activities of crude peptide extract from yogurt. Food Chem 156:264–270 Sarvari F, Mortazavian AM, Fazeli MR (2014) Biochemical characteristics and viability of probiotic and yogurt bacteria in yogurt during the fermentation and refrigerated storage. Appl Food Biotechnol 1:55–61 SAS (2004) Statistical Analysis System, User's Guide. Statistical. Version 7th ed. SAS Inst Inc Cary N.C. USA. Sharma R, Bhaskar B, Sanodiya BS, Thakur G, Jaiswal P, Yadav N, Sharma A, Bisen PS (2014) Probiotic efficacy and potential of Streptococcus thermophilus modulating human health: a synoptic review. J Pharm Biol Sci 9:2319–7676 Shazly AB, He Z, Abd El-Aziz M, Zeng M, Zhang S, Qin F, Chen J (2019) Release of antioxidant peptides from buffalo and bovine caseins: Influence of proteases on antioxidant capacities. Food Chem 274:261–267 Shen Q, Shang N, Li P (2011) In vitro and in vivo antioxidant activity of Bifidobacterium animalis 01 isolated from centenarians. Curr Microbiol 62:1097–1103 Shingfield KJ, Ahvenjärvi S, Toivonen V, Ärölä A, Nurmela KVV, Huhtanen P, Griinari JM (2003) Effect of dietary fish oil on biohydrogenation of fatty acids and milk fatty acid content in cows. Anim Sci 77:165–179 Simkus A, Oberauskas V, Zelvyte R, Monkeviciene I, Laugalis J, Sederevicius A, Simkiene A, Juozaitiene V, Juozaitis A, Bartkeviciute Z (2008) The effect of the microalga Spirulina platensis on milk production and some microbiological and biochemical parameters in dairy cows. 
Zhivotnov'dni Nauki 45:42–49 Sokoła-Wysoczańska E, Wysoczański T, Wagner J, Czyż K, Bodkowski R, Lochyński S, Patkowska-Sokoła B (2018) Polyunsaturated fatty acids and their potential therapeutic role in cardiovascular system disorders-A review. Nutrients 10:1561 Soni Arora R, Rana R (2017) Spirulina – from growth to nutritional product: a review. Trends Food Sci Technol 69:157–171 Tahoun A, Masutani H, El-Sharkawy H, Gillespie T, Honda RP, Kuwata K, Inagaki M, Yabe M, Nomura I, Suzuki T (2017) Capsular polysaccharide inhibits adhesion of Bifidobacterium longum 105-A to enterocyte-like Caco-2 cells and phagocytosis by macrophages. Gut Pathogens 9:1–27 Tamime AY, Robinson RK (2007) In Book, Yoghurt, Chapter: Traditional and recent developments in yoghurt production and related products, pp 348–467 Terpou A, Papadaki A, Lappa IK, Kachrimanidou V, Bosnea LA, Kopsahelis N (2019) Probiotics in food systems: significance and emerging strategies towards improved viability and delivery of enhanced beneficial value. Nutrients 11:1591 Toral P, Belenguer A, Shingfield K, Hervás G, Toivonen V, Frutos P (2012) Fatty acid composition and bacterial community changes in the rumen fluid of lactating sheep fed sunflower oil plus incremental levels of marine algae. J dairy sci 95:794–806 Walstra P, Geurts TJ, Noomen A, Jellema A, Van Boekel MAJS (1999) Dairy technology: principles of milk properties and processes. Marcel Dekker, Inc, New York Wang Y, Yanping W, Yuanyuan W, Han X, Xiaoqiang M, Dongyou Y, Yibing W, Weifen L (2017) Antioxidant properties of probiotic bacteria. Nutrients 9:521 Wells ML, Potin P, Craigi JS, Raven JA, Merchant SS, Helliwell KE, Smith AG, Camire ME, Brawley SH (2017) Algae as nutritional and fun food sources: revisiting our understanding. J Appl Phycol 29:949–982 Wong CB, Odamaki T, Xiao J (2019) Beneficial effects of Bifidobacterium longum subsp. longum BB536 on human health: Modulation of gut microbiome as the principal action. J Function Foods 54:506–519 Yan S, Zhao G, Liu X, Zhao J, Zhangac H, Chen W (2017) Production of exopolysaccharide by Bifidobacterium longum isolated from elderly and infant feces and analysis of priming glycosyltransferase genes. RSC Adv 7:31736–31744 Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). There was no form of financing for this research. Dairy Department, Food Industries and Nutrition Research Institute, National Research Centre, Giza, Egypt Ahmed B. Shazly, Mostafa S. A. Khattab, Mohamed T. Fouad, Ahmed M. Abd El Tawab & Mahmoud Abd El-Aziz Animal Production Department, Faculty of Agriculture, Al-Azhar University, Cairo, Egypt Eltaher M. Saudi Ahmed B. Shazly Mostafa S. A. Khattab Mohamed T. Fouad Ahmed M. Abd El Tawab Mahmoud Abd El-Aziz All authors contributed substantially towards the paper. The author(s) read and approved the final manuscript. Correspondence to Mostafa S. A. Khattab. All of the authors consent to the publication of this manuscript in Annals of Microbiology. The authors declare no competing interests in publishing this manuscript. Shazly, A.B., Khattab, M.S.A., Fouad, M.T. et al. Probiotic Yoghurt Made from Milk of Ewes Fed a Diet Supplemented with Spirulina platensis or Fish Oil. Ann Microbiol 72, 29 (2022). https://doi.org/10.1186/s13213-022-01686-4 Ewes' milk Spirulina platensis Antioxidant activities
\begin{document} \title[Profinite many-sorted algebras and ultraproducts] {When are profinite many-sorted algebras retracts of ultraproducts of finite many-sorted algebras? } \author[Climent]{J. Climent Vidal} \address{Universitat de Val\`{e}ncia\\ Departament de L\`{o}gica i Filosofia de la Ci\`{e}ncia\\ Av. Blasco Ib\'{a}\~{n}ez, 30-$7^{\mathrm{a}}$, 46010 Val\`{e}ncia, Spain} \email{[email protected]} \author[Cosme]{E. Cosme Ll\'{o}pez} \address{Universitat de Val\`{e}ncia\\ Departament d'\`{A}lgebra\\ Dr. Moliner, 50, 46100 Burjassot, Val\`{e}ncia, Spain} \email{[email protected]} \subjclass[2000]{Primary: 03C20, 08A68; Secondary: 18A30.} \keywords{Support of a many-sorted set, family of many-sorted algebras with constant support, profinite, retract, projective limit, inductive limit, ultraproduct.} \date{\today} \begin{abstract} For a set of sorts $S$ and an $S$-sorted signature $\Sigma$ we prove that a profinite $\Sigma$-algebra, i.e., a projective limit of a projective system of finite $\Sigma$-algebras, is a retract of an ultraproduct of finite $\Sigma$-algebras if the family consisting of the finite $\Sigma$-algebras underlying the projective system is with constant support. In addition, we provide a categorial rendering of the above result. Specifically, after obtaining a category where the objects are the pairs formed by a nonempty upward directed preordered set and by an ultrafilter containing the filter of the final sections of it, we show that there exists a functor from the just mentioned category whose object mapping assigns to an object a natural transformation which is a retraction. \end{abstract} \maketitle \section{Introduction.} In their article ``Profinite structures are retracts of ultraproducts of finite structures''~\cite{mm07}, H. L. Mariano and F. Miraglia proved, for a single-sorted first order language with equality $\mathcal{L}$, that the profinite $\mathcal{L}$-algebraic systems, i.e., the projective limits of finite $\mathcal{L}$-algebraic systems, are retracts of certain ultraproducts of finite $\mathcal{L}$-algebraic systems. It is true that, broadly speaking, almost all fundamental statements from single-sorted algebra (or single-sorted equational logic), when suitably translated, are also valid for many-sorted algebra (or many-sorted equational logic). However, there are statements from single-sorted algebra which can not be generalized to many-sorted algebras without some type of qualification, which is ultimately grounded on the fact that many-sorted equational logic is not an inessential variation of single-sorted equational logic. (Some examples of theorems about single-sorted algebras which do not go through in their original form to the setting of many-sorted algebras can be found e.g., in~\cite{cs04}--\cite{cs16}, \cite{gm85}, \cite{m76}, and \cite{mrs13}.) In this connection, the aforementioned result of Mariano and Miraglia is no exception and in order to be adapted to many-sorted algebras, it will also require some adjustment. Accordingly, for an arbitrary set of sorts $S$ and an arbitrary $S$-sorted signature $\Sigma$, the main objective of this article is to establish a sufficient (and natural) condition for a profinite $\Sigma$-algebra to be a retract of an ultraproduct of finite $\Sigma$-algebras (let us notice that after having done that, the extension of this result to the case of a many-sorted first order language with equality $\mathcal{L}$ and $\mathcal{L}$-algebraic systems is straightforward). 
We point out that the required adjustment is, ultimately, founded on the concept of support mapping for the set of sorts $S$ and on the notion of family of $\Sigma$-algebras with constant support (details will be found in the penultimate section of this article). We next proceed to succinctly summarize the contents of the subsequent sections of this article. The reader will find a more detailed explanation at the beginning of the succeeding sections. In Section 2, for the convenience of the reader, we recall, mostly without proofs, for a set of sorts $S$ and an $S$-sorted signature $\Sigma$, those notions and constructions of the theories of $S$-sorted sets and of $\Sigma$-algebras which are indispensable to define in the following section those others which will allow us to achieve the above mentioned main results, thus making, so we hope, our exposition self-contained. After having stated all of these auxiliary results we provide in Section 3 a solution to the problem posed in the title of this article. Concretely, we prove, for an $S$-sorted signature $\Sigma$, the following proposition: { \begin{quotation} \noindent If $\mathbf{A}$ is a profinite $\Sigma$-algebra, i.e., a projective limit of a projective system $\boldsymbol{\mathcal{A}}$ of finite $\Sigma$-algebras relative to a nonempty upward directed preordered set $\mathbf{I} = (I,\leq)$, and $(\mathbf{A}^{i})_{i\in I}$, the underlying family of finite $\Sigma$-algebras of $\boldsymbol{\mathcal{A}}$, is with constant support, then, for a suitable ultrafilter $\mathcal{F}$ on $I$, we have that $\mathbf{A}$ is a retract of $\prod_{i\in I}\mathbf{A}^{i}/\equiv^{\mathcal{F}}$, the ultraproduct of $(\mathbf{A}^{i})_{i\in I}$ relative to $\mathcal{F}$. \end{quotation} } Finally, in Section 4, after obtaining, by means of the Grothendieck construction for a covariant functor from a convenient category of nonempty upward directed preordered sets to the category of sets, a category in which the objects are the pairs formed by a nonempty upward directed preordered set and by an ultrafilter containing the filter of the final sections of it, we provide a categorial rendering of the aforementioned many-sorted version of Mariano-Miraglia theorem. Specifically, we show that there exists a functor from the just mentioned category whose object mapping assigns to an object a natural transformation, between two functors from a suitable category of projective systems of $\Sigma$-algebras to the category of $\Sigma$-algebras, which is a retraction. Our underlying set theory is $\mathbf{ZFSk}$, Zermelo-Fraenkel-Skolem set theory (also known as $\mathbf{ZFC}$, i.e., Zermelo-Fraenkel set theory with the axiom of choice) plus the existence of a Grothendieck universe $\ensuremath{\boldsymbol{\mathcal{U}}}$, fixed once and for all (see~\cite{sM98}, pp.~21--24). We recall that the elements of $\ensuremath{\boldsymbol{\mathcal{U}}}$ are called $\ensuremath{\boldsymbol{\mathcal{U}}}$-small sets and the subsets of $\ensuremath{\boldsymbol{\mathcal{U}}}$ are called $\ensuremath{\boldsymbol{\mathcal{U}}}$-large sets or classes. Moreover, from now on $\mathbf{Set}$ stands for the category of sets, i.e., the category whose set of objects is $\ensuremath{\boldsymbol{\mathcal{U}}}$ and whose set of morphisms is $\bigcup_{A,B\in \boldsymbol{\mathcal{U}}}\mathrm{Hom}(A,B)$, the set of all mappings between $\ensuremath{\boldsymbol{\mathcal{U}}}$-small sets. 
In all that follows we use standard concepts and constructions from category theory, see~\cite{hs73}, \cite{sM98}, and \cite{em76}, and from many-sorted algebra, see~\cite{m76} and \cite{w92}. More specific notational and conceptual conventions will be included and explained in the following section. \section{Preliminaries.} In this section we introduce those basic notions and constructions which we shall need to obtain the aforementioned main result of this article. Specifically, for a set (of sorts) $S$ in $\ensuremath{\boldsymbol{\mathcal{U}}}$, we begin by recalling the concept of free monoid on $S$, which will be fundamental for defining the concept of $S$-sorted signature. Following this we define the concepts of $S$-sorted set, $S$-sorted mapping from an $S$-sorted set to another, and the corresponding category. Moreover, we define the subset relation between $S$-sorted sets, the notion of finiteness as applied to $S$-sorted sets, the concept of support of an $S$-sorted set, and its properties, the notion of $S$-sorted equivalence on an $S$-sorted set, the quotient $S$-sorted set of an $S$-sorted set by an $S$-sorted equivalence on it, the usual set-theoretic operations on the $S$-sorted sets, and the notion of family of $S$-sorted sets with constant support. Afterwards, for a set (of sorts) $S$ in $\ensuremath{\boldsymbol{\mathcal{U}}}$, we define the notion of $S$-sorted signature. Next, for an $S$-sorted signature $\Sigma$, we define the concepts of $\Sigma$-algebra, $\Sigma$-homomorphism (or, to abbreviate, homomorphism) from a $\Sigma$-algebra to another, and the corresponding category. Moreover, we define the notions of support of a $\Sigma$-algebra, of finite $\Sigma$-algebra, of family of $\Sigma$-algebras with constant support, and of subalgebra of a $\Sigma$-algebra, the construction of the product of a family of $\Sigma$-algebras, the concept of congruence on a $\Sigma$-algebra, and the construction of the quotient $\Sigma$-algebra of a $\Sigma$-algebra by a congruence on it. From now on we make the following assumption: $S$ is a set of sorts in $\ensuremath{\boldsymbol{\mathcal{U}}}$, fixed once and for all. \begin{definition} The \emph{free monoid on} $S$, denoted by $\mathbf{S}^{\star}$, is $(S^{\star},\curlywedge,\lambda)$, where $S^{\star}$, the set of all \emph{words on} $S$, is $\bigcup_{n\in\mathbb{N}}\mathrm{Hom}(n,S)$, $\curlywedge$, the \emph{concatenation} of words on $S$, is the binary operation on $S^{\star}$ which sends a pair of words $(w,v)$ on $S$ to the mapping $w\curlywedge v$ from $\lvert w \rvert+\lvert v \rvert$ to $S$, where $\lvert w \rvert$ and $\lvert v \rvert$ are the lengths ($\equiv$ domains) of the mappings $w$ and $v$, respectively, defined as follows: $w\curlywedge v(i) = w_{i}$, if $0\leq i < \lvert w \rvert$; $w\curlywedge v(i) = v_{i-\lvert w \rvert}$, if $\lvert w \rvert\leq i < \lvert w \rvert+\lvert v \rvert$, and $\lambda$, the \emph{empty word on} $S$, is the unique mapping from $0 = \varnothing$ to $S$. \end{definition} \begin{definition} An $S$-\emph{sorted set} is a function $A = (A_{s})_{s\in S}$ from $S$ to $\ensuremath{\boldsymbol{\mathcal{U}}}$. If $A$ and $B$ are $S$-sorted sets, an $S$-\emph{sorted mapping from} $A$ \emph{to} $B$ is an $S$-indexed family $f = (f_{s})_{s\in S}$, where, for every $s$ in $S$, $f_{s}$ is a mapping from $A_{s}$ to $B_{s}$. Thus, an $S$-sorted mapping from $A$ to $B$ is an element of $\prod_{s\in S}\mathrm{Hom}(A_{s}, B_{s})$. 
We denote by $\mathrm{Hom}(A,B)$ the set of all $S$-sorted mappings from $A$ to $B$. From now on, $\mathbf{Set}^{S}$ stands for the category of $S$-sorted sets and $S$-sorted mappings. \end{definition} \begin{definition} Let $I$ be a set in $\ensuremath{\boldsymbol{\mathcal{U}}}$ and $(A^{i})_{i\in I}$ an $I$-indexed family of $S$-sorted sets. Then the \emph{product} of $(A^{i})_{i\in I}$, denoted by $\prod_{i\in I}A^{i}$, is the $S$-sorted set defined, for every $s\in S$, as $\left(\prod\nolimits_{i\in I}A^{i}\right)_{s} = \prod\nolimits_{i\in I}A^{i}_{s}$. Moreover, for every $i\in I$, the \emph{i-th canonical projection}, $\mathrm{pr}^{I,i} = (\mathrm{pr}^{I,i}_{s})_{s\in S}$, abbreviated to $\mathrm{pr}^{i} = (\mathrm{pr}^{i}_{s})_{s\in S}$ when this is unlikely to cause confusion, is the $S$-sorted mapping from $\prod_{i\in I}A^{i}$ to $A^{i}$ which, for every $s\in S$, sends $(a_{i})_{i\in I}$ in $\prod_{i\in I}A^{i}_{s}$ to $a_{i}$ in $A^{i}_{s}$. On the other hand, if $B$ is an $S$-sorted set and $(f^{i})_{i\in I}$ an $I$-indexed family of $S$-sorted mappings, where, for every $i\in I$, $f^{i}$ is an $S$-sorted mapping from $B$ to $A^{i}$, then we denote by $\left<f^{i}\right>_{i\in I}$ the unique $S$-sorted mapping $f$ from $B$ to $\prod_{i\in I}A^{i}$ such that, for every $i\in I$, $\mathrm{pr}^{i}\circ f = f^{i}$. The remaining set-theoretic operations on $S$-sorted sets are defined in a similar way, i.e., componentwise. \end{definition} \begin{remark} For a set $I$ in $\ensuremath{\boldsymbol{\mathcal{U}}}$ and an $I$-indexed family of $S$-sorted sets $(A^{i})_{i\in I}$, the ordered pair $(\prod_{i\in I}A^{i},(\mathrm{pr}^{i})_{i\in I})$ is a product of $(A^{i})_{i\in I}$ in $\mathbf{Set}^{S}$. \end{remark} \begin{definition} We denote by $1^{S}$ or, to abbreviate, by $1$, the (standard) final $S$-sorted set of $\mathbf{Set}^{S}$, which is $1^{S} = (1)_{s\in S}$, and by $\varnothing^{S}$ the initial $S$-sorted set, which is $\varnothing^{S} = (\varnothing)_{s\in S}$. \end{definition} \begin{definition} If $A$ and $B$ are $S$-sorted sets, then we will say that $A$ is a \emph{subset} of $B$, denoted by $A\subseteq B$, if, for every $s\in S$, $A_{s}\subseteq B_{s}$. \end{definition} \begin{definition} Let $f,g\colon A\usebox{\xymor} B$ be two $S$-sorted mappings. Then the \emph{equalizer} of $f$ and $g$, denoted by $\mathrm{Eq}(f,g)$, is the subset of $A$ defined, for every $s\in S$, as $\mathrm{Eq}(f,g)_{s} = \{a\in A_{s}\mid f_{s}(a)=g_{s}(a)\}$. Moreover, $\mathrm{eq}(f,g)$ is the canonical embedding of $\mathrm{Eq}(f,g)$ into $A$. \end{definition} \begin{remark} For a parallel pair $f,g\colon A\usebox{\xymor} B$ of $S$-sorted mappings, the ordered pair $(\mathrm{Eq}(f,g),\mathrm{eq}(f,g))$ is an equalizer of $f$ and $g$ in $\mathbf{Set}^{S}$. \end{remark} \begin{definition} An $S$-sorted set $A$ is \emph{finite} if $\coprod A = \bigcup_{s\in S}(A_{s}\times \{s\})$ is finite. We say that $A$ is a \emph{finite} subset of $B$ if $A$ is finite and $A\subseteq B$. \end{definition} \begin{remark} For an object $A$ of the topos $\mathbf{Set}^{S}$, are equivalent: (1) $A$ is finite, (2) $A$ is a finitary object of $\mathbf{Set}^{S}$, and (3) $A$ is a strongly finitary object of $\mathbf{Set}^{S}$. In $\mathbf{Set}^{S}$ there is another notion of finiteness: An $S$-sorted set $A$ is $S$-finite if, and only if, for every $s\in S$, $A_{s}$ is finite. However, unless $S$ is finite, this notion of finiteness is not categorial. \end{remark} \begin{definition} Let $A$ be an $S$-sorted set. 
Then the \emph{support of} $A$, denoted by $\mathrm{supp}_{S}(A)$, is the set $\{\,s\in S\mid A_{s}\neq \varnothing\,\}$. \end{definition} \begin{remark} An $S$-sorted set $A$ is finite if, and only if, $\mathrm{supp}_{S}(A)$ is finite and, for every $s\in \mathrm{supp}_{S}(A)$, $A_{s}$ is finite. \end{remark} In the following proposition we gather together only those properties of the mapping $\mathrm{supp}_{S}\colon\ensuremath{\boldsymbol{\mathcal{U}}}^{S}\usebox{\xymor} \mathrm{Sub}(S)$, the support mapping for $S$, which sends an $S$-sorted set $A$ to $\mathrm{supp}_{S}(A)$, which will actually be used afterwards. \begin{proposition}\label{propssupport} Let $A$ and $B$ be two $S$-sorted sets, $I$ a set in $\ensuremath{\boldsymbol{\mathcal{U}}}$, and $(A^{i})_{i\in I}$ an $I$-indexed family of $S$-sorted sets. Then the following properties hold: \begin{enumerate} \item $\mathrm{Hom}(A,B)\neq \varnothing$ if, and only if, $\mathrm{supp}_{S}(A)\subseteq\mathrm{supp}_{S}(B)$. Therefore, if $A\subseteq B$, then $\mathrm{supp}_{S}(A)\subseteq\mathrm{supp}_{S}(B)$. \item If from $A$ to $B$ there exists a surjective $S$-sorted mapping $f$, then we have that $\mathrm{supp}_{S}(A) = \mathrm{supp}_{S}(B)$. \item $\mathrm{supp}_{S}(\prod_{i\in I}A^{i}) = \bigcap\nolimits_{i\in I}\mathrm{supp}_{S}(A^{i})$ (if $I = \varnothing$, we adopt the convention that $\bigcap\nolimits_{i\in I}\mathrm{supp}_{S}(A^{i}) = S$, since $\prod_{i\in \varnothing} A^{i}$ is $1 = (1)_{s\in S}$, the final object of $\mathbf{Set}^{S}$). \end{enumerate} \end{proposition} \begin{remark} The concept of support does not play any significant role in the case of the single-sorted algebras. Nevertheless, it (together with, among others, the notions of uniform algebraic closure operator on an $S$-sorted set, delta of Kronecker, subfinal $S$-sorted set, finite $S$-sorted set, and family of $S$-sorted sets with constant support) has turned to be essential to accomplish some investigations in the field of many-sorted algebras, e.g., those carried out in~\cite{cs04}--\cite{cs16}. \end{remark} In the following definition of the concept of family of $S$-sorted sets with constant support use will be made of the concept of support defined above. \begin{definition} Let $I$ be a set and $(A^{i})_{i\in I}$ an $I$-indexed family of $S$-sorted sets. We say that $(A^{i})_{i\in I}$ is a family of $S$-sorted sets with \emph{constant support} if, for every $i, j\in I$, $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A^{j})$. \end{definition} \begin{definition} An $S$-\emph{sorted equivalence relation on} (or, to abbreviate, an $S$-\emph{sorted equivalence on}) an $S$-sorted set $A$ is an $S$-sorted relation $\Phi$ on $A$, i.e., a subset $\Phi = (\Phi_{s})_{s\in S}$ of the cartesian product $A\times A = (A_{s}\times A_{s})_{s\in S}$ such that, for every $s\in S$, $\Phi_{s}$ is an equivalence relation on $A_{s}$. For an $S$-sorted equivalence relation $\Phi$ on $A$, $A/\Phi$, the $S$-\emph{sorted quotient set of} $A$ \emph{by} $\Phi$, is $(A_{s}/\Phi_{s})_{s\in S}$, and $\mathrm{pr}^{\Phi}\colon A\usebox{\xymor} A/\Phi$, the \emph{canonical projection from} $A$ \emph{to} $A/\Phi$, is the $S$-sorted mapping $(\mathrm{pr}^{\Phi_{s}})_{s\in S}$, where, for every $s\in S$, $\mathrm{pr}^{\Phi_{s}}$ is the canonical projection from $A_{s}$ to $A_{s}/\Phi_{s}$ (which sends $x$ in $A_{s}$ to $\mathrm{pr}^{\Phi_{s}}(x) = [x]_{\Phi_{s}}$, the $\Phi_{s}$-equivalence class of $x$, in $A_{s}/\Phi_{s}$). 
\end{definition} \begin{remark} Let $A$ be an $S$-sorted set and $\Phi\in\mathrm{Eqv}(A)$. Then, by Proposition~\ref{propssupport}, $\mathrm{supp}_{S}(A) = \mathrm{supp}_{S}(A/\Phi)$. \end{remark} We next recall the concept of kernel of an $S$-sorted mapping and the universal property of the $S$-sorted quotient set of an $S$-sorted set by an $S$-sorted equivalence on it \begin{definition} Let $f\colon A\usebox{\xymor} B$ be an $S$-sorted mapping. Then the \emph{kernel} of $f$, denoted by $\mathrm{Ker}(f)$, is the $S$-sorted relation defined, for every $s\in S$, as $\mathrm{Ker}(f)_{s} = \mathrm{Ker}(f_{s})$ (i.e., as the kernel pair of $f_{s}$). \end{definition} \begin{proposition} If $f$ is an $S$-sorted mapping from $A$ to $B$, then we have that $\mathrm{Ker}(f)\in\mathrm{Eqv}(A)$. Moreover, given an $S$-sorted set $A$ and an $S$-sorted equivalence $\Phi$ on $A$, the pair $(\mathrm{pr}^{\Phi},A/\Phi)$ is such that (1) $\mathrm{Ker}(\mathrm{pr}^{\Phi}) = \Phi$, and (2) \emph{(universal property)} for every $S$-sorted mapping $f\colon A\usebox{\xymor} B$, if $\Phi\subseteq \mathrm{Ker}(f)$, then there exists a unique $S$-sorted mapping $\mathrm{p}^{\Phi,\mathrm{Ker}(f)}$ from $A/\Phi$ to $B$ such that $f = \mathrm{p}^{\Phi,\mathrm{Ker}(f)}\circ \mathrm{pr}^{\Phi}$. \end{proposition} Following this we define, for the set of sorts $S$, the category of $S$-sorted signatures. \begin{definition}\label{$S$-sorted signature} An $S$-\emph{sorted signature} is a function $\Sigma$ from $S^{\star}\times S$ to $\ensuremath{\boldsymbol{\mathcal{U}}}$ which sends a pair $(w,s)\in S^{\star}\times S$ to the set $\Sigma_{w,s}$ of the \emph{formal operations} of \emph{arity} $w$, \emph{sort} (or \emph{coarity}) $s$, and \emph{rank} (or \emph{biarity}) $(w,s)$. Sometimes we will write $\sigma\colon w\usebox{\xymor} s$ to indicate that the formal operation $\sigma$ belongs to $\Sigma_{w,s}$. \end{definition} From now on we make the following assumption: $\Sigma$ stands for an $S$-sorted signature, fixed once and for all. We next define the category of $\Sigma$-algebras. \begin{definition} The $S^{\star}\times S$-sorted set of the \emph{finitary operations on} an $S$-sorted set $A$ is $(\mathrm{Hom}(A_{w},A_{s}))_{(w,s)\in S^{\star}\times S}$, where, for every $w\in S^{\star}$, $A_{w} = \prod_{i\in \lvert w\rvert}A_{w_{i}}$, with $\lvert w\rvert$ denoting the length of the word $w$. A \emph{structure of} $\Sigma$-\emph{algebra on} an $S$-sorted set $A$ is a family $(F_{w,s})_{(w,s)\in S^{\star}\times S}$, denoted by $F$, where, for $(w,s)\in S^{\star}\times S$, $F_{w,s}$ is a mapping from $\Sigma_{w,s}$ to $\mathrm{Hom}(A_{w},A_{s})$. For a pair $(w,s)\in S^{\star}\times S$ and a formal operation $\sigma\in \Sigma_{w,s}$, in order to simplify the notation, the operation from $A_{w}$ to $A_{s}$ corresponding to $\sigma$ under $F_{w,s}$ will be written as $F_{\sigma}$ instead of $F_{w,s}(\sigma)$. A $\Sigma$-\emph{algebra} is a pair $(A,F)$, abbreviated to $\mathbf{A}$, where $A$ is an $S$-sorted set and $F$ a structure of $\Sigma$-algebra on $A$. 
A $\Sigma$-\emph{homomorphism} from $\mathbf{A}$ to $\mathbf{B}$, where $\mathbf{B} = (B,G)$, is a triple $(\mathbf{A},f,\mathbf{B})$, abbreviated to $f\colon \mathbf{A}\usebox{\xymor} \mathbf{B}$, where $f$ is an $S$-sorted mapping from $A$ to $B$ such that, for every $(w,s)\in S^{\star}\times S$, every $\sigma\in \Sigma_{w,s}$, and every $(a_{i})_{i\in \lvert w\rvert}\in A_{w}$, we have that $ f_{s}(F_{\sigma}((a_{i})_{i\in \lvert w\rvert})) = G_{\sigma}(f_{w}((a_{i})_{i\in \lvert w\rvert})), $ where $f_{w}$ is the mapping $\prod_{i\in \lvert w\rvert}f_{w_{i}}$ from $A_{w}$ to $B_{w}$ which sends $(a_{i})_{i\in \lvert w\rvert}$ in $A_{w}$ to $(f_{w_{i}}(a_{i}))_{i\in \lvert w\rvert}$ in $B_{w}$. We denote by $\mathbf{Alg}(\Sigma)$ the category of $\Sigma$-algebras and $\Sigma$-homomorphisms (or, to abbreviate, homomorphisms) and by $\mathrm{Alg}(\Sigma)$ the set of objects of $\mathbf{Alg}(\Sigma)$. \end{definition} \begin{definition} Let $\mathbf{A}$ be a $\Sigma$-algebra. Then the \emph{support of} $\mathbf{A}$, denoted by $\mathrm{supp}_{S}(\mathbf{A})$, is $\mathrm{supp}_{S}(A)$, i.e., the support of the underlying $S$-sorted set $A$ of $\mathbf{A}$. \end{definition} \begin{remark} The set $\{\mathrm{supp}_{S}(\mathbf{A})\mid \mathbf{A}\in \mathrm{Alg}(\Sigma)\}$ is a closure system on $S$. \end{remark} \begin{definition} Let $\mathbf{A}$ be a $\Sigma$-algebra. We say that $\mathbf{A}$ is \emph{finite} if $A$, the underlying $S$-sorted set of $\mathbf{A}$, is finite. \end{definition} We next define when a subset $X$ of the underlying $S$-sorted set $A$ of a $\Sigma$-algebra $\mathbf{A}$ is closed under an operation of $\mathbf{A}$, as well as when $X$ is a subalgebra of $\mathbf{A}$. \begin{definition}\label{Subalg} Let $\mathbf{A}$ be a $\Sigma$-algebra and $X\subseteq A$. Let $\sigma$ be such that $\sigma\colon w\usebox{\xymor} s$, i.e., a formal operation in $\Sigma_{w,s}$. We say that $X$ is \emph{closed under the operation} $F_{\sigma}\colon A_{w}\usebox{\xymor} A_{s}$ if, for every $a\in X_{w}$, $F_{\sigma}(a)\in X_{s}$. We say that $X$ is a \emph{subalgebra} of $\mathbf{A}$ if $X$ is closed under the operations of $\mathbf{A}$. We also say, equivalently, that a $\Sigma$-algebra $\mathbf{X}$ is a \emph{subalgebra} of $\mathbf{A}$ if $X\subseteq A$ and the canonical embedding of $X$ into $A$ determines an embedding of $\mathbf{X}$ into $\mathbf{A}$. \end{definition} We now recall the concept of product of a family of $\Sigma$-algebras. \begin{definition} Let $I$ be a set in $\ensuremath{\boldsymbol{\mathcal{U}}}$ and $(\mathbf{A}^{i})_{i\in I}$ an $I$-indexed family of $\Sigma$-algebras, where, for every $i\in I$, $\mathbf{A}^{i} = (A^{i},F^{i})$. The \emph{product} of $(\mathbf{A}^{i})_{i\in I}$, denoted by $\prod_{i\in I}\mathbf{A}^{i}$, is the $\Sigma$-algebra $(\prod_{i\in I}A^{i},F)$ where, for every $\sigma\colon w\usebox{\xymor} s$ in $\Sigma$, $F_{\sigma}$ sends $(a_{\alpha})_{\alpha\in \lvert w\rvert}$ in $(\prod_{i\in I}A^{i})_{w}$ to $(F^{i}_{\sigma}((a_{\alpha}(i))_{\alpha\in \lvert w\rvert}))_{i\in I}$ in $\prod_{i\in I}A^{i}_{s}$. For every $i\in I$, the \emph{$i$-th canonical projection}, $\mathrm{pr}^{i} = (\mathrm{pr}^{i}_{s})_{s\in S}$, is the homomorphism from $\prod_{i\in I}\mathbf{A}^{i}$ to $\mathbf{A}^{i}$ which, for every $s\in S$, sends $(a_{i})_{i\in I}$ in $\prod_{i\in I}A^{i}_{s}$ to $a_{i}$ in $A^{i}_{s}$. 
On the other hand, if $\mathbf{B}$ is a $\Sigma$-algebra and $(f^{i})_{i\in I}$ an $I$-indexed family of homomorphisms, where, for every $i\in I$, $f^{i}$ is a homomorphism from $\mathbf{B}$ to $\mathbf{A}^{i}$, then we denote by $\left<f^{i}\right>_{i\in I}$ the unique homomorphism $f$ from $\mathbf{B}$ to $\prod_{i\in I}\mathbf{A}^{i}$ such that, for every $i\in I$, $\mathrm{pr}^{i}\circ f = f^{i}$. \end{definition} In the following definition of the concept of family of $\Sigma$-algebras with constant support use will be made of the concept of an $I$-indexed family of $S$-sorted sets with constant support. \begin{definition} Let $I$ be a set and $(\mathbf{A}^{i})_{i\in I}$ an $I$-indexed family of $\Sigma$-algebras. We say that $(\mathbf{A}^{i})_{i\in I}$ is a family of $\Sigma$-algebras with \emph{constant support} if $(A^{i})_{i\in I}$, the underlying family of $S$-sorted sets of $(\mathbf{A}^{i})_{i\in I}$, is a family of $S$-sorted sets with constant support. \end{definition} Our next goal is to define the concepts of congruence on a $\Sigma$-algebra and of quotient of a $\Sigma$-algebra by a congruence on it. Moreover, we recall the notion of kernel of a homomorphism between $\Sigma$-algebras and the universal property of the quotient of a $\Sigma$-algebra by a congruence on it. \begin{definition} Let $\mathbf{A}$ be a $\Sigma$-algebra and $\Phi$ an $S$-sorted equivalence on $A$. We say that $\Phi$ is an $S$-\emph{sorted congruence on} (or, to abbreviate, a \emph{congruence on}) $\mathbf{A}$ if, for every $(w,s)\in (S^{\star}-\{\lambda\})\times S$, every $\sigma\colon w\usebox{\xymor} s$, and every $a,b\in A_{w}$, if, for every $i\in \lvert w\rvert$, $(a_{i}, b_{i})\in\Phi_{w_{i}}$, then $(F_{\sigma}(a), F_{\sigma}(b))\in \Phi_{s}$. \end{definition} \begin{definition} Let $\mathbf{A}$ be a $\Sigma$-algebra and $\Phi\in\mathrm{Cgr}(\mathbf{A})$. Then $\mathbf{A}/\Phi$, the \emph{quotient $\Sigma$-algebra} of $\mathbf{A}$ \emph{by} $\Phi$, is the $\Sigma$-algebra $(A/\Phi,F^{\mathbf{A}/\Phi})$, where, for every $\sigma\colon w\usebox{\xymor} s$, the operation $F_{\sigma}^{\mathbf{A}/\Phi}\colon (A/\Phi)_{w}\usebox{\xymor} A_{s}/\Phi_{s}$, also denoted, to simplify, by $F_{\sigma}$, sends $([a_{i}]_{\Phi_{w_{i}}})_{i\in\lvert w\rvert}$ in $(A/\Phi)_{w}$ to $[F_{\sigma}((a_{i})_{i\in \lvert w\rvert})]_{\Phi_{s}}$ in $A_{s}/\Phi_{s}$. And $\mathrm{pr}^{\Phi}\colon \mathbf{A}\usebox{\xymor} \mathbf{A}/\Phi$, the \emph{canonical projection from} $\mathbf{A}$ \emph{to} $\mathbf{A}/\Phi$, is the homomorphism determined by the $S$-sorted mapping $\mathrm{pr}^{\Phi}$ from $A$ to $A/\Phi$. \end{definition} \begin{proposition} If $f$ is a homomorphism from $\mathbf{A}$ to $\mathbf{B}$, then $\mathrm{Ker}(f)\in\mathrm{Cgr}(\mathbf{A})$. Moreover, given a $\Sigma$-algebra $\mathbf{A}$ and a congruence $\Phi$ on $\mathbf{A}$, the pair $(\mathrm{pr}^{\Phi},\mathbf{A}/\Phi)$ is such that (1) $\mathrm{Ker}(\mathrm{pr}^{\Phi}) = \Phi$, and (2) \emph{(universal property)} for every homomorphism $f\colon\mathbf{A}\usebox{\xymor} \mathbf{B}$, if $\Phi\subseteq \mathrm{Ker}(f)$, then there exists a unique homomorphism $\mathrm{p}^{\Phi,\mathrm{Ker}(f)}$ from $\mathbf{A}/\Phi$ to $\mathbf{B}$ such that $f = \mathrm{p}^{\Phi,\mathrm{Ker}(f)}\circ \mathrm{pr}^{\Phi}$. \end{proposition} \begin{proposition} Let $f,g\colon\mathbf{A}\usebox{\xymor} \mathbf{B}$ be two homomorphisms of $\Sigma$-algebras. 
Then the pair $(\mathbf{Eq}(f,g),\mathrm{eq}(f,g))$, with $\mathbf{Eq}(f,g)$ the subalgebra of $\mathbf{A}$ determined by the $S$-sorted set $\mathrm{Eq}(f,g)=(\{a\in A_{s}\mid f_{s}(a)=g_{s}(a)\})_{s\in S}$, and $\mathrm{eq}(f,g)$ the canonical embedding in $\mathbf{A}$, is an equalizer of $f$ and $g$ in $\mathbf{Alg}(\Sigma)$. \end{proposition} We next define the concept of projective system of $\Sigma$-algebras and state the existence of the projective limit of a projective system of $\Sigma$-algebras. But before we start doing all that we recall that every preordered set $\mathbf{I} = (I,\leq)$ has a canonically associated category, also denoted by $\mathbf{I}$, whose set of objects is $I$ and whose set of morphisms is $\leq$, thus, for every $i,j\in I$, $\mathrm{Hom}(i,j) = \{(i,j)\}$, if $(i,j)\in \leq$, and $\mathrm{Hom}(i,j) = \varnothing$, otherwise. \begin{definition} Let $\mathbf{I}$ be a preordered set. A \emph{projective system} of $\Sigma$-algebras relative to $\mathbf{I}$ is a contravariant functor from (the category canonically associated to) $\mathbf{I}$ to $\mathbf{Alg}(\Sigma)$, i.e., an ordered pair $\boldsymbol{\mathcal{A}} = ((\mathbf{A}^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$ such that: \begin{enumerate} \item For every $i\in I$, $\mathbf{A}^{i}$ is a $\Sigma$-algebra. \item For every $(i,j)\in\leq$, $f^{j,i}\colon\mathbf{A}^{j}\usebox{\xymor} \mathbf{A}^{i}$. \item For every $i\in I$, $f^{i,i} = \mathrm{id}_{\mathbf{A}^{i}}$. \item For every $i,j,k\in I$, if $(i,j)\in \leq$ and $(j,k)\in \leq$, then the following diagram commutes $$\xymatrix@C=40pt@R=40pt{ \mathbf{A}^{k} \ar[r]^{f^{k,j}} \ar[dr]_{f^{k,i}} & \mathbf{A}^{j} \ar[d]^{f^{j,i}} \\ & \mathbf{A}^{i} } $$ \end{enumerate} The homomorphisms $f^{j,i}\colon A^{j}\usebox{\xymor} A^{i}$ are called the \emph{transition homomorphisms} of the projective system of $\Sigma$-algebras $\boldsymbol{\mathcal{A}}$ relative to $\mathbf{I}$. A \emph{projective cone to} $\boldsymbol{\mathcal{A}}$ is an ordered pair $(\mathbf{L},(f^{i})_{i\in I})$ where $\mathbf{L}$ is a $\Sigma$-algebra and, for every $i\in I$, $f^{i}\colon \mathbf{L}\usebox{\xymor} \mathbf{A}^{i}$, such that, for every $(i,j)\in \leq$, $f^{i} = f^{j,i}\circ f^{j}$. On the other hand, if $(\mathbf{L},(f^{i})_{i\in I})$ and $(\mathbf{M},(g^{i})_{i\in I})$ are two projective cones to $\boldsymbol{\mathcal{A}}$, then a \emph{morphism} from $(\mathbf{L},(f^{i})_{i\in I})$ to $(\mathbf{M},(g^{i})_{i\in I})$ is a homomorphism $h$ from $\mathbf{L}$ to $\mathbf{M}$ such that, for every $i\in I$, $f^{i} = g^{i}\circ h$. A \emph{projective limit} of $\boldsymbol{\mathcal{A}}$ is a projective cone $(\mathbf{L},(f^{i})_{i\in I})$ to $\boldsymbol{\mathcal{A}}$ such that, for every projective cone $(\mathbf{M},(g^{i})_{i\in I})$ to $\boldsymbol{\mathcal{A}}$, there exits a unique morphism from $(\mathbf{M},(g^{i})_{i\in I})$ to $(\mathbf{L},(f^{i})_{i\in I})$. \end{definition} \begin{proposition} Let $\boldsymbol{\mathcal{A}}$ be a projective system of $\Sigma$-algebras relative to $\mathbf{I}$. Then we denote by $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$, the $\Sigma$-algebra determined by the subalgebra $\varprojlim_{\mathbf{I}}\mathcal{A}$ of $\prod_{i\in I}\mathbf{A}^{i}$, where $\varprojlim_{\mathbf{I}}\mathcal{A}$ is defined as: $$ (\{x\in \textstyle\prod_{i\in I}A^{i}_{s}\mid \forall\, (i,j)\in \leq\, (f^{j,i}(\mathrm{pr}^{j}_{s}(x)) = \mathrm{pr}^{i}_{s}(x))\})_{s\in S}. 
$$ On the other hand, for every $i\in I$, let $f^{i}$ be the composition $\mathrm{pr}^{i}\circ\mathrm{inc}^{\varprojlim_{\mathbf{I}}\mathcal{A}}$, of the canonical embedding $\mathrm{inc}^{\varprojlim_{\mathbf{I}}\mathcal{A}}$ of $\varprojlim_{\mathbf{I}}\mathcal{A}$ into $\prod_{i\in I}A^{i}$ and the canonical projection $\mathrm{pr}^{i}$ from $\prod_{i\in I}A^{i}$ to $A^{i}$. Then, for every $i\in I$, $f^{i}$ is a homomorphism from $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ to $\mathbf{A}^{i}$ and the pair $(\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}},(f^{i})_{i\in I})$ is a projective limit of $\boldsymbol{\mathcal{A}}$. \end{proposition} We next define the concept of inductive system of $\Sigma$-algebras and state the existence of the inductive limit of an inductive system of $\Sigma$-algebras. \begin{definition} Let $\mathbf{I}$ be an upward directed preordered set. An \emph{inductive system} of $\Sigma$-algebras relative to $\mathbf{I}$ is a covariant functor from (the category canonically associated to) $\mathbf{I}$ to $\mathbf{Alg}(\Sigma)$, i.e., an ordered pair $\boldsymbol{\mathcal{A}} = ((\mathbf{A}^{i})_{i\in I},(f^{i,j})_{(i,j)\in\leq})$ such that: \begin{enumerate} \item For every $i\in I$, $\mathbf{A}^{i}$ is a $\Sigma$-algebra. \item For every $(i,j)\in\leq$, $f^{i,j}\colon\mathbf{A}^{i}\usebox{\xymor} \mathbf{A}^{j}$. \item For every $i\in I$, $f^{i,i}=\mathrm{id}_{\mathbf{A}^{i}}$. \item For every $i,j,k\in I$, if $i\leq j\leq k$, then the following diagram commutes $$\xymatrix@C=40pt@R=40pt{ \mathbf{A}^{i} \ar[r]^{f^{i,j}} \ar[rd]_{f^{i,k}} & \mathbf{A}^{j} \ar[d]^{f^{j,k}} \\ & \mathbf{A}^{k} } $$ \end{enumerate} The homomorphisms $f^{i,j}$ are called \emph{transition homomorphisms} of the inductive system of $\Sigma$-algebras $\boldsymbol{\mathcal{A}}$ relative to $\mathbf{I}$. An \emph{inductive cone from} $\boldsymbol{\mathcal{A}}$ is an ordered pair $(\mathbf{L},(f^{i})_{i\in I})$ where $\mathbf{L}$ is a $\Sigma$-algebra and, for every $i\in I$, $f^{i}\colon \mathbf{A}^{i}\usebox{\xymor} \mathbf{L}$, such that, for every $(i,j)\in \leq$, $f^{i} = f^{j}\circ f^{i,j}$. On the other hand, if $(\mathbf{L},(f^{i})_{i\in I})$ and $(\mathbf{M},(g^{i})_{i\in I})$ are two inductive cones from $\boldsymbol{\mathcal{A}}$, then a \emph{morphism} from $(\mathbf{L},(f^{i})_{i\in I})$ to $(\mathbf{M},(g^{i})_{i\in I})$ is a homomorphism $h$ from $\mathbf{L}$ to $\mathbf{M}$ such that, for every $i\in I$, $g^{i} = h\circ f^{i}$. An \emph{inductive limit} of $\boldsymbol{\mathcal{A}}$ is an inductive cone $(\mathbf{L},(f^{i})_{i\in I})$ from $\boldsymbol{\mathcal{A}}$ such that, for every inductive cone $(\mathbf{M},(g^{i})_{i\in I})$ from $\boldsymbol{\mathcal{A}}$, there exists a unique morphism from $(\mathbf{L},(f^{i})_{i\in I})$ to $(\mathbf{M},(g^{i})_{i\in I})$. \end{definition} \begin{proposition} Let $\boldsymbol{\mathcal{A}}$ be an inductive system of $\Sigma$-algebras relative to $\mathbf{I}$.
Then we denote by $\varinjlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ the $\Sigma$-algebra which has as underlying $S$-sorted set $\coprod_{i\in I}A^{i}/\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})}$, where $\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})}$ is the $S$-equivalence on $\coprod_{i\in I}A^{i}$ defined as: $$ \textstyle \big(\{\,((a,i),(b,j))\in \big(\coprod_{i\in I}A^{i}_{s}\big)^{2} \mid \exists k\in I ( k\geq i, j \And f^{i,k}_{s}(a) = f^{j,k}_{s}(b)\,\}\big)_{s\in S}, $$ and, for every $(w,s)\in S^{\star}\times S$ and every $\sigma\in\Sigma_{w,s}$, as structural operation $F_{\sigma}$ from $(\coprod_{i\in I}A^{i}/\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})})_{w}$ to $\coprod_{i\in I}A^{i}_{s}/\Phi_{s}^{(\mathbf{I},\boldsymbol{\mathcal{A}})}$ corresponding to $\sigma$ that one defined by associating to an $([(a_{\alpha},i_{\alpha})])_{ \alpha\in \lvert w \rvert}$ in $(\coprod_{i\in I}A^{i}/\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})})_{w}$, $[(F_{\sigma}^{k}(f^{i_{\alpha},k}(a_{\alpha})\mid {\alpha\in \lvert w \rvert}),k)]$ in $\coprod_{i\in I}A^{i}_{s}/\Phi_{s}^{(\mathbf{I},\boldsymbol{\mathcal{A}})}$, where $k$ is an upper bound of $(i_{\alpha})_{ \alpha\in \lvert w \rvert}$ in $\mathbf{I}$ and $F_{\sigma}^{k}$ the structural operation on $\mathbf{A}^{k}$ corresponding to $\sigma$. On the other hand, for every $i\in I$, let $f^{i}$ be the composition $\mathrm{pr}^{\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})}}\circ\mathrm{inc}^{i}$, of the $S$-sorted mapping $\mathrm{inc}^{i}$ from $A^{i}$ to $\coprod_{i\in I}A^{i}_{s}$ and the $S$-sorted mapping $\mathrm{pr}^{\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})}}$ from $\coprod_{i\in I}A^{i}_{s}$ to $\coprod_{i\in I}A^{i}/\Phi^{(\mathbf{I},\boldsymbol{\mathcal{A}})}$. Then, for every $i\in I$, $f^{i}$ is a homomorphism from $\mathbf{A}^{i}$ to $\varinjlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ and the pair $(\varinjlim_{\mathbf{I}}\boldsymbol{\mathcal{A}},(f^{i})_{i\in I})$ is an inductive limit of $(\mathbf{I},\boldsymbol{\mathcal{A}})$. \end{proposition} In the single-sorted case, as in the many-sorted case, to calculate the inductive limit of an inductive system of $\Sigma$-algebras, we can suppress from the inductive system those $\Sigma$-algebras which are initial, i.e., which have $\varnothing$ as underlying set. \begin{remark} Let $\boldsymbol{\mathcal{A}}$ be an inductive system of $\Sigma$-algebras relative to $\mathbf{I}$ and let $J$ be the subset of $I$ defined as $$ J = \{\,i\in I\mid A^{i}\neq (\varnothing)_{s\in S} \,\}. $$ Then $\mathbf{J} = (J,\leq)$ is a directed preordered set (if $i,j\in J$, then $A^{i}\neq (\varnothing)_{s\in S}$, $A^{j}\neq (\varnothing)_{s\in S}$, and there exists a $k\in I$ such that $k\geq i, j$, hence we have the homomorphisms $f^{i,k}$ from $\mathbf{A}^{i}$ to $\mathbf{A}^{k}$ and $f^{j,k}$ from $\mathbf{A}^{j}$ to $\mathbf{A}^{k}$, therefore $A^{k}\neq (\varnothing)_{s\in S}$, and, consequently, $k\in J$). Moreover, by definition, it is easy to see that $\varinjlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ is the same as $\varinjlim_{\mathbf{J}}\boldsymbol{\mathcal{A}}\!\!\upharpoonright\!\!J$. Therefore, to calculate the inductive limit of an inductive system of $\Sigma$-algebras, we can suppress from the inductive system those $\Sigma$-algebras which are initial, i.e., which have as underlying $S$-sorted set $(\varnothing)_{s\in S}$. 
\end{remark} Moreover, as is well known, for \emph{single-sorted} algebras, the inductive limit of an inductive system of \emph{nonempty} $\Sigma$-algebras $\boldsymbol{\mathcal{A}}$ relative to $\mathbf{I}$ can be obtained, alternatively but equivalently, as a quotient algebra $\mathbf{C}/{\equiv}$, where $\mathbf{C}$ is the subalgebra of $\prod_{i\in I}\mathbf{A}_{i}$ determined by the set $C$ of all those choice functions for $(A_{i})_{i\in I}$ which are \emph{eventually consistent}, i.e., by $$ \textstyle C = \{x\in\prod_{i\in I}A_{i}\mid\exists k\in I\,\forall j\geq i\geq k\, (f_{i,j}(x_{i}) = x_{j})\} $$ and $\equiv$ the congruence on $\mathbf{C}$ defined as $$ x\equiv y\text{ if and only if }\exists k\in I\,\forall i\geq k\, (x_{i} = y_{i}). $$ However, for a set of sorts $S$ such that $\mathrm{card}(S)\geq 2$, one can easily find $S$-sorted signatures $\Sigma$ and $\Sigma$-algebras $\mathbf{A}$ such that \begin{enumerate} \item $\mathbf{A}$ is non-initial, i.e., such that the underlying $S$-sorted set is different from $(\varnothing)_{s\in S}$, but \item $\mathbf{A}$ is \emph{globally empty}, i.e., such that there is not any homomorphism from $\mathbf{1}$, the final $\Sigma$-algebra, to $\mathbf{A}$. \end{enumerate} (For instance, if $S = \{s,s'\}$, $\Sigma$ is the $S$-sorted signature without any formal operations, and $\mathbf{A}$ is such that $A_{s}\neq\varnothing$ and $A_{s'} = \varnothing$, then $\mathbf{A}$ is non-initial but globally empty, since there is no mapping from $1$ to $\varnothing$ at the sort $s'$.) This fact has as a consequence that the above-mentioned alternative construction of the inductive limit cannot be applied without qualification in the many-sorted case, because the suppression of every occurrence of the initial $\Sigma$-algebra in an inductive system does not have any effect on the elimination of those $\Sigma$-algebras which are non-initial but globally empty. \begin{proposition}$[\,$\cite{cs16}, Prop.~2.5$\,]$\label{isomorfismoAlgebrasSoporteConstante} Let $\boldsymbol{\mathcal{A}}$ be an inductive system of $\Sigma$-algebras relative to $\mathbf{I}$, $\mathbf{C}$ the subalgebra of $\prod_{i\in I}\mathbf{A}^{i}$ determined by the $S$-sorted set $C$ of $\prod_{i\in I}\mathbf{A}^{i}$ defined, for every $s\in S$, as follows $$ \textstyle C_{s}=\{x\in\prod_{i\in I}A^{i}_{s} \mid \exists\, k\in I,\; \forall\, j\geq i\geq k,\; f^{i,j}_{s}(x_{i}) = x_{j} \}, $$ and let $\equiv$ be the congruence on $\mathbf{C}$ defined, for every $s\in S$, as follows $$ x\equiv_{s} y \text{ if and only if } \exists\, k\in I,\; \forall\, i\geq k,\; x_{i}=y_{i}. $$ Then $(\mathbf{A}^{i})_{i\in I}$ is a family of $\Sigma$-algebras with constant support if and only if $\mathbf{C}/{\equiv}$ is isomorphic to $\varinjlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$. \end{proposition} The usual definitions of reduced products and ultraproducts for single-sorted algebras have an immediate translation for many-sorted algebras. However, some characterizations of such constructions are not valid for arbitrary families of many-sorted algebras, although they are valid for those families which have the additional property of having constant support. \begin{definition} Let $I$ be a nonempty set, $\mathcal{F}$ a filter on $I$, and $(\mathbf{A}^{i})_{i\in I}$ a family of $\Sigma$-algebras.
Then $\boldsymbol{\mathcal{F}} = (\mathcal{F},\leq) = (\mathcal{F},\supseteq)$ is a nonempty upward directed preordered set and $\boldsymbol{\mathcal{A}}(\mathcal{F}) = ((\mathbf{A}(J))_{J\in \mathcal{F}},(\mathrm{p}^{K,J})_{K\leq J})$ is an inductive system of $\Sigma$-algebras relative to $\boldsymbol{\mathcal{F}}$, where, for every $J\in \mathcal{F}$, $\mathbf{A}(J) = \prod_{j\in J}\mathbf{A}^{j}$ and, for every $J,K\in \mathcal{F}$ such that $K\supseteq J$, $\mathrm{p}^{K,J}$ denotes the unique $\Sigma$-homomorphism $\langle\mathrm{pr}^{K,j}\rangle_{j\in J}\colon\prod_{k\in K}\mathbf{A}^{k}\usebox{\xymor} \prod_{j\in J}\mathbf{A}^{j}$ such that, for every $j\in J$, $\mathrm{pr}^{J,j}\circ \langle\mathrm{pr}^{K,j}\rangle_{j\in J} = \mathrm{pr}^{K,j}$. The underlying $\Sigma$-algebra of the inductive limit $(\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F}),(\mathrm{p}^{J})_{J\in \mathcal{F}})$ of $\boldsymbol{\mathcal{A}}(\mathcal{F})$, also denoted by $\prod^{\mathcal{F}}_{i\in I}\mathbf{A}^{i}$, is called the \emph{reduced product} of $(\mathbf{A}^{i})_{i\in I}$ \emph{relative to} $\mathcal{F}$. If $\mathcal{F}$ is an ultrafilter on $I$, then the underlying $\Sigma$-algebra of the inductive limit of the corresponding inductive system $\boldsymbol{\mathcal{A}}(\mathcal{F})$ is called the \emph{ultraproduct} of $(\mathbf{A}^{i})_{i\in I}$ \emph{relative to} $\mathcal{F}$.
\end{definition}

\begin{proposition}$[\,$\cite{cs16}, Prop.~2.7$\,]$
Let $I$ be a nonempty set, $\mathcal{F}$ a filter on $I$, and $(\mathbf{A}^{i})_{i\in I}$ a family of $\Sigma$-algebras. Then the $S$-sorted relation $\equiv^{\mathcal{F}}$ on $\prod_{i\in I}A^{i}$, defined, for every $s\in S$, as follows
$$
a\equiv^{\mathcal{F}}_{s}b\text{ if, and only if, } \mathrm{Eq}(a,b)\in \mathcal{F},
$$
where $\mathrm{Eq}(a,b)=\{i\in I\mid a_{i}=b_{i}\}$ is the equalizer of $a$ and $b$, is a congruence on $\prod_{i\in I}\mathbf{A}^{i}$.
\end{proposition}

\begin{proposition}$[\,$\cite{cs16}, Prop.~2.8$\,]$
Let $I$ be a nonempty set, $J$ a nonempty subset of $I$, $\mathcal{F}$ the principal filter on $I$ generated by $J$, and $(\mathbf{A}^{i})_{i\in I}$ a family of $\Sigma$-algebras. If $(\mathbf{A}^{i})_{i\in I}$ is a family with constant support, then $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}\cong \prod_{j\in J}\mathbf{A}^{j}$.
\end{proposition}

As is well known, the reduced product of a family of single-sorted algebras is isomorphic to a quotient of the product of the family. However, for families of many-sorted algebras this representation is valid only for those with constant support.

\begin{lemma}\label{ConstSuppDerivedFamilyMSSet}
Let $I$ be a nonempty set and $\mathcal{F}$ a filter on $I$. If $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support, then, for every $i\in I$ and every $J\in \mathcal{F}$, $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A(J))$, where $A(J)$ is the underlying $S$-sorted set of $\mathbf{A}(J)$. Therefore $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support, i.e., for every $J,K\in\mathcal{F}$, $\mathrm{supp}_{S}(A(J)) = \mathrm{supp}_{S}(A(K))$.
\end{lemma}

\begin{proof}
Let $i$ be an element of $I$ and $J\in \mathcal{F}$. Then, by definition of $A(J)$, by Proposition~\ref{propssupport}, and by hypothesis, we have that $\mathrm{supp}_{S}(A(J)) = \bigcap_{j\in J}\mathrm{supp}_{S}(A^{j}) = \mathrm{supp}_{S}(A^{j})$, for every $j\in J$.
But, by hypothesis, $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A^{j})$. Hence $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A(J))$. From this it follows, immediately, that $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support.
\end{proof}

\begin{proposition}$[\,$\cite{cs16}, Prop.~2.9$\,]$\label{CaracProdRed}
Let $I$ be a nonempty set, $\mathcal{F}$ a filter on $I$, and $(\mathbf{A}^{i})_{i\in I}$ a family of $\Sigma$-algebras. If $(\mathbf{A}^{i})_{i\in I}$ is a family with constant support, then $\prod^{\mathcal{F}}_{i\in I}\mathbf{A}^{i}$ is isomorphic to $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}$.
\end{proposition}

\begin{remark}
Let $I$ be a nonempty set, $\mathcal{F}$ a filter on $I$, and $(\mathbf{A}^{i})_{i\in I}$ a family of $\Sigma$-algebras. If $\prod^{\mathcal{F}}_{i\in I}\mathbf{A}^{i} = \varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}$ is isomorphic to $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}$ and $\mathcal{F}$ is such that, for every $s\in S$, $\{i\in I\mid s\in\mathrm{supp}_{S}(A^{i})\}\in \mathcal{F}$, then $(\mathbf{A}^{i})_{i\in I}$ is a family with constant support.
\end{remark}

\begin{corollary}\label{CaracUltraProd}
Let $I$ be a nonempty set, $\mathcal{F}$ an ultrafilter on $I$, and $(\mathbf{A}^{i})_{i\in I}$ a family of $\Sigma$-algebras. If $(\mathbf{A}^{i})_{i\in I}$ is a family with constant support, then $\prod^{\mathcal{F}}_{i\in I}\mathbf{A}^{i}$ is isomorphic to $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}$.
\end{corollary}

\section{A sufficient condition for a profinite $\Sigma$-algebra to be a retract of an ultraproduct of finite $\Sigma$-algebras.}

In this section we first recall that, for a nonempty upward directed preordered set $\mathbf{I}$, the set of all final sections of $\mathbf{I}$ is included in an ultrafilter on $I$. We then state that, for a projective system of $S$-sorted sets $\mathcal{A} = ((A^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$ relative to $\mathbf{I}$ and a filter $\mathcal{F}$ on $I$ such that the filter of the final sections of $\mathbf{I}$ is contained in $\mathcal{F}$, if the $I$-indexed family of $S$-sorted sets $(A^{i})_{i\in I}$ is with constant support, then the derived family $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support. Finally, we prove that if $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ is a profinite $\Sigma$-algebra, where $\boldsymbol{\mathcal{A}} = ((\mathbf{A}^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$ is a projective system of finite $\Sigma$-algebras relative to $\mathbf{I}$, and the $I$-indexed family of $\Sigma$-algebras $(\mathbf{A}^{i})_{i\in I}$ is with constant support, then $\mathbf{A}$ is a retract of $\prod_{i\in I}\mathbf{A}^{i}/\equiv^{\mathcal{F}}$, for any ultrafilter $\mathcal{F}$ on $I$ containing the filter of the final sections of $\mathbf{I}$.

\begin{assumption}
From now on we assume all preordered sets to be nonempty and upward directed.
\end{assumption}

\begin{proposition}
Let $\mathbf{I}$ be a preordered set. Then the subset $\{\Uparrow\!i\mid i\in I\}$ of $\mathrm{Sub}(I)$, where, for every $i\in I$, $\Uparrow\!i = \{j\in I\mid i\leq j\}$, the final section at $i$ of $\mathbf{I}$, is a filter basis on $I$, i.e., $\{\Uparrow\!i\mid i\in I\}\neq \varnothing$, $\varnothing\not\in \{\Uparrow\!i\mid i\in I\}$, and, for every $i, j\in I$ there exists a $k\in I$ such that $\Uparrow\!k\subseteq \Uparrow\!i\cap\Uparrow\!j$.
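(Indeed, $\{\Uparrow\!i\mid i\in I\}\neq \varnothing$ because $I$ is nonempty; $\varnothing\not\in \{\Uparrow\!i\mid i\in I\}$ because, $\leq$ being reflexive, $i\in \Uparrow\!i$; and, given $i,j\in I$, any upper bound $k$ of $i$ and $j$ in $\mathbf{I}$, which exists because $\mathbf{I}$ is upward directed, satisfies, by transitivity, $\Uparrow\!k\subseteq \Uparrow\!i\cap\Uparrow\!j$.)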
\end{proposition} We recall that for a preordered set $\mathbf{I}$, and according to the standard definition, the filter on $I$ generated by the filter basis $\{\Uparrow\!i\mid i\in I\}$ on $I$, which is called the filter of the final sections of $\mathbf{I}$ or the Fr\'{e}chet filter of $\mathbf{I}$, is $$ \textstyle \{I\}\cup \{J\subseteq I\mid \exists\, n\in \mathbb{N}-1\,\exists\,(i_{\alpha})_{\alpha\in n}\in I^{n}\,(\bigcap_{\alpha\in n}\Uparrow\!i_{\alpha}\subseteq J)\}, $$ which, on the basis of the above assumption, is precisely $\{J\subseteq I\mid \exists\,i\in I\,(\Uparrow\!i\subseteq J)\}$. Moreover, since every filter $\mathcal{F}$ on a nonempty set $I$ is contained in an ultrafilter on $I$, it follows that $\{\Uparrow\!i\mid i\in I\}$ is contained in an ultrafilter on $I$. From Lemma~\ref{ConstSuppDerivedFamilyMSSet} we obtain the following proposition. \begin{proposition} Let $\mathbf{I}$ be a preordered set and $\mathcal{F}$ a filter on $I$ such that the filter of the final sections of $\mathbf{I}$ is contained in $\mathcal{F}$. If $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support, then, for every $i\in I$ and every $J\in \mathcal{F}$, $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A(J))$. Therefore $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support. \end{proposition} \begin{remark} It is not true, in general, that if there are $j,k\in I$ such that $\mathrm{supp}_{S}(A^{j})\neq \mathrm{supp}_{S}(A^{k})$, then there are $J,K\in \mathcal{F}$ such that $\mathrm{supp}_{S}(A(J)) \neq \mathrm{supp}_{S}(A(K))$ or, what is equivalent, that if $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support, then $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support. (This would be, trivially, fulfilled, e.g., if $(A(J))_{J\in \mathcal{F}}$ were an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support and, for every $i\in I$ and every $j\in \Uparrow\!i$, $\mathrm{supp}_{S}(A^{i})\subseteq \mathrm{supp}_{S}(A^{j})$.) As an example, consider $S = \mathbb{N}$, $I = \mathbb{N}$, $\mathcal{F}$ the Fr\'{e}chet filter on $\mathbb{N}$, and $(A^{n})_{n\in \mathbb{N}}$ the $\mathbb{N}$-indexed family of $\mathbb{N}$-sorted sets, where, for every $n\in \mathbb{N}$, the $\mathbb{N}$-sorted set $A^{n} = (A^{n}_{m})_{m\in \mathbb{N}}$ is such that, for every $m\in \mathbb{N}$, $A^{n}_{m} = \varnothing$, if $n\neq m$, and $A^{n}_{m} = 1 = \{0\}$, otherwise. \end{remark} \begin{proposition} Let $\mathbf{I}$ be a preordered set, $\mathcal{F}$ a filter on $I$ such that the filter of the final sections of $\mathbf{I}$ is contained in $\mathcal{F}$, and $(A^{i})_{i\in I}$ an $I$-indexed family of $S$-sorted sets. Then the following assertions are equivalent: \begin{enumerate} \item $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support. \item For every $i\in I$ and every $J\in \mathcal{F}$, $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A(J))$. \end{enumerate} \end{proposition} \begin{proof} Since it is easy to check that (1) entails (2), we restrict ourselves to show that (2) entails (1). Let us suppose that, for every $i\in I$ and every $J\in \mathcal{F}$, $\mathrm{supp}_{S}(A^{i}) = \mathrm{supp}_{S}(A(J))$. To prove that $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support, let $k$ and $\ell$ be elements of $I$. 
Then we have that $\mathrm{supp}_{S}(A(\Uparrow\!k)) = \mathrm{supp}_{S}(A^{\ell})$. Hence, by Proposition~\ref{propssupport}, $\mathrm{supp}_{S}(A^{\ell})\subseteq \mathrm{supp}_{S}(A^{k})$. By a similar argument, $\mathrm{supp}_{S}(A^{k}) \subseteq \mathrm{supp}_{S}(A^{\ell})$. Hence $\mathrm{supp}_{S}(A^{k}) = \mathrm{supp}_{S}(A^{\ell})$. Therefore $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support. \end{proof} From Lemma~\ref{ConstSuppDerivedFamilyMSSet} we obtain the following proposition. \begin{proposition}\label{ConstSuppDerivedFamilyAlg} Let $\mathbf{I}$ be a preordered set, $\mathcal{A} = ((A^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$ a projective system of $S$-sorted sets relative to $\mathbf{I}$, and $\mathcal{F}$ a filter on $I$ such that the filter of the final sections of $\mathbf{I}$ is contained in $\mathcal{F}$. If the $I$-indexed family of $S$-sorted sets $(A^{i})_{i\in I}$ is with constant support, then $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support. \end{proposition} \begin{remark} Let $\mathbf{I}$ be a preordered set, $\mathcal{A} = ((A^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$ a projective system of $S$-sorted sets, and $\mathcal{F}$ a filter on $I$ such that the filter of the final sections of $\mathbf{I}$ is contained in $\mathcal{F}$. If, for every $(i,j)\in \leq$, $f^{j,i}$ is surjective, then, by Proposition~\ref{propssupport} and taking into account that $\mathbf{I}$ is upward directed, $(A^{i})_{i\in I}$ is an $I$-indexed family of $S$-sorted sets with constant support. \end{remark} \begin{definition} Let $\mathbf{A}$ be a $\Sigma$-algebra. We call $\mathbf{A}$ a \emph{profinite} $\Sigma$-algebra if it is a projective limit of a projective system of finite $\Sigma$-algebras. \end{definition} \begin{proposition}\label{MS Mariano and Miraglia} Let $\mathbf{I}$ be a preordered set and $\mathcal{F}$ an ultrafilter on $I$ such that the filter basis $\{\Uparrow\!i\mid i\in I\}$ on $I$ is contained in $\mathcal{F}$. If $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ is a profinite $\Sigma$-algebra, where $\boldsymbol{\mathcal{A}}$ is a projective system of finite $\Sigma$-algebras relative to $\mathbf{I}$ with $\boldsymbol{\mathcal{A}} = ((\mathbf{A}^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$, and the $I$-indexed family of finite $\Sigma$-algebras $(\mathbf{A}^{i})_{i\in I}$ is with constant support, then $\mathbf{A}$ is a retract of $\prod_{i\in I}\mathbf{A}^{i}/\equiv^{\mathcal{F}}$. \end{proposition} \begin{proof} By hypothesis, $(\mathbf{A}^{i})_{i\in I}$ is an $I$-indexed family of $\Sigma$-algebras with constant support, hence, by Proposition~\ref{ConstSuppDerivedFamilyAlg}, $(A(J))_{J\in \mathcal{F}}$ is an $\mathcal{F}$-indexed family of $S$-sorted sets with constant support. 
Thus, by Corollary~\ref{CaracUltraProd}, $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}$ is isomorphic to $\prod^{\mathcal{F}}_{i\in I}\mathbf{A}^{i}$ which, we recall, is $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$, the underlying $\Sigma$-algebra of the inductive limit $(\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F}),(\mathrm{p}^{J})_{J\in \mathcal{F}})$ of the inductive system $\boldsymbol{\mathcal{A}}(\mathcal{F})$ relative to $\boldsymbol{\mathcal{F}}$, where $\boldsymbol{\mathcal{F}}$ is $(\mathcal{F},\leq) = (\mathcal{F},\supseteq)$ and $\boldsymbol{\mathcal{A}}(\mathcal{F})$ is the ordered pair $((\mathbf{A}(J))_{J\in \mathcal{F}},(\mathrm{p}^{J,K})_{J\leq K})$. Therefore, since there exists a canonical embedding $\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}$ of $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ into $\prod_{i\in I}\mathbf{A}^{i}$ and a canonical projection $\mathrm{pr}^{\equiv^{\mathcal{F}}}$ from $\prod_{i\in I}\mathbf{A}^{i}$ to $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}$, the problem comes down to show that there exists a homomorphism $h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}$ from $\prod^{\mathcal{F}}_{i\in I}\mathbf{A}^{i} = \varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$ to $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ such that the following diagram commutes: $$\xymatrix@C=50pt@R=50pt{ \textstyle \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}} \ar[r]^{\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}} \ar[rrd]_{\mathrm{id}_{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}} & \prod_{i\in I}\mathbf{A}^{i} \ar[r]^-{\mathrm{pr}^{\equiv^{\mathcal{F}}}} & \prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}} \cong \varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F}) \ar[d]^{h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}} \\ {} & {} & \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}} } $$ To define $h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}$ (subject to satisfying the requirement just set out), we have to start by defining, for every $J\in \mathcal{F}$ and every $i\in I$, a homomorphism $h^{J,i}$ from $\mathbf{A}(J) = \prod_{j\in J}\mathbf{A}^{j}$ to $\mathbf{A}^{i}$ in such a way that, for every $J,K\in \mathcal{F}$ such that $K\supseteq J$, the homomorphisms $h^{J,i}$ from $\mathbf{A}(J)$ to $\mathbf{A}^{i}$ and $h^{K,i}$ from $\mathbf{A}(K)$ to $\mathbf{A}^{i}$ are compatible with the transition homomorphism $\mathrm{p}^{K,J}$ from $\mathbf{A}(K)$ to $\mathbf{A}(J)$. Afterwards, using the universal property of $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$, we define a homomorphism $h^{i}$ from such an inductive limit to $\mathbf{A}^{i}$, for every $i\in I$. Finally, using the universal property of $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$, we obtain the desired homomorphism $h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}$ from $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$ to $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$. Let $J$ be an element of $\mathcal{F}$ and $i\in I$. We now proceed to define the homomorphism $h^{J,i} = (h^{J,i}_{s})_{s\in S}$ from $\mathbf{A}(J) = \prod_{j\in J}\mathbf{A}^{j}$ to $\mathbf{A}^{i}$. 
For $s\in \mathrm{supp}_{S}(A^{i})$, $x\in A(J)_{s} = \prod_{j\in J}A^{j}_{s}$, and $y\in A^{i}_{s}$, let $V^{J,i,s}(x,y)$ be the subset of $J\cap\Uparrow\!i$ defined as follows: $$ V^{J,i,s}(x,y) = \{j\in J\cap\Uparrow\!i\mid f^{j,i}_{s}(x_{j}) = y\}. $$ The just stated definition is sound. In fact, $J\cap\Uparrow\!i\in \mathcal{F}$ since $\mathcal{F}$ is an ultrafilter such that $\{\Uparrow\!i\mid i\in I\}\subseteq \mathcal{F}$ and $J\in \mathcal{F}$. Moreover, since, by hypothesis, $(\mathbf{A}^{i})_{i\in I}$ is a family of $\Sigma$-algebras with constant support we have that, for every $J\in \mathcal{F}$ and every $i\in I$, $\mathrm{supp}_{S}(A(J)) = \mathrm{supp}_{S}(A^{i})$. For $J\in \mathcal{F}$, $i\in I$, $s\in \mathrm{supp}_{S}(A^{i})$, $x\in A(J)_{s} = \prod_{j\in J}A^{j}_{s}$, and $y,z\in A^{i}_{s}$, if $y\neq z$, then $V^{J,i,s}(x,y)\cap V^{J,i,s}(x,z) = \varnothing$. This follows from the fact that $f^{j,i}_{s}$ is, in particular, an $S$-sorted mapping. We next prove that $J\cap\Uparrow\!i = \bigcup_{y\in A^{i}_{s}}V^{J,i,s}(x,y)$. It is obvious that $J\cap\Uparrow\!i$ contains $\bigcup_{y\in A^{i}_{s}}V^{J,i,s}(x,y)$. Reciprocally, let $j$ be an element of $J\cap\Uparrow\!i$, then $i\leq j$ and for $y = f^{j,i}_{s}(x_{j})\in A^{i}_{s}$ we have that $j\in V^{J,i,s}(x,f^{j,i}_{s}(x_{j}))\subseteq \bigcup_{y\in A^{i}_{s}}V^{J,i,s}(x,y)$. In what follows it is most useful to use a certain characterization of the notion of ultrafilter on a set. Specifically, a filter $\mathcal{G}$ on a nonempty set $I$ is an ultrafilter, i.e., a maximal filter, if, and only if, for every $J,K\subseteq I$, if $J\cup K\in \mathcal{G}$, then $J\in \mathcal{G}$ or $K\in \mathcal{G}$. This characterization extends, by induction, up to nonempty finite families of subsets of $I$. Moreover, we recall that $\varnothing$ does not belong to any filter. Now, as we have, on the one hand, that $\mathcal{F}$ is an ultrafilter such that $J\cap\Uparrow\!i\in \mathcal{F}$ and, on the other hand, that $J\cap\Uparrow\!i = \bigcup_{y\in A^{i}_{s}}V^{J,i,s}(x,y)$, that $A^{i}_{s}$ is finite, and that if $y,z\in A^{i}_{s}$ are such that $y\neq z$, then $V^{J,i,s}(x,y)\cap V^{J,i,s}(x,z) = \varnothing$, we infer that there exists a unique $y\in A^{i}_{s}$ such that $V^{J,i,s}(x,y)\in \mathcal{F}$. Therefore, we define the mapping $h^{J,i}_{s}$ from $A(J)_{s} = \prod_{j\in J}A^{j}_{s}$ to $A^{i}_{s}$ by assigning to $x\in A(J)_{s}$ the unique $y\in A^{i}_{s}$ such that $V^{J,i,s}(x,y)\in \mathcal{F}$. Thus, for $x\in A(J)_{s}$ and $y\in A^{i}_{s}$, $h^{J,i}_{s}(x) = y$ if, and only if, $V^{J,i,s}(x,y)\in \mathcal{F}$. Our next goal is to show that, for every $i\in I$ and every $J,K\in \mathcal{F}$, if $K\supseteq J$, then the homomorphism $\mathrm{p}^{K,J}$ from $\mathbf{A}(K)$ to $\mathbf{A}(J)$ is such that $h^{J,i}\circ \mathrm{p}^{K,J} = h^{K,i}$ and that $h^{J,i} = (h^{J,i}_{s})_{s\in S}$ is a homomorphism from $\mathbf{A}(J) = \prod_{j\in J}\mathbf{A}^{j}$ to $\mathbf{A}^{i}$. To verify that $h^{J,i}\circ \mathrm{p}^{K,J} = h^{K,i}$, i.e., that, for every $s\in S$, $h^{J,i}_{s}\circ \mathrm{p}^{K,J}_{s} = h^{K,i}_{s}$, we should check that, for every $a\in A(K)_{s}$, $h^{J,i}_{s}(\mathrm{p}^{K,J}_{s}(a)) = h^{K,i}_{s}(a)$. But, for every $s\in S$, if $a\in A(K)_{s}$, then, by definition, $\mathrm{p}^{K,J}_{s}(a) = a\!\!\upharpoonright\!\! J$, where $a\!\!\upharpoonright\!\! J$ is the restriction of $a$ to $J$. Therefore we should check that $h^{J,i}_{s}(a\!\!\upharpoonright\!\! J) = h^{K,i}_{s}(a)$. 
Let $y$ be $h^{J,i}_{s}(a\!\!\upharpoonright\!\! J)$, i.e., $y$ is the unique element of $A^{i}_{s}$ such that $V^{J,i,s}(a\!\!\upharpoonright\!\! J,y)\in \mathcal{F}$. Then it happens that $$ V^{J,i,s}(a\!\!\upharpoonright\!\! J,y) \subseteq V^{K,i,s}(a,y). $$ Let $j$ be an element of $V^{J,i,s}(a\!\!\upharpoonright\!\! J,y) (= V^{J,i,s}(a\!\!\upharpoonright\!\! J,h^{J,i}_{s}(a\!\!\upharpoonright\!\! J)))$. Then $j\in J\cap\Uparrow\!i$ and $f^{j,i}_{s}((a\!\!\upharpoonright\!\! J)_{j}) = f^{j,i}_{s}(a_{j}) = y$. But, since $J\subseteq K$, we have that $J\cap\Uparrow\!i\subseteq K\cap\Uparrow\!i$. Therefore $j\in K\cap\Uparrow\!i$ and $f^{j,i}_{s}(a_{j}) = y$, i.e., $j\in V^{K,i,s}(a,y)$. Moreover, because $V^{J,i,s}(a\!\!\upharpoonright\!\! J,y)\in \mathcal{F}$, $V^{J,i,s}(a\!\!\upharpoonright\!\! J,y) \subseteq V^{K,i,s}(a,y)$, and $\mathcal{F}$ is a filter, $V^{K,i,s}(a,y)\in \mathcal{F}$. From this it follows that $h^{K,i}_{s}(a) = y$. Therefore $h^{J,i}_{s}(a\!\!\upharpoonright\!\! J) = h^{K,i}_{s}(a)$ and, consequently, $h^{J,i}\circ \mathrm{p}^{K,J} = h^{K,i}$. To show that $h^{J,i} = (h^{J,i}_{s})_{s\in S}$ is a homomorphism from $\mathbf{A}(J) = \prod_{j\in J}\mathbf{A}^{j}$ to $\mathbf{A}^{i}$ we have to check that, for every $(w,s)\in S^{\star}\times S$, every $\sigma\in \Sigma_{w,s}$, and every $(a_{\alpha})_{\alpha\in\lvert w \rvert}\in A(J)_{w} = (\prod_{j\in J}A^{j})_{w} = (\prod_{j\in J}A^{j}_{w_{0}})\times\cdots\times (\prod_{j\in J}A^{j}_{w_{\lvert w \rvert-1}})$, it happens that $$ h^{J,i}_{s}(F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert})) = F^{\mathbf{A}^{i}}_{\sigma}(h^{J,i}_{w_{0}}(a_{0}),\ldots,h^{J,i}_{w_{\lvert w \rvert-1}}(a_{\lvert w \rvert-1})). $$ Let us recall that the structural operation $F^{\mathbf{A}(J)}_{\sigma}$ of $\mathbf{A}(J)$ is defined, for every $(a_{\alpha})_{\alpha\in\lvert w \rvert}\in A(J)_{w}$, as: $$ F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert}) = (F^{\mathbf{A}^{j}}_{\sigma}((a_{\alpha}(j))_{\alpha\in\lvert w \rvert}))_{j\in J}. $$ Now, for every $\alpha\in\lvert w \rvert$, we have the subset $$ V^{J,i,w_{\alpha}}(a_{\alpha},h^{J,i}_{w_{\alpha}}(a_{\alpha})) = \{j\in J\cap\Uparrow\!i\mid f^{j,i}_{w_{\alpha}}(a_{\alpha}(j)) = h^{J,i}_{w_{\alpha}}(a_{\alpha})\}. $$ of $I$. But, for every $\alpha\in\lvert w \rvert$, we have that $V^{J,i,w_{\alpha}}(a_{\alpha},h^{J,i}_{w_{\alpha}}(a_{\alpha}))\in \mathcal{F}$. Thus, because $\mathcal{F}$ is a filter, we have that $\bigcap_{\alpha\in\lvert w \rvert}V^{J,i,w_{\alpha}}(a_{\alpha},h^{J,i}_{w_{\alpha}}(a_{\alpha}))\in \mathcal{F}$. Moreover, we have the subset $V^{J,i,s}(F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert}), F^{\mathbf{A}^{i}}_{\sigma}((h^{J,i}_{w_{\alpha}}(a_{\alpha}))_{\alpha\in\lvert w \rvert}))$ of $I$, which, we recall, is $$ \{j\in J\cap\Uparrow\!i\mid f^{j,i}_{s}(F^{\mathbf{A}^{j}}_{\sigma}((a_{\alpha}(j))_{\alpha\in\lvert w \rvert})) = F^{\mathbf{A}^{i}}_{\sigma}((h^{J,i}_{w_{\alpha}}(a_{\alpha}))_{\alpha\in\lvert w \rvert})\}. $$ Then it happens that $$ \textstyle \bigcap_{\alpha\in\lvert w \rvert}V^{J,i,w_{\alpha}}(a_{\alpha},h^{J,i}_{w_{\alpha}}(a_{\alpha}))\subseteq V^{J,i,s}(F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert}), F^{\mathbf{A}^{i}}_{\sigma}((h^{J,i}_{w_{\alpha}}(a_{\alpha}))_{\alpha\in\lvert w \rvert})). $$ Let $j$ be an element of $\bigcap_{\alpha\in\lvert w \rvert}V^{J,i,w_{\alpha}}(a_{\alpha},h^{J,i}_{w_{\alpha}}(a_{\alpha}))$. 
Then, by definition, $i\leq j$ and, for every $\alpha\in \lvert w \rvert$, we have that $f^{j,i}_{w_{\alpha}}(a_{\alpha}(j)) = h^{J,i}_{w_{\alpha}}(a_{\alpha})$. But $f^{j,i}$ is a homomorphism from $\mathbf{A}^{j}$ to $\mathbf{A}^{i}$, thus
\begin{align*}
f^{j,i}_{s}(F^{\mathbf{A}^{j}}_{\sigma}((a_{\alpha}(j))_{\alpha\in\lvert w \rvert})) &= F^{\mathbf{A}^{i}}_{\sigma}(f^{j,i}_{w_{0}}(a_{0}(j)),\ldots,f^{j,i}_{w_{\lvert w \rvert-1}}(a_{\lvert w \rvert-1}(j))) \\
&= F^{\mathbf{A}^{i}}_{\sigma}(h^{J,i}_{w_{0}}(a_{0}),\ldots,h^{J,i}_{w_{\lvert w \rvert-1}}(a_{\lvert w \rvert-1})).
\end{align*}
Moreover, we have that
\begin{align*}
f^{j,i}_{s}(F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert})(j)) &= f^{j,i}_{s}(F^{\mathbf{A}^{j}}_{\sigma}((a_{\alpha}(j))_{\alpha\in\lvert w \rvert})) \\
&= F^{\mathbf{A}^{i}}_{\sigma}(h^{J,i}_{w_{0}}(a_{0}),\ldots,h^{J,i}_{w_{\lvert w \rvert-1}}(a_{\lvert w \rvert-1})).
\end{align*}
Therefore $j\in V^{J,i,s}(F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert}), F^{\mathbf{A}^{i}}_{\sigma}((h^{J,i}_{w_{\alpha}}(a_{\alpha}))_{\alpha\in\lvert w \rvert}))$. Hence, since $\mathcal{F}$ is a filter, we have that $V^{J,i,s}(F^{\mathbf{A}(J)}_{\sigma}((a_{\alpha})_{\alpha\in\lvert w \rvert}), F^{\mathbf{A}^{i}}_{\sigma}((h^{J,i}_{w_{\alpha}}(a_{\alpha}))_{\alpha\in\lvert w \rvert}))\in \mathcal{F}$. So $h^{J,i} = (h^{J,i}_{s})_{s\in S}$ is a homomorphism from $\mathbf{A}(J) = \prod_{j\in J}\mathbf{A}^{j}$ to $\mathbf{A}^{i}$.

After having proved that, for every $i\in I$ and every $J,K\in \mathcal{F}$, if $K\supseteq J$, then the homomorphism $\mathrm{p}^{K,J}$ from $\mathbf{A}(K)$ to $\mathbf{A}(J)$ is such that $h^{J,i}\circ \mathrm{p}^{K,J} = h^{K,i}$ and that $h^{J,i} = (h^{J,i}_{s})_{s\in S}$ is a homomorphism from $\mathbf{A}(J)$ to $\mathbf{A}^{i}$, we can assert, by the universal property of the inductive limit, that, for every $i\in I$, there exists a unique homomorphism $h^{i}$ from $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$ to $\mathbf{A}^{i}$ such that, for every $J\in \mathcal{F}$, $h^{J,i} = h^{i}\circ \mathrm{p}^{J}$, where $\mathrm{p}^{J}$ is the canonical homomorphism from $\mathbf{A}(J)$ to $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$.

Our next goal is to show that, for every $i,k\in I$, if $i\leq k$, then the homomorphisms $f^{k,i}\circ h^{k}$ and $h^{i}$ from $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$ to $\mathbf{A}^{i}$ are equal. To do this, we begin by showing that, for every $J\in \mathcal{F}$ and every $i,k\in I$, if $i\leq k$, then $h^{J,i} = f^{k,i}\circ h^{J,k}$. Let us recall that, for every $s\in S$, the mapping $h^{J,i}_{s}$ from $A(J)_{s}$ to $A^{i}_{s}$ is defined by assigning to $x\in A(J)_{s}$ the unique $y\in A^{i}_{s}$ such that $V^{J,i,s}(x,y)\in \mathcal{F}$, where
$$
V^{J,i,s}(x,y) = \{j\in J\cap\Uparrow\!i\mid f^{j,i}_{s}(x_{j}) = y\}.
$$
It happens that $V^{J,k,s}(x,h^{J,k}_{s}(x))\subseteq V^{J,i,s}(x,f^{k,i}_{s}(h^{J,k}_{s}(x)))$. In fact, let $j$ be an element of $J\cap\Uparrow\!k$ such that $f^{j,k}_{s}(x_{j}) = h^{J,k}_{s}(x)$. Then, since $i\leq k$, we have that $j\in J\cap\Uparrow\!i$. It only remains to verify that $f^{j,i}_{s}(x_{j}) = f^{k,i}_{s}(h^{J,k}_{s}(x))$. But this follows from $f^{j,i} = f^{k,i}\circ f^{j,k}$ and $f^{j,k}_{s}(x_{j}) = h^{J,k}_{s}(x)$. Now, since $V^{J,k,s}(x,h^{J,k}_{s}(x))\in \mathcal{F}$ and $\mathcal{F}$ is a filter, we have that $V^{J,i,s}(x,f^{k,i}_{s}(h^{J,k}_{s}(x)))\in \mathcal{F}$.
Thus, for every $s\in S$ and every $x\in A(J)_{s}$, $h^{J,i}_{s}(x) = f^{k,i}_{s}(h^{J,k}_{s}(x))$. Therefore $h^{J,i} = f^{k,i}\circ h^{J,k}$.

We are now in a position to show that, for $i\leq k$, $f^{k,i}\circ h^{k} = h^{i}$. In fact, we know that given $i,k\in I$ such that $i\leq k$, for every $J\in \mathcal{F}$, $h^{J,i} = f^{k,i}\circ h^{J,k}$, $h^{J,i} = h^{i}\circ \mathrm{p}^{J}$, and $h^{J,k} = h^{k}\circ \mathrm{p}^{J}$ or, what is equivalent, that the outer, the left, and the right triangles of the following diagram commute:
$$\xymatrix@C=40pt@R=40pt{
{} & \mathbf{A}(J)\ar[d]_{\mathrm{p}^{J}}\ar@/_1pc/[ldd]_{h^{J,k}}\ar@/^1pc/[rdd]^{h^{J,i}} & {} \\
{} & \varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F}) \ar[ld]_{h^{k}} \ar[rd]^{h^{i}} & {} \\
\mathbf{A}^{k} \ar[rr]_{f^{k,i}} & {} & \mathbf{A}^{i}
}
$$
Therefore $(f^{k,i}\circ h^{k})\circ \mathrm{p}^{J} = h^{i}\circ \mathrm{p}^{J}$. But any inductive limit is an (extremal epi)-sink, thus $f^{k,i}\circ h^{k} = h^{i}$.

After having proved that, for every $i,k\in I$, if $i\leq k$, then $f^{k,i}\circ h^{k} = h^{i}$, we can assert, by the universal property of the projective limit, that there exists a unique homomorphism $h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}$ from $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$ to $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ such that, for every $i\in I$, $f^{i}\circ h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}} = h^{i}$, where $f^{i}$ is the canonical homomorphism from $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ to $\mathbf{A}^{i}$.

Finally, we proceed to show that $h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}\circ \mathrm{pr}^{\equiv^{\mathcal{F}}}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = \mathrm{id}_{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}$, where, we recall, $\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}$ is the canonical embedding of $\mathbf{A} = \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ into $\prod_{i\in I}\mathbf{A}^{i}$ and $\mathrm{pr}^{\equiv^{\mathcal{F}}}$ the canonical projection from $\prod_{i\in I}\mathbf{A}^{i}$ to $\prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}}$ which, we remark, coincides with $\mathrm{p}^{I}$, the canonical homomorphism from $\mathbf{A}(I) = \prod_{i\in I}\mathbf{A}^{i}$ to $\varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F})$. But $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ is a projective limit and any projective limit is an (extremal mono)-source. Thus, to prove the above equality it suffices to prove that, for every $i\in I$, we have that
$$
f^{i}\circ (h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}\circ \mathrm{pr}^{\equiv^{\mathcal{F}}}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}) = f^{i}\circ \mathrm{id}_{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = f^{i}.
$$
We draw the following picture to provide a visual description of the current situation.
$$\xymatrix@C=50pt@R=50pt{ \textstyle \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}} \ar[r]^{\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}} \ar[rd]_{f^{i}} & \mathbf{A}(I) = \prod_{i\in I}\mathbf{A}^{i} \ar[r]^-{\mathrm{pr}^{\equiv^{\mathcal{F}}} = \mathrm{p}^{I}} \ar @<1ex>[d]^-{h^{I,i}}\ar @<-1ex>[d]_-{\mathrm{pr}^{I,i}}& \prod_{i\in I}\mathbf{A}^{i}/{\equiv}^{\mathcal{F}} \cong \varinjlim_{\boldsymbol{\mathcal{F}}}\boldsymbol{\mathcal{A}}(\mathcal{F}) \ar[d]^{h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}}\ar[dl]_{h^{i}} \\ {} & \mathbf{A}^{i} & \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}\ar[l]^{f^{i}} } $$ Let $i$ be an element of $I$. Then, as we have shown before, $f^{i}\circ h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}} = h^{i}$ and $h^{i}\circ \mathrm{pr}^{\equiv^{\mathcal{F}}} = h^{i}\circ \mathrm{p}^{I} = h^{I,i}$. And, by definition of the canonical homomorphism $f^{i}$ of the projective limit $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$, we have that $\mathrm{pr}^{I,i}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = f^{i}$. Thus it only remains to prove that $h^{I,i}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = \mathrm{pr}^{I,i}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}$. Let $s$ be an element of $S$ and $x$ an element of the $s$-th component of the underlying $S$-sorted set of $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$. Then, taking into account that $\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}_{s}(x) = x$ and $\mathrm{pr}^{I,i}_{s}(\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}_{s}(x)) = x_{i}$, the sets $$ V^{I,i,s}(\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}_{s}(x), \mathrm{pr}^{I,i}_{s}(\mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}_{s}(x))) = \{j\in I\cap\Uparrow\!i\mid f^{j,i}_{s}(x_{j}) = x_{i}\} $$ and $\Uparrow\!i$ are, obviously, equal. But $\Uparrow\!i\in \mathcal{F}$. Hence $h^{I,i}_{s}(x) = x_{i} = \mathrm{pr}^{I,i}_{s}(x)$. Therefore $h^{I,i}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = \mathrm{pr}^{I,i}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = f^{i}$. We are now able to assert that $h^{(\mathbf{I},\mathcal{F}),\boldsymbol{\mathcal{A}}}\circ \mathrm{pr}^{\equiv^{\mathcal{F}}}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}} = \mathrm{id}_{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}}$, thereby completing the proof. \end{proof} \begin{remark} If, following L. Ribes and P. Zalesskii in~\cite{rz00}, but for many-sorted algebras, one defines a profinite $\Sigma$-algebra as a projective limit of a projective system of finite $\Sigma$-algebras $\boldsymbol{\mathcal{A}}$ relative to a nonempty upward directed \emph{poset} $\mathbf{I}$ such that the transition homomorphisms of $\boldsymbol{\mathcal{A}}$ are \emph{surjective}, then the just proved theorem still holds, since, by Proposition~\ref{propssupport}, the surjectivity of the transition homomorphisms entails that the $I$-indexed family of $\Sigma$-algebras $(\mathbf{A}^{i})_{i\in I}$ is with constant support. This fact, we think, shows the naturalness of the condition imposed on $(\mathbf{A}^{i})_{i\in I}$. 
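By way of illustration only (the example is not used elsewhere), let $S$ be a one-element set and $\Sigma$ a single-sorted signature, e.g., that of groups, and consider the projective system $((\mathbb{Z}/2^{n}\mathbb{Z})_{n\in\mathbb{N}},(f^{n,m})_{(m,n)\in\leq})$ relative to $(\mathbb{N},\leq)$, where, for $m\leq n$, $f^{n,m}$ is the canonical projection from $\mathbb{Z}/2^{n}\mathbb{Z}$ onto $\mathbb{Z}/2^{m}\mathbb{Z}$. All the transition homomorphisms are surjective, so the family trivially has constant support, and, for every ultrafilter $\mathcal{F}$ on $\mathbb{N}$ containing the filter of the final sections of $(\mathbb{N},\leq)$, Proposition~\ref{MS Mariano and Miraglia} exhibits the profinite $\Sigma$-algebra $\varprojlim_{n\in\mathbb{N}}\mathbb{Z}/2^{n}\mathbb{Z}$, i.e., the $2$-adic integers, as a retract of $\prod_{n\in\mathbb{N}}(\mathbb{Z}/2^{n}\mathbb{Z})/{\equiv^{\mathcal{F}}}$.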
\end{remark}

\section{A category-theoretic view of the many-sorted version of the Mariano-Miraglia theorem.}

Our objective in this section is to provide a categorial rendering of the many-sorted version of the Mariano-Miraglia theorem stated in the previous section. To that purpose, by means of the Grothendieck construction for a covariant functor $\mathrm{Uffs}$ from the category $\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}$, of nonempty upward directed preordered sets and injective, isotone, and cofinal mappings between them, to the category of sets, we consider the category $\mathbf{Uffs} = \int_{\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}}\mathrm{Uffs}$, in which the objects are the pairs formed by a nonempty upward directed preordered set and an ultrafilter containing the filter of its final sections. Specifically, we show that there exists a functor from the category $\mathbf{Uffs}$ whose object mapping assigns to an object of it a natural transformation between two functors from a suitable category of projective systems of $\Sigma$-algebras to the category of $\Sigma$-algebras, which is a retraction. This is precisely the category-theoretic counterpart of the aforementioned theorem. But before doing that, since it will prove to be necessary later, we next recall that, given a mapping $\varphi$ from a nonempty set $I$ to another set $P$ and an ultrafilter $\mathcal{F}$ on $I$, the co-optimal lift of $\varphi\colon (I,\mathcal{F})\usebox{\xymor} P$ is an ultrafilter on $P$.

\begin{proposition}\label{co-optimal lift Ulf}
Let $I$ be a nonempty set, $\mathcal{F}$ an ultrafilter on $I$, and $\varphi$ a mapping from $I$ to a set $P$. Then
$$
\mathcal{F}_{\varphi[\![\mathcal{F}]\!]} = \{Q\subseteq P\mid \exists\,J\in \mathcal{F}\,(\varphi[J]\subseteq Q)\},
$$
the co-optimal lift of $\varphi\colon (I,\mathcal{F})\usebox{\xymor} P$, i.e., the filter on $P$ generated by the filter basis $\varphi[\![\mathcal{F}]\!] = \{\varphi[J]\mid J\in \mathcal{F}\}$ on $P$, is an ultrafilter on $P$.
\end{proposition}

We warn the reader that in what follows the assumption at the beginning of the above section remains in force, i.e., we assume that all preordered sets are nonempty and upward directed.

To achieve the previously mentioned objective we start by defining a convenient category, $\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}$, and then a suitable functor, $\mathrm{Uffs}$, from it to $\mathbf{Set}$, from which, by means of the Grothendieck construction, we will obtain the category $\int_{\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}}\mathrm{Uffs}$, which is at the basis of the aforesaid categorial rendering.

\begin{definition}
We denote by $\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}$ the category whose objects are the preordered sets $\mathbf{I}$ and whose morphisms from $\mathbf{I}$ to $\mathbf{P}$ are the injective, isotone, and cofinal mappings $\varphi$ from $\mathbf{I}$ to $\mathbf{P}$ (recall that $\varphi$ is cofinal if for every $p\in P$ there exists an $i\in I$ such that $p\leq \varphi(i)$).
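By way of illustration, the mapping $\varphi$ from $(\mathbb{N},\leq)$ to $(\mathbb{N},\leq)$ defined by $\varphi(n) = 2n$ is injective, isotone, and cofinal, hence a morphism of $\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}$, while, e.g., the constant mapping from $(\mathbb{N},\leq)$ to $(\mathbb{N},\leq)$ with value $0$ is isotone but neither injective nor cofinal.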
\end{definition}

\begin{proposition}
There exists a functor $\mathrm{Uffs}$ from $\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}$ to $\mathbf{Set}$ which sends $\mathbf{I}$ to $\mathrm{Uffs}(\mathbf{I}) = \{\mathcal{F}\in \mathrm{Ufilt}(I)\mid \{\Uparrow\!i\mid i\in I\}\subseteq \mathcal{F}\}$, where $\mathrm{Ufilt}(I)$ is the set of all ultrafilters on $I$, and $\varphi\colon\mathbf{I}\usebox{\xymor}\mathbf{P}$ to the mapping $\mathrm{Uffs}(\varphi)$ from $\mathrm{Uffs}(\mathbf{I})$ to $\mathrm{Uffs}(\mathbf{P})$ that assigns to each $\mathcal{F}$ in $\mathrm{Uffs}(\mathbf{I})$ precisely $\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}$ in $\mathrm{Uffs}(\mathbf{P})$.
\end{proposition}

\begin{proof}
We begin by proving that $\mathrm{Uffs}(\varphi)$ is well defined. This is so because, on the one hand, by Proposition~\ref{co-optimal lift Ulf}, $\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}$ is an ultrafilter on $P$ and, on the other hand, since $\varphi$ is isotone and cofinal, the filter basis $\{\Uparrow\!p\mid p\in P\}$ is included in $\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}$ (for every $p\in P$ there exists an $i\in I$ such that $p\leq \varphi(i)$, and then $\varphi[\Uparrow\!i]\subseteq \Uparrow\!p$). Since, evidently, $\mathrm{Uffs}$ preserves identities, let us show that if $\psi$ is a morphism from $\mathbf{P}$ to $\mathbf{W}$, then $\mathrm{Uffs}(\psi\circ\varphi) = \mathrm{Uffs}(\psi)\circ\mathrm{Uffs}(\varphi)$, i.e., for every $\mathcal{F}\in\mathrm{Uffs}(\mathbf{I})$, we have that $\mathcal{F}_{(\psi\circ\varphi)[\![\mathcal{F}]\!]} = \mathcal{F}_{\psi[\![\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}]\!]}$. Let $\mathcal{F}$ be an element of $\mathrm{Uffs}(\mathbf{I})$ and $X\subseteq W$ an element of $\mathcal{F}_{(\psi\circ\varphi)[\![\mathcal{F}]\!]}$. Then there exists a $J\in \mathcal{F}$ such that $\psi[\varphi[J]]\subseteq X$. Therefore, for $Q = \varphi[J]\in \mathcal{F}_{\varphi[\![\mathcal{F}]\!]}$, we have that $\psi[Q]\subseteq X$. Hence $X\in\mathcal{F}_{\psi[\![\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}]\!]}$. Thus $\mathcal{F}_{(\psi\circ\varphi)[\![\mathcal{F}]\!]} \subseteq\mathcal{F}_{\psi[\![\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}]\!]}$. But $\mathcal{F}_{(\psi\circ\varphi)[\![\mathcal{F}]\!]}$ is an ultrafilter on $W$; consequently, $\mathcal{F}_{(\psi\circ\varphi)[\![\mathcal{F}]\!]} = \mathcal{F}_{\psi[\![\mathcal{F}_{\varphi[\![\mathcal{F}]\!]}]\!]}$.
\end{proof}

\begin{definition}
We denote by $\mathbf{Uffs}$ the category $\int_{\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}}\mathrm{Uffs}$ (obtained by means of the Grothendieck construction for the covariant functor $\mathrm{Uffs}$) whose objects are the ordered pairs $(\mathbf{I},\mathcal{F}_{\mathbf{I}})$ where $\mathbf{I}$ is an object of $\mathbf{UdPros}_{\neq\varnothing,\mathrm{cof}}^{\mathrm{inj}}$ and $\mathcal{F}_{\mathbf{I}}\in \mathrm{Uffs}(\mathbf{I})$, i.e., an ultrafilter on $I$ such that the filter of the final sections of $\mathbf{I}$ is contained in $\mathcal{F}_{\mathbf{I}}$, and whose morphisms from $(\mathbf{I},\mathcal{F}_{\mathbf{I}})$ to $(\mathbf{P},\mathcal{F}_{\mathbf{P}})$ are the injective, isotone, and cofinal mappings $\varphi$ from $\mathbf{I}$ to $\mathbf{P}$ such that $\mathcal{F}_{\varphi[\![\mathcal{F}_{\mathbf{I}}]\!]} = \mathcal{F}_{\mathbf{P}}$.
\end{definition}

\begin{proposition}\label{NatTrans}
Let $(\mathbf{I},\mathcal{F}_{\mathbf{I}})$ be an object of the category $\mathbf{Uffs}$.
Then we have the functor $\varprojlim_{\mathbf{I}}\colon \mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}\usebox{\xymor} \mathbf{Alg}(\Sigma)$ which sends a projective system $\boldsymbol{\mathcal{A}} = ((\mathbf{A}^{i})_{i\in I},(f^{j,i})_{(i,j)\in \leq})$ relative to $\mathbf{I}$ to $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ and a morphism $u = (u^{i})_{i\in I}$ from $\boldsymbol{\mathcal{A}}$ to $\boldsymbol{\mathcal{B}} = ((\mathbf{B}^{i})_{i\in I},(g^{j,i})_{(i,j)\in \leq})$ to the homomorphism $\varprojlim_{\mathbf{I}}u$ from $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}$ to $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{B}}$. Moreover, we have the functor $D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\colon \mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}\usebox{\xymor} \mathbf{Alg}(\Sigma)^{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}$ which sends a projective system $\boldsymbol{\mathcal{A}}$ relative to $\mathbf{I}$ to the inductive system $\boldsymbol{\mathcal{A}}(\mathcal{F}_{\mathbf{I}})$ relative to $\boldsymbol{\mathcal{F}_{\mathbf{I}}}$ and a morphism $u$ from $\boldsymbol{\mathcal{A}}$ to $\boldsymbol{\mathcal{B}}$ to the morphism $(u(J))_{J\in \mathcal{F}_{\mathbf{I}}}$ from $\boldsymbol{\mathcal{A}}(\mathcal{F}_{\mathbf{I}})$ to $\boldsymbol{\mathcal{B}}(\mathcal{F}_{\mathbf{I}})$. In addition, we have the functor $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\colon \mathbf{Alg}(\Sigma)^{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\usebox{\xymor} \mathbf{Alg}(\Sigma)$. Therefore, we have the functors $\varprojlim_{\mathbf{I}}$ and $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ from $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}$ to $\mathbf{Alg}(\Sigma)$. If we denote by $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}}$ the full subcategory of $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}$ determined by the projective systems $\boldsymbol{\mathcal{A}}$ relative to $\mathbf{I}$ such that $(\mathbf{A}^{i})_{i\in I}$ is with constant support and, for every $i\in I$, $\mathbf{A}^{i}$ is finite, and, for simplicity of notation, we let $\varprojlim_{\mathbf{I}}$ and $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ stand for the restrictions to $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}}$ of the previous functors $\varprojlim_{\mathbf{I}}$ and $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$, then it happens that $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}} = (h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}})_{\boldsymbol{\mathcal{A}}\in \mathrm{Ob}(\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}})}$ is a natural transformation from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ to $\varprojlim_{\mathbf{I}}$, i.e., for every morphism $u$ from $\boldsymbol{\mathcal{A}}$ to $\boldsymbol{\mathcal{B}}$, the following diagram commutes:
$$\xymatrix@C=40pt@R=40pt{
\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}(\mathcal{F}_{\mathbf{I}}) \ar[d]_{\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}} (u(J))_{J\in \mathcal{F}_{\mathbf{I}}}} \ar[r]^-{h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}}} & \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}} \ar[d]^{\varprojlim_{\mathbf{I}}u} \\
\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{B}}(\mathcal{F}_{\mathbf{I}}) \ar[r]_-{h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{B}}}} & \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{B}}
}
$$
Moreover, we have that $(\mathrm{p}^{I}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}})_{\boldsymbol{\mathcal{A}}\in \mathrm{Ob}(\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}})}$ is a natural transformation from $\varprojlim_{\mathbf{I}}$ to $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ and a right inverse for $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}$, i.e.,
$$
h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\circ (\mathrm{p}^{I}\circ \mathrm{in}^{\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}})_{\boldsymbol{\mathcal{A}}\in \mathrm{Ob}(\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}})} = \mathrm{id}_{\varprojlim_{\mathbf{I}}},
$$
where $\mathrm{id}_{\varprojlim_{\mathbf{I}}}$ is the identity natural transformation at the functor $\varprojlim_{\mathbf{I}}$.
\end{proposition}

\begin{proof}
We restrict ourselves to showing that $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}$ is a natural transformation from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ to $\varprojlim_{\mathbf{I}}$. Let $u = (u^{i})_{i\in I}$ be a morphism from $\boldsymbol{\mathcal{A}}$ to $\boldsymbol{\mathcal{B}}$. We claim that $\varprojlim_{\mathbf{I}} u\circ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}} = h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{B}}}\circ \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}} (u(J))_{J\in \mathcal{F}_{\mathbf{I}}}$. Indeed, this follows from the following facts: (1) $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}(\mathcal{F}_{\mathbf{I}})$ is an (extremal epi)-sink, (2) $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{B}}$ is an (extremal mono)-source, and (3), for every $J\in \mathcal{F}_{\mathbf{I}}$ and every $i\in I$, the homomorphisms $u^{i}\circ h^{J,i}$ and $h^{J,i}\circ u(J)$ from $\mathbf{A}(J)$ to $\mathbf{B}^{i}$ are equal, where, by abuse of notation, we have used the same symbol $h^{J,i}$ for the homomorphisms from $\mathbf{A}(J)$ to $\mathbf{A}^{i}$ and from $\mathbf{B}(J)$ to $\mathbf{B}^{i}$. With regard to the last fact, we recall that, for $s\in \mathrm{supp}_{S}(A^{i})$, $x\in A(J)_{s}$, and $y\in A^{i}_{s}$, $V^{J,i,s}(x,y) = \{j\in J\cap\Uparrow\!i\mid f^{j,i}_{s}(x_{j}) = y\}$ and that $h^{J,i}_{s}(x) = y$ if, and only if, $V^{J,i,s}(x,y)\in \mathcal{F}_{\mathbf{I}}$. Thus, for $j\in V^{J,i,s}(x,y)$, since, by hypothesis, $u$ is a morphism from $\boldsymbol{\mathcal{A}}$ to $\boldsymbol{\mathcal{B}}$, we have that $g^{j,i}_{s}(u^{j}_{s}(x_{j})) = u^{i}_{s}(f^{j,i}_{s}(x_{j})) = u^{i}_{s}(y)$, and so $j\in V^{J,i,s}((u^{j}_{s}(x_{j}))_{j\in J},u^{i}_{s}(y))$. Hence $V^{J,i,s}((u^{j}_{s}(x_{j}))_{j\in J},u^{i}_{s}(y))\in \mathcal{F}_{\mathbf{I}}$, i.e., $h^{J,i}_{s}((u^{j}_{s}(x_{j}))_{j\in J}) = u^{i}_{s}(y)$. Therefore $h^{J,i}\circ u(J) = u^{i}\circ h^{J,i}$. Consequently, since both sides of the claimed equality yield, for every $J\in \mathcal{F}_{\mathbf{I}}$ and every $i\in I$, the same homomorphism after precomposing with $\mathrm{p}^{J}$ and postcomposing with $f^{i}$, it follows from (1) and (2) that $\varprojlim_{\mathbf{I}} u\circ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}} = h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{B}}}\circ \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}} (u(J))_{J\in \mathcal{F}_{\mathbf{I}}}$.
\end{proof} \begin{conventions} In what follows, for simplicity of notation, given a functor $F$ from $\mathbf{A}$ to $\mathbf{B}$ and a natural transformation $\eta$ from $G$ to $H$, where $G$ and $H$ are functors from $\mathbf{B}$ to $\mathbf{C}$, $\eta\ast F$ stands for $\eta\ast \mathrm{id}_{F}$, the horizontal composition of $\mathrm{id}_{F}$ and $\eta$, where $\mathrm{id}_{F}$ is the identity natural transformation at $F$, and we write $F\circ F$ for $ \mathrm{id}_{F}\circ \mathrm{id}_{F}$, the vertical composition of $\mathrm{id}_{F}$ with itself. Moreover, if $\mathbf{X}$ and $\mathbf{Y}$ are subcategories of $\mathbf{A}$ and $\mathbf{B}$, respectively, and there exists the bi-restriction of $F$ to $\mathbf{X}$ and $\mathbf{Y}$, then we denote it briefly by $F$. \end{conventions} \begin{proposition}\label{NatTrans p and q} Let $\varphi\colon (\mathbf{I},\mathcal{F}_{\mathbf{I}}) \usebox{\xymor}(\mathbf{P},\mathcal{F}_{\mathbf{P}})$ be a morphism in $\mathbf{Uffs}$. Then $\varphi$ determines a functor $\mathrm{Alg}(\Sigma)^{\varphi}\colon \mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}\usebox{\xymor} \mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}$ which assigns to a projective system $\boldsymbol{\mathcal{A}} = ((\mathbf{A}^{p})_{p\in P},(f^{q,p})_{(p,q)\in \leq})$ in $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}$ the projective system $\boldsymbol{\mathcal{A}}^{\varphi} = ((\mathbf{A}^{\varphi(i)})_{i\in I},(f^{\varphi(j),\varphi(i)})_{(i,j)\in \leq})$ in $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}$ and to a morphism $u$ from $\boldsymbol{\mathcal{A}}$ to $\boldsymbol{\mathcal{B}}$ in $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}$ the morphism $u^{\varphi} = (u^{\varphi(i)})_{i\in I}$ from $\boldsymbol{\mathcal{A}}^{\varphi}$ to $\boldsymbol{\mathcal{B}}^{\varphi}$ in $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}$. Therefore, for the categories $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}}$ and $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}}$, since there exists the bi-restriction of the functor $\mathrm{Alg}(\Sigma)^{\varphi}$ to them and, by Proposition~\ref{NatTrans}, $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}$ is a natural transformation from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ to $\varprojlim_{\mathbf{I}}$, we have a natural transformation $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}^{\varphi} (= h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{id}_{\mathrm{Alg}^{\varphi}})$ from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\varphi}$ to $\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}$. Moreover, there exists a natural transformation $\mathfrak{p}^{\varphi}$ from $\varprojlim_{\mathbf{P}}$ to $\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}$. On the other hand, also by Proposition~\ref{NatTrans}, for $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}}$, we have a natural transformation $h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}$ from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}$ to $\varprojlim_{\mathbf{P}}$. 
Besides, there exists a natural transformation $\mathfrak{q}^{\varphi}$ from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\varphi}$ to $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}$.
\end{proposition}

\begin{proof}
Let $\boldsymbol{\mathcal{A}}$ be a projective system in $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}}$. Then, by the universal property of $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\varphi}$, since, for every $(i,j)\in \leq$, $f^{\varphi(i)} = f^{\varphi(j),\varphi(i)}\circ f^{\varphi(j)}$, there exists a unique homomorphism $\mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}}$ from $\varprojlim_{\mathbf{P}}\boldsymbol{\mathcal{A}}$ to $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\varphi}$ such that, for every $i\in I$, $f^{\varphi,i}\circ \mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}} = f^{\varphi(i)}$, where $f^{\varphi,i}$ is the canonical homomorphism from $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\varphi}$ to $\mathbf{A}^{\varphi(i)}$, and then $\mathfrak{p}^{\varphi} = (\mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}})_{\boldsymbol{\mathcal{A}}\in \mathrm{Ob}(\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}})}$ is, obviously, a natural transformation from $\varprojlim_{\mathbf{P}}$ to $\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}$. By a similar argument, but for inductive limits, the existence of $\mathfrak{q}^{\varphi}$ follows.
\end{proof}

\begin{proposition}\label{Cylinder equation}
Let $\varphi\colon (\mathbf{I},\mathcal{F}_{\mathbf{I}}) \usebox{\xymor}(\mathbf{P},\mathcal{F}_{\mathbf{P}})$ be a morphism in $\mathbf{Uffs}$.
Then, by restricting to $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}}$ and $\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}}$, we have that $$ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}^{\varphi} = \mathfrak{p}^{\varphi}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi}, $$ i.e., in the following diagram the involved natural transformations satisfy the just stated equation: $$ \xymatrix@C=86pt@R=38pt{ \mathbf{Alg}(\Sigma) \ar[rd]^{\mathrm{Id}_{\mathbf{Alg}(\Sigma)}}="T" \ar@/^60pt/[dd];[]|(0.50)*+[l]{\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\varphi}}="Kl" \ar@/_40pt/[dd];[]|*+[r]{\varprojlim_{\mathbf{I}}\circ\mathrm{Alg}(\Sigma)^{\varphi}}="Kl'" \\ & \mathbf{Alg}(\Sigma) \ar@/^44pt/[dd];[]|(0.50)*+[l]{\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}}="Kr" \ar@/_40pt/[dd];[]|*+[r]{\varprojlim_{\mathbf{P}}}="Kr'" \\ \mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}} \ar[rd]_{\mathrm{Id}_{\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}}}}="T'" \\ & \mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}} \ar @{} "Kl";"Kl'"|{\dir{==>}}^*+{\hspace{0.6cm}h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast\mathrm{Alg}(\Sigma)^{\varphi}} \ar @{} "Kr";"Kr'"|{\dir{==>}}^*+{h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}} \ar @{} "Kl";"Kr"|*:a(-0)@_{==>}^*+{\mathfrak{q}^{\varphi}} \ar @{} "Kl'";"Kr'"|*:a(-180)@_{==>}^*+{\hspace{0.2cm}{\mathfrak{p}^{\varphi}}} } $$ \end{proposition} \begin{proof} Let $\boldsymbol{\mathcal{A}}$ be a projective system in $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}}$. We want to show that the homomorphisms $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}^{\varphi}}$ and $\mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\mathcal{A}}}\circ \mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}}$ from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}^{\varphi}(\mathcal{F}_{\mathbf{I}})$ to $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\varphi}$ are identical. To this end, taking into account that $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\varphi}$ is an (extremal mono)-source, it suffices to verify that, for every $i\in I$, $f^{\varphi,i}\circ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}^{\varphi}}$ is identical to $f^{\varphi,i}\circ \mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\mathcal{A}}}\circ \mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}}$, where $f^{\varphi,i}$ is the canonical homomorphism from $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\varphi}$ to $\mathbf{A}^{\varphi(i)}$. Moreover, one should bear in mind that, since $\varphi$ is, in particular injective, for every $J\in \mathcal{F}_{\mathbf{I}}$, the $\Sigma$-algebras $\prod_{j\in J}\mathbf{A}^{\varphi(j)}$ and $\prod_{\varphi(j)\in \varphi[J]}\mathbf{A}^{\varphi(j)}$ are isomorphic. 
We know that $f^{\varphi,i}\circ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\mathcal{A}}^{\varphi}} = h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}$, where $h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}$ is the unique homomorphism from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}^{\varphi}(\mathcal{F}_{\mathbf{I}})$ to $\mathbf{A}^{\varphi(i)}$ such that, for every $J\in \mathcal{F}_{\mathbf{I}}$, $h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}\circ \mathrm{p}^{J} = h^{\varphi[J],\varphi(i)}$. On the other hand, by definition of $\mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}}$, we have that $f^{\varphi,i}\circ \mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}} = f^{\varphi(i)}$. Moreover, since $h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\mathcal{A}}}$ is the unique homomorphism from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\boldsymbol{\mathcal{A}}(\mathcal{F}_{\mathbf{P}})$ to $\varprojlim_{\mathbf{P}}\boldsymbol{\mathcal{A}}$ such that, for every $p\in P$, $f^{p}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\mathcal{A}}} = h^{\boldsymbol{\mathcal{A}},p}$, where $f^{p}$ is the canonical homomorphism from $\varprojlim_{\mathbf{P}}\boldsymbol{\mathcal{A}}$ to $\mathbf{A}^{p}$, we have that, for every $i\in I$, taking $p = \varphi(i)$, it happens that $f^{\varphi(i)}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\mathcal{A}}} = h^{\boldsymbol{\mathcal{A}},\varphi(i)}$. Now, from $\mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}}$, which is the unique homomorphism from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}^{\varphi}(\mathcal{F}_{\mathbf{I}})$ to $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\boldsymbol{\mathcal{A}}(\mathcal{F}_{\mathbf{P}})$ such that, for every $J\in \mathcal{F}_{\mathbf{I}}$, $\mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}}\circ \mathrm{p}^{J} = \mathrm{p}^{\varphi[J]}$ (recall that $\prod_{j\in J}\mathbf{A}^{\varphi(j)}\cong\prod_{\varphi(j)\in \varphi[J]}\mathbf{A}^{\varphi(j)}$), we obtain the homomorphism $h^{\boldsymbol{\mathcal{A}},\varphi(i)}\circ \mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}}$ from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}^{\varphi}(\mathcal{F}_{\mathbf{I}})$ to $\mathbf{A}^{\varphi(i)}$. But it also happens that $h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}$ is a homomorphism from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}^{\varphi}(\mathcal{F}_{\mathbf{I}})$ to $\mathbf{A}^{\varphi(i)}$. Therefore, since $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\boldsymbol{\mathcal{A}}^{\varphi}(\mathcal{F}_{\mathbf{I}})$ is an (extremal epi)-sink, to show that $h^{\boldsymbol{\mathcal{A}},\varphi(i)}\circ \mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}} = h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}$ it suffices to prove that, for every $J\in \mathcal{F}_{\mathbf{I}}$, the homomorphisms $h^{\boldsymbol{\mathcal{A}},\varphi(i)}\circ \mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}}\circ \mathrm{p}^{J}$ and $h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}\circ \mathrm{p}^{J}$ from $\prod_{j\in J}\mathbf{A}^{\varphi(j)}\cong\prod_{\varphi(j)\in \varphi[J]}\mathbf{A}^{\varphi(j)}$ to $\mathbf{A}^{\varphi(i)}$ are equal. But both homomorphisms are identical to $h^{\varphi[J],\varphi(i)}$.
Therefore $h^{\boldsymbol{\mathcal{A}},\varphi(i)}\circ \mathfrak{q}^{\varphi}_{\boldsymbol{\mathcal{A}}} = h^{\boldsymbol{\mathcal{A}}^{\varphi},\varphi(i)}$. \end{proof} \begin{proposition} Let $\varphi\colon (\mathbf{I},\mathcal{F}_{\mathbf{I}}) \usebox{\xymor}(\mathbf{P},\mathcal{F}_{\mathbf{P}})$ and $\psi\colon (\mathbf{P},\mathcal{F}_{\mathbf{P}}) \usebox{\xymor}(\mathbf{W},\mathcal{F}_{\mathbf{W}})$ be two morphisms in $\mathbf{Uffs}$. Then, from the functors $\mathrm{Alg}(\Sigma)^{\varphi}$ and $\mathrm{Alg}(\Sigma)^{\psi}$, we obtain the functor: $$ \mathrm{Alg}(\Sigma)^{\psi\circ \varphi} = \mathrm{Alg}(\Sigma)^{\varphi}\circ \mathrm{Alg}(\Sigma)^{\psi}\colon \mathbf{Alg}(\Sigma)^{\mathbf{W}^{\mathrm{op}}}\usebox{\xymor}\mathbf{Alg}(\Sigma)^{\mathbf{I}^{\mathrm{op}}}. $$ Moreover, we have the following natural transformations: \begin{enumerate} \item $\mathfrak{p}^{\varphi}\colon\varprojlim_{\mathbf{P}}\usebox{\xycel}\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}$, \item $\mathfrak{q}^{\varphi}\colon\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\varphi}\usebox{\xycel}\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}$, \item $\mathfrak{p}^{\psi}\colon\varprojlim_{\mathbf{W}}\usebox{\xycel}\varprojlim_{\mathbf{P}}\circ \mathrm{Alg}(\Sigma)^{\psi}$, \item $\mathfrak{q}^{\psi}\colon\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}\circ \mathrm{Alg}(\Sigma)^{\psi}\usebox{\xycel} \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{W}}}}\circ D_{(\mathbf{W},\mathcal{F}_{\mathbf{W}})}$, \item $\mathfrak{p}^{\psi\circ \varphi}\colon\varprojlim_{\mathbf{W}}\usebox{\xycel}\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\psi\circ \varphi}$, \item $\mathfrak{q}^{\psi\circ \varphi}\colon\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\psi\circ \varphi}\usebox{\xycel} \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{W}}}}\circ D_{(\mathbf{W},\mathcal{F}_{\mathbf{W}})}$, \item $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\varphi}\colon \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\varphi}\usebox{\xycel}\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}$, \item $h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\psi}\colon \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}\circ \mathrm{Alg}(\Sigma)^{\psi}\usebox{\xycel}\varprojlim_{\mathbf{P}}\circ \mathrm{Alg}(\Sigma)^{\psi}$, \item $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\psi\circ\varphi}\colon \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ \mathrm{Alg}(\Sigma)^{\psi\circ\varphi}\usebox{\xycel}\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\psi\circ\varphi}$, \text{and} \item $h^{(\mathbf{W},\mathcal{F}_{\mathbf{W}}),\boldsymbol{\cdot}}\colon \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{W}}}}\circ D_{(\mathbf{W},\mathcal{F}_{\mathbf{W}})}\usebox{\xycel}\varprojlim_{\mathbf{W}}$. 
\end{enumerate} Then, from $\mathfrak{p}^{\varphi}\colon\varprojlim_{\mathbf{P}}\usebox{\xycel}\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}$ and the functor $\mathrm{Alg}(\Sigma)^{\psi}$, we obtain the natural transformation: $$ \textstyle \mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi}\colon\varprojlim_{\mathbf{P}}\circ \mathrm{Alg}(\Sigma)^{\psi}\usebox{\xycel}\varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}\circ \mathrm{Alg}(\Sigma)^{\psi}, $$ and, from $\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi}$ and $\mathfrak{p}^{\psi}$, we obtain the natural transformation: $$ \textstyle (\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ \mathfrak{p}^{\psi}\colon \varprojlim_{\mathbf{W}}\usebox{\xycel} \varprojlim_{\mathbf{I}}\circ \mathrm{Alg}(\Sigma)^{\varphi}\circ \mathrm{Alg}(\Sigma)^{\psi}. $$ Similarly, from $\mathfrak{q}^{\varphi}\colon\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ\mathrm{Alg}(\Sigma)^{\varphi}\usebox{\xycel} \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}$ and the functor $\mathrm{Alg}(\Sigma)^{\psi}$, we obtain the natural transformation: $$ \textstyle \mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi}\colon\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ\mathrm{Alg}(\Sigma)^{\varphi}\circ \mathrm{Alg}(\Sigma)^{\psi}\usebox{\xycel}\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}\circ \mathrm{Alg}(\Sigma)^{\psi}, $$ and, from $\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi}$ and $\mathfrak{q}^{\psi}$, we obtain the natural transformation: $$ \textstyle \mathfrak{q}^{\psi}\circ(\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\colon \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}\circ\mathrm{Alg}(\Sigma)^{\varphi}\circ \mathrm{Alg}(\Sigma)^{\psi} \usebox{\xycel}\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{W}}}}\circ D_{(\mathbf{W},\mathcal{F}_{\mathbf{W}})}. $$ Then it happens that $\mathfrak{p}^{\psi\circ\varphi} = (\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ \mathfrak{p}^{\psi}$ and $\mathfrak{q}^{\psi\circ\varphi} = \mathfrak{q}^{\psi}\circ(\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})$. Therefore $$ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\psi\circ\varphi} = \mathfrak{p}^{\psi\circ\varphi}\circ h^{(\mathbf{W},\mathcal{F}_{\mathbf{W}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\psi\circ\varphi}. $$ \end{proposition} \begin{proof} To show that $\mathfrak{p}^{\psi\circ\varphi} = (\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ \mathfrak{p}^{\psi}$ it suffices to verify that, for every projective system $\boldsymbol{\mathcal{A}}$ in $\mathbf{Alg}(\Sigma)^{\mathbf{W}^{\mathrm{op}}}_{\mathrm{f,cs}}$, the homomorphisms $$ \textstyle ((\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ \mathfrak{p}^{\psi})_{\boldsymbol{\mathcal{A}}} = \mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}^{\psi}}\circ\mathfrak{p}^{\psi}_{\boldsymbol{\mathcal{A}}},\,\, \mathfrak{p}^{\psi\circ\varphi}_{\boldsymbol{\mathcal{A}}}\colon \varprojlim_{\mathbf{W}}\boldsymbol{\mathcal{A}}\usebox{\xymor} \varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\psi\circ\varphi} $$ are equal. 
But it happens that $\mathfrak{p}^{\psi\circ\varphi}_{\boldsymbol{\mathcal{A}}}$ is the unique homomorphism from $\varprojlim_{\mathbf{W}}\boldsymbol{\mathcal{A}}$ to $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\psi\circ\varphi}$ such that, for every $i\in I$, $f^{\psi\circ\varphi,i}\circ \mathfrak{p}^{\psi\circ\varphi}_{\boldsymbol{\mathcal{A}}} = f^{\psi(\varphi(i))}$, where $f^{\psi\circ\varphi,i}$ is the canonical homomorphism from $\varprojlim_{\mathbf{I}}\boldsymbol{\mathcal{A}}^{\psi\circ\varphi}$ to $\mathbf{A}^{\psi(\varphi(i))}$, and, for every $i\in I$, we have that \begin{align} f^{\psi\circ\varphi,i}\circ (\mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}^{\psi}}\circ\mathfrak{p}^{\psi}_{\boldsymbol{\mathcal{A}}}) &= (f^{\psi\circ\varphi,i}\circ\mathfrak{p}^{\varphi}_{\boldsymbol{\mathcal{A}}^{\psi}}) \circ\mathfrak{p}^{\psi}_{\boldsymbol{\mathcal{A}}} \notag \\ &= f^{\psi,\varphi(i)}\circ \mathfrak{p}^{\psi}_{\boldsymbol{\mathcal{A}}} \notag \\ &= f^{\psi(\varphi(i))}. \notag \end{align} Therefore $((\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ \mathfrak{p}^{\psi})_{\boldsymbol{\mathcal{A}}} = \mathfrak{p}^{\psi\circ\varphi}_{\boldsymbol{\mathcal{A}}}$. Hence $\mathfrak{p}^{\psi\circ\varphi} = (\mathfrak{p}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ \mathfrak{p}^{\psi}$. By a similar argument it follows that $\mathfrak{q}^{\psi\circ\varphi} = \mathfrak{q}^{\psi}\circ(\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})$. It remains to show that $$ h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\psi\circ\varphi} = \mathfrak{p}^{\psi\circ\varphi}\circ h^{(\mathbf{W},\mathcal{F}_{\mathbf{W}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\psi\circ\varphi}. $$ But we have that \begin{alignat}{2} \mathfrak{p}^{\psi\circ\varphi}\circ h^{(\mathbf{W},\mathcal{F}_{\mathbf{W}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\psi\circ\varphi} &= ((\mathfrak{p}^{\varphi}\ast\mathrm{Alg}(\Sigma)^{\psi})\circ\mathfrak{p}^{\psi})\circ h^{(\mathbf{W},\mathcal{F}_{\mathbf{W}}),\boldsymbol{\cdot}}\circ (\mathfrak{q}^{\psi}\circ(\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi})) & & \text{(by def.)}\notag \\ &= (\mathfrak{p}^{\varphi}\ast\mathrm{Alg}(\Sigma)^{\psi})\circ(\mathfrak{p}^{\psi}\circ h^{(\mathbf{W},\mathcal{F}_{\mathbf{W}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\psi})\circ(\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi}) & & \text{(by ass.)} \notag \\ &= (\mathfrak{p}^{\varphi}\ast\mathrm{Alg}(\Sigma)^{\psi})\circ(h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\psi})\circ(\mathfrak{q}^{\varphi}\ast \mathrm{Alg}(\Sigma)^{\psi}) & & \text{(by def.)} \notag \\ &= (\mathfrak{p}^{\varphi}\ast\mathrm{Alg}(\Sigma)^{\psi})\circ ((h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi})\ast \mathrm{Alg}(\Sigma)^{\psi}) & & \hspace{-1.15cm}\text{(Godement law)} \notag \\ &= (\mathfrak{p}^{\varphi}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi})\ast \mathrm{Alg}(\Sigma)^{\psi}& & \hspace{-1.15cm}\text{(Godement law)} \notag \\ &= (h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast\mathrm{Alg}(\Sigma)^{\varphi})\ast\mathrm{Alg}(\Sigma)^{\psi} & & \text{(by def.)}\notag \\ &= h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}\ast \mathrm{Alg}(\Sigma)^{\psi\circ\varphi} & & \hspace{-1.35cm}\text{(by ass. 
and def.)}\notag \end{alignat} Regarding the natural transformations annotated ``Godement law'', in the equations listed above, one should take into account that, for the functor $\mathrm{Alg}(\Sigma)^{\psi}$ from $\mathbf{Alg}(\Sigma)^{\mathbf{W}^{\mathrm{op}}}$ to $\mathbf{Alg}(\Sigma)^{\mathbf{P}^{\mathrm{op}}}$, since, as a particular case of the conventions stated just before Proposition~\ref{NatTrans p and q}, $$ \mathrm{Alg}(\Sigma)^{\psi}\circ \mathrm{Alg}(\Sigma)^{\psi} = \mathrm{id}_{\mathrm{Alg}(\Sigma)^{\psi}}\circ \mathrm{id}_{\mathrm{Alg}(\Sigma)^{\psi}} = \mathrm{id}_{\mathrm{Alg}(\Sigma)^{\psi}} = \mathrm{Alg}(\Sigma)^{\psi}, $$ we have that \begin{gather} (h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi})\ast \mathrm{Alg}(\Sigma)^{\psi} = (h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi})\ast (\mathrm{Alg}(\Sigma)^{\psi}\circ \mathrm{Alg}(\Sigma)^{\psi}) \text{ and} \notag \\ (\mathfrak{p}^{\varphi}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi})\ast \mathrm{Alg}(\Sigma)^{\psi} = (\mathfrak{p}^{\varphi}\circ h^{(\mathbf{P},\mathcal{F}_{\mathbf{P}}),\boldsymbol{\cdot}}\circ \mathfrak{q}^{\varphi})\ast (\mathrm{Alg}(\Sigma)^{\psi}\circ \mathrm{Alg}(\Sigma)^{\psi}).\notag \end{gather} \end{proof} We would like to conclude this article by pointing out that, from the above results and taking into account the work done in~\cite{cs10}, it seems to us that a generalization of the results stated in this section to a 2-categorial setting is feasible. Let us begin by noticing that the above category-theoretic rendering of the theorem of Mariano and Miraglia has been done by fixing a pair $\mathbf{\Sigma} = (S,\Sigma)$, where $S$ is a set of sorts and $\Sigma$ an $S$-sorted signature. In doing so we have assigned to every object $(\mathbf{I},\mathcal{F}_{\mathbf{I}})$ of $\mathbf{Uffs}$ a natural transformation $h^{(\mathbf{I},\mathcal{F}_{\mathbf{I}}),\boldsymbol{\cdot}}$, and to every morphism $\varphi$ from $(\mathbf{I},\mathcal{F}_{\mathbf{I}})$ to $(\mathbf{P},\mathcal{F}_{\mathbf{P}})$ in $\mathbf{Uffs}$ a pair of natural transformations $(\mathfrak{p}^{\varphi},\mathfrak{q}^{\varphi})$ satisfying the equation stated in Proposition~\ref{Cylinder equation}. Moreover, we have shown that such a correspondence is, in fact, a functor. Faced with such a situation, the next natural step would be to investigate what happens if one allows the variation of $\mathbf{\Sigma} = (S,\Sigma)$. In this regard, we would note that there exists a contravariant functor $\mathrm{Sig}$ from $\mathbf{Set}$ to $\mathbf{Cat}$. 
Its object mapping sends each set of sorts $S$ to $\mathrm{Sig}(S) = \mathbf{Sig}(S)$ (= $\mathbf{Set}^{S^{\star}\times S}$), the category of all $S$-sorted signatures; its arrow mapping sends each mapping $\alpha$ from $S$ to $T$ to the functor $\mathrm{Sig}(\alpha)$ from $\mathbf{Sig}(T)$ to $\mathbf{Sig}(S)$ which relabels $T$-sorted signatures into $S$-sorted signatures, i.e., $\mathrm{Sig}(\alpha)$ assigns to a $T$-sorted signature $\Lambda\colon T^{\star}\times T\usebox{\xymor} \boldsymbol{\mathcal{U}}$ the $S$-sorted signature $\mathrm{Sig}(\alpha)(\Lambda) = \Lambda_{\alpha^{\star}\times\alpha}$, where $\Lambda_{\alpha^{\star}\times\alpha}$ is the composition of $\alpha^{\star}\times\alpha\colon S^{\star}\times S\usebox{\xymor} T^{\star}\times T$ and $\Lambda$, and assigns to a morphism of $T$-sorted signatures $d$ from $\Lambda$ to $\Lambda'$ the morphism of $S$-sorted signatures $\mathrm{Sig}(\alpha)(d) = d_{\alpha^{\star}\times \alpha}$ from $\Lambda_{\alpha^{\star}\times \alpha}$ to $\Lambda'_{\alpha^{\star}\times \alpha}$. Then the category $\mathbf{Sig}$, of \emph{many-sorted signatures} and \emph{many-sorted signature morphisms}, is given by $\mathbf{Sig} = \int^{\mathbf{Set}}\mathrm{Sig}$. Therefore $\mathbf{Sig}$ has as objects the pairs $\mathbf{\Sigma} = (S,\Sigma)$, where $S$ is a set of sorts and $\Sigma$ an $S$-sorted signature, and, as many-sorted signature morphisms from $\mathbf{\Sigma} = (S,\Sigma)$ to $\mathbf{\Lambda} = (T,\Lambda)$, the pairs $\mathbf{d} = (\alpha,d)$, where $\alpha\colon S\usebox{\xymor} T$ is a morphism in $\mathbf{Set}$ while $d\colon \Sigma\usebox{\xymor} \Lambda_{\alpha^{\star}\times \alpha}$ is a morphism in $\mathbf{Sig}(S)$ (for details see~\cite{cs10}). Moreover, there exists a contravariant functor $\mathrm{Alg}$ from $\mathbf{Sig}$ to $\mathbf{Cat}$. Its object mapping sends each signature $\mathbf{\Sigma}$ to $\mathrm{Alg}(\mathbf{\Sigma}) = \mathbf{Alg}(\mathbf{\Sigma})$, the category of $\mathbf{\Sigma}$-algebras; its arrow mapping sends each signature morphism $\mathbf{d}\colon\mathbf{\Sigma}\usebox{\xymor}\mathbf{\Lambda}$ to the functor $\mathrm{Alg}(\mathbf{d}) = \mathbf{d}^{\ast}\colon \mathbf{Alg}(\mathbf{\Lambda})\usebox{\xymor} \mathbf{Alg}(\mathbf{\Sigma})$ defined as follows: its object mapping sends each $\mathbf{\Lambda}$-algebra $\mathbf{B} = (B,G)$ to the $\mathbf{\Sigma}$-algebra $\mathbf{d}^{\ast}(\mathbf{B}) = (B_{\alpha},G^{\mathbf{d}})$, where $B_{\alpha}$ is $(B_{\alpha(s)})_{s\in S}$ and $G^{\mathbf{d}}$ is the composition of the $S^{\star}\times S$-sorted mappings $d$ from $\Sigma$ to $\Lambda_{\alpha^{\star}\times \alpha}$ and $G_{\alpha^{\star}\times \alpha}$ from $\Lambda_{\alpha^{\star}\times \alpha}$ to $\mathcal{O}_{T}(B)_{\alpha^{\star}\times \alpha}$, where $\mathcal{O}_{T}(B)$ stands for the $T^{\star}\times T$-sorted set $(\mathrm{Hom}(B_{u},B_{t}))_{(u,t)\in T^{\star}\times T}$, of the finitary operations on the $T$-sorted set $B$; its arrow mapping sends each $\mathbf{\Lambda}$-homomorphism $f$ from $\mathbf{B}$ to $\mathbf{B}'$ to the $\mathbf{\Sigma}$-homomorphism $\mathbf{d}^{\ast}(f) = f_{\alpha}$ from $\mathbf{d}^{\ast}(\mathbf{B})$ to $\mathbf{d}^{\ast}(\mathbf{B}')$, where $f_{\alpha}$ is $(f_{\alpha(s)})_{s\in S}$. Then the category $\mathbf{Alg}$, of \emph{many-sorted algebras} and \emph{many-sorted algebra homomorphisms}, is given by $\mathbf{Alg} = \int^{\mathbf{Sig}}\mathrm{Alg}$. 
Therefore the category $\mathbf{Alg}$ has as objects the pairs $(\mathbf{\Sigma},\mathbf{A})$, where $\mathbf{\Sigma}$ is a signature and $\mathbf{A}$ a $\mathbf{\Sigma}$-algebra, and as morphisms from $(\mathbf{\Sigma},\mathbf{A})$ to $(\mathbf{\Lambda},\mathbf{B})$, the pairs $(\mathbf{d},f)$, with $\mathbf{d}$ a signature morphism from $\mathbf{\Sigma}$ to $\mathbf{\Lambda}$ and $f$ a $\mathbf{\Sigma}$-homomorphism from $\mathbf{A}$ to $\mathbf{d}^{\ast}(\mathbf{B})$ (for details see~\cite{cs10}). Thus, the new goal would be to assign to an object $((\mathbf{I},\mathcal{F}_{\mathbf{I}}),\mathbf{\Sigma})$ of the category $\mathbf{Uffs}\times \mathbf{Sig}$ a natural transformation $h^{((\mathbf{I},\mathcal{F}_{\mathbf{I}}),\mathbf{\Sigma}),\boldsymbol{\cdot}}$ from $\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}$ to $\varprojlim_{\mathbf{I}}$, and to a morphism $(\varphi,\mathbf{d})$ from $((\mathbf{I},\mathcal{F}_{\mathbf{I}}),\mathbf{\Sigma})$ to $((\mathbf{P},\mathcal{F}_{\mathbf{P}}),\mathbf{\Lambda})$ a suitable pair of natural transformations $(\mathfrak{p}^{(\varphi,\mathbf{d})},\mathfrak{q}^{(\varphi,\mathbf{d})})$, where \begin{gather} \textstyle \mathfrak{p}^{(\varphi,\mathbf{d})}\colon \mathbf{d}^{\ast}\ast \varprojlim_{\mathbf{P}}\usebox{\xycel} \varprojlim_{\mathbf{I}}\ast((\mathbf{d}^{\ast})^{\mathbf{I}^{\mathrm{op}}}\circ\mathrm{Alg}(\mathbf{\Lambda})^{\varphi}) \text{ and} \notag \\ \textstyle \mathfrak{q}^{(\varphi,\mathbf{d})}\colon (\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})})\ast ((\mathbf{d}^{\ast})^{\mathbf{I}^{\mathrm{op}}}\circ\mathrm{Alg}(\mathbf{\Lambda})^{\varphi})\usebox{\xycel} \mathbf{d}^{\ast}\ast \varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}.\notag \end{gather} To assist the reader in identifying the just stated natural transformations, we add the following diagram: $$ \xymatrix@C=96pt@R=65pt{ \mathbf{Alg}(\mathbf{\Sigma})^{\mathbf{I}^{\mathrm{op}}}_{\mathrm{f,cs}} \ar@/^20pt/[r]^{\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{I}}}}\circ D_{(\mathbf{I},\mathcal{F}_{\mathbf{I}})}}="a" \ar@/_20pt/[r]_{\varprojlim_{\mathbf{I}}}="b" & \mathbf{Alg}(\mathbf{\Sigma}) \\ \mathbf{Alg}(\mathbf{\Lambda})^{\mathbf{P}^{\mathrm{op}}}_{\mathrm{f,cs}} \ar[u]^{(\mathbf{d}^{\ast})^{\mathbf{I}^{\mathrm{op}}}\circ\mathrm{Alg}(\mathbf{\Lambda})^{\varphi}} \ar@/^20pt/[r]^{\varinjlim_{\boldsymbol{\mathcal{F}_{\mathbf{P}}}}\circ D_{(\mathbf{P},\mathcal{F}_{\mathbf{P}})}}="c" \ar@/_20pt/[r]_{\varprojlim_{\mathbf{P}}}="d" & \mathbf{Alg}(\mathbf{\Lambda})\ar[u]_{\mathbf{d}^{\ast}} \ar @{} "a";"b" |{\dir{=>}}^{\,h^{((\mathbf{I},\mathcal{F}_{\mathbf{I}}),\mathbf{\Sigma}),\boldsymbol{\cdot}}} \ar @{}"c";"d" |{\dir{=>}}^{\,h^{((\mathbf{P},\mathcal{F}_{\mathbf{P}}),\mathbf{\Lambda}),\boldsymbol{\cdot}}} } $$ Moreover, since for two morphisms $(\varphi,\mathbf{d}),\,(\varphi',\mathbf{d}')\colon((\mathbf{I},\mathcal{F}_{\mathbf{I}}),\mathbf{\Sigma})\usebox{\xymor} ((\mathbf{P},\mathcal{F}_{\mathbf{P}}),\mathbf{\Lambda})$ there exists a natural notion of 2-cell from $\mathbf{d}$ to $\mathbf{d}'$ (for details see~\cite{cs10}) and an obvious notion of 2-cell from $\varphi$ to $\varphi'$ (actually, there exists a 2-cell from $\varphi$ to $\varphi'$ if, and only if, for every $i\in I$, $\varphi(i)\leq \varphi'(i)$), we have 2-cells from $(\varphi,\mathbf{d})$ to $(\varphi',\mathbf{d}')$, and, surely, the process described above would be 
2-categorial. \end{document}
arXiv
A drawer in a darkened room contains $100$ red socks, $80$ green socks, $60$ blue socks and $40$ black socks. A youngster selects socks one at a time from the drawer but is unable to see the color of the socks drawn. What is the smallest number of socks that must be selected to guarantee that the selection contains at least $10$ pairs? (A pair of socks is two socks of the same color. No sock may be counted in more than one pair.) $\textbf{(A)}\ 21\qquad \textbf{(B)}\ 23\qquad \textbf{(C)}\ 24\qquad \textbf{(D)}\ 30\qquad \textbf{(E)}\ 50$ To guarantee even a single pair one must draw $5$ socks, since $4$ socks could all be of different colors. More generally, at any moment at most $4$ of the selected socks can be unpaired (at most one leftover sock of each of the $4$ colors), so a selection containing at most $9$ pairs has at most $2\cdot 9+4=22$ socks. This bound is attained: for example, $19$ red socks together with one sock of each of the other three colors give exactly $9$ pairs. Hence $22$ socks are not enough, while any selection of $23$ socks must contain at least $10$ pairs. The answer is $\boxed{23}$.
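The count can also be double-checked by brute force. The short Python sketch below (written only as a sanity check; the variable names are arbitrary) finds the largest selection containing at most $9$ pairs and confirms that it has $22$ socks, so $23$ is indeed the answer.

from itertools import product

# Available socks: 100 red, 80 green, 60 blue, 40 black.
counts = [100, 80, 60, 40]

# A color contributing 20 or more socks already yields 10 pairs on its own,
# so in any selection with at most 9 pairs each color appears at most 19 times.
caps = [min(c, 19) for c in counts]

best = 0
for chosen in product(*(range(c + 1) for c in caps)):
    pairs = sum(n // 2 for n in chosen)  # each color gives floor(n/2) pairs
    if pairs <= 9:
        best = max(best, sum(chosen))

print(best)      # 22: largest selection with at most 9 pairs
print(best + 1)  # 23: this many socks force at least 10 pairs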
Math Dataset
\begin{definition}[Definition:Faithful Module] Let $R$ be a commutative ring with unity. Let $M$ be an $R$-module. Then $M$ is '''faithful''' {{iff}} its annihilator is zero. \end{definition}
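A standard example illustrating the definition: the ring $\mathbb{Z}$, viewed as a module over itself, is faithful, while the $\mathbb{Z}$-module $\mathbb{Z}/n\mathbb{Z}$ (for $n \ge 1$) is not, since its annihilator is
$$\operatorname{Ann}_{\mathbb{Z}} (\mathbb{Z}/n\mathbb{Z}) = n\mathbb{Z} \ne \{0\}.$$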
ProofWiki
\begin{document} \title{On the duality of moduli in arbitrary codimension} \author{Atte Lohvansuu} \let\thefootnote\relax\footnote{\emph{Mathematics Subject Classification 2010:} Primary 30L10, Secondary 30C65, 28A75, 51F99.} \thanks{The author was supported by the Academy of Finland, grant no. 308659, and also by the Vilho, Yrj\"o and Kalle V\"ais\"al\"a foundation.} \begin{abstract} We study the duality of moduli of $k$- and $(n-k)$-dimensional slices of euclidean $n$-cubes, and establish the optimal upper bound 1. \end{abstract} \maketitle \section{Introduction and the main result} Suppose $D\subset{\mathbb R}^2$ is a Jordan domain, whose boundary is divided into four segments $\zeta_1, \ldots, \zeta_4$, in cyclic order. Let $\Gamma(\zeta_1, \zeta_3; D)$ be the family of all paths of $D$ that connect $\zeta_1$ and $\zeta_3$. Then for every $1<p<\infty$ \begin{equation}\label{eq:polku1} (\textnormal{mod}_p\Gamma(\zeta_1, \zeta_3; D))^{1/p}(\textnormal{mod}_{q}\Gamma(\zeta_2, \zeta_4; D))^{1/q}=1. \end{equation} Here $q=\frac{p}{p-1}$ and the $p$-modulus of a path family $\Gamma$ is defined by \[ \textnormal{mod}_p\Gamma=\inf_\rho\int_D\rho^p\, d\mathcal{H}^2, \] where the infimum is taken over all positive Borel-functions $\rho$ with \[ \int_\gamma\rho\, ds\geqslant 1 \] for every locally rectifiable path $\gamma\in\Gamma$. For conformal moduli, that is $p=2=q$, the duality \eqref{eq:polku1} was already known to Beurling and Ahlfors, see e.g. \cite[Lemma 4]{AhlforsBeurling1950} and \cite[Ch. 14]{AhlforsSarioRS}, although instead of moduli they considered their reciprocals, called \emph{extremal lengths}. For general $p$ the identity \eqref{eq:polku1} follows from the results of \cite{Ziemer1967}. It has found applications in connection with uniformization theorems \cite{Rajala2017, Ikonen2019} and Sobolev extension domains \cite{Zhang2020}. The \emph{duality of moduli} phenomenon \eqref{eq:polku1} is also present in euclidean spaces \cite{FreedmanHe1991, Gehring1962, Ziemer1967} of higher dimension and sufficiently regular metric spaces \cite{JonesLahti2019, Lohvansuu2020, LohvansuuRajala2018}. For example, in \cite{Ziemer1967} it is shown that \begin{equation}\label{eq:polku2} (\textnormal{mod}_p\Gamma(E, F; G))^{1/p}(\textnormal{mod}_{q}\Gamma^*(E, F; G))^{1/q}=1, \end{equation} where $G\subset{\mathbb R}^n$ is open and connected, $E$ and $F$ are disjoint, compact and connected subsets of $G$ and $\Gamma^*(E, F; G)$ is the set of all compact sets of $G$ that separate $E$ from $F$. The modulus of separating sets is a natural generalization of the definition of the path modulus. See Section \ref{section:preli} for definitions of moduli and other concepts appearing in the introduction. Separating sets are generally of codimension $1$, so \eqref{eq:polku1} and \eqref{eq:polku2} deal with objects of either dimension or codimension 1. In fact, this is a common theme in all of the results cited above. However, an observation by Freedman and He (see the discussion after Theorem 2.5 in \cite{FreedmanHe1991}) hints that a similar duality result could be true for objects of higher (co)dimension as well. In this paper we explore this question in the setting of cubes of ${\mathbb R}^n$. Moduli of higher (co)dimensional objects have appeared in \cite{HeinonenWu2010, PankkaWu2014}, where the nonexistence of quasisymmetric parametrizations of certain spaces was established. 
Indeed, one of the main motivations for studying more general moduli is finding tools to approach parametrization problems in higher dimensions. Our first problem is defining suitable classes of $k$- and $(n-k)$-dimensional objects, since simple descriptions such as ``connecting paths" or ``separating surfaces" do not seem to exist. We follow \cite{FreedmanHe1991} and define the objects as representatives of certain relative homology classes. For example, in the context of \eqref{eq:polku1} we can think of the paths of $\Gamma(\zeta_1, \zeta_3; D)$ as singular relative cycles, that are representatives of either generator of $H_1(D, \zeta_1\cup\zeta_3)\simeq\mathbb{Z}$. Since we also want to integrate over the chains, we need to assume some regularity. For this reason we will consider Lipschitz chains instead of singular chains. Let $Q\subset {\mathbb R}^n$ be a compact set homeomorphic to the closed unit $n$-cube $I^n$. Fix a homeomorphism $h: Q\rightarrow I^n$ and an integer $0<k<n$ and let \[ A=h^{-1}(\partial I^k\times I^{n-k})\text{ and } B=h^{-1}(I^k\times\partial I^{n-k}). \] Then $A$ and $B$ are $(n-1)$-dimensional submanifolds of $\partial Q$ with $\partial Q=A\cup B$ and $\partial A=A\cap B=\partial B$. We assume that $A, B$ and $Q$ are locally Lipschitz neighborhood retracts. This includes triples $(Q, A, B)$ that are smooth or polygonal, and cubes that are images of the standard cube under biLipschitz automorphisms of ${\mathbb R}^n$. We denote the Lipschitz homology groups by $H^L_*$. We consider only groups with integer coefficients. This notation should not be confused with the Hausdorff measures, which are denoted by $\mathcal{H}^*$. Note that \[ H_k^L(Q, A)\simeq\mathbb{Z}\simeq H^L_{n-k}(Q, B), \] since the same is true for singular homology, and the two homology theories are equivalent for pairs of locally Lipschitz retracts (see Lemma \ref{lemma:liphomo}). Let $\Gamma_A$ (resp. $\Gamma_B$) be the collection of the images of relative Lipschitz $k$-cycles of $Q-B$ that generate $H_k^L(Q, A)$ (($n-k$)-cycles of $Q-A$ that generate $H_{n-k}^L(Q, B)$). Define \[ \textnormal{mod}_p\Gamma_A:=\inf_\rho\int_Q\rho^p\, d\mathcal{H}^n, \] where the infimum is taken over positive Borel-functions $\rho$, for which \[ \int_S\rho\, d\mathcal{H}^k\geqslant 1 \] for every $S\in\Gamma_A$. The moduli $\textnormal{mod}_p\Gamma_B$ are defined analogously. In this paper we will prove the following upper bound. \begin{theorem}\label{thm:main} For every $1<p<\infty$ \[ (\textnormal{mod}_p\Gamma_A)^{1/p}(\textnormal{mod}_{q}\Gamma_B)^{1/q}\leqslant 1, \] where $q=\frac{p}{p-1}$. \end{theorem} It is unknown, whether Theorem \ref{thm:main} holds with an equality. We will prove Theorem \ref{thm:main} in Section \ref{sec:proof}. A similar result for de Rham cohomology classes, with an equality, is proved in the setting of Riemannian manifolds in pages 212-213 of \cite{FreedmanHe1991}. The assumption on $Q$, $A$ and $B$ being locally Lipschitz neighborhood retracts can be relaxed. The proof of Theorem \ref{thm:main} only requires that there exists a pair of Lipschitz chains that generate $H_k(Q, A)$ and $H_{n-k}(Q, B)$. The assumption on retracts was chosen for its simplicity and its use in \cite{GMT}. It is also likely that such minimal assumptions on the upper bound of Theorem \ref{thm:main} are not sufficient for the corresponding lower bound. We will discuss the lower bound in Section \ref{section:remarks}. In light of the results of \cite[Ch. 
4]{GMT}, it would be interesting to know whether analogues of Theorem \ref{thm:main} hold for homology classes of integral currents. \section{Definitions}\label{section:preli} \subsection{Lipschitz homology} Let us recall the definition and basic properties of the integral homology groups. See e.g. \cite{DoldAT, HatcherAT} or other texts on basic algebraic topology for more comprehensive treatment. For an integer $k\geqslant 0$ the \emph{standard $k$-simplex} $\Delta_k$ is the convex hull of the standard unit vectors $e_0, \ldots, e_k$ of ${\mathbb R}^{k+1}$. Given a metric space $(X, d)$, a \emph{singular $k$-simplex} is a continuous map from $\Delta_k$ to $X$. Finite formal linear combinations \[ \sigma=\sum_ik_i\sigma_i \] of singular $k$-simplices $\sigma_i$ with integer coefficients $k_i$ are called \emph{singular $k$-chains}. Singular $k$-chains of $X$ form a free abelian group denoted by $C_k(X)$. The \emph{boundary} $\partial\sigma$ of a singular $k$-simplex $\sigma$ is the singular $(k-1)$-chain \[ \partial\sigma=\sum_{i=0}^k(-1)^i\sigma\circ F^i_k, \] where $F^i_k: \Delta_{k-1}\rightarrow \Delta_k$ is the unique linear map that maps each $e_j$ to $e_j$ for $j<i$ and to $e_{j+1}$ for $j\geqslant i$. For singular $0$-simplices we set $\partial\sigma=0$. The boundary defines a collection of homomorphisms $\partial: C_k(X)\rightarrow C_{k-1}(X)$, all denoted by the same symbol $\partial$. Then $\partial\partial=0$. The \emph{image} of a singular $k$-simplex $\sigma$ is the compact set $|\sigma|=\sigma(\Delta_k)$. The image of a $k$-chain $\sigma=\sum_ik_i\sigma_i$ is the compact set $|\sigma|=\bigcup_i|\sigma_i|$. Given a subspace $Y\subset X$, we identify each singular simplex $\sigma$ of $Y$ with the singular simplex $i_Y\circ\sigma$ of $X$, where $i_Y: Y\hookrightarrow X$ is the inclusion map. We define the groups of \emph{relative chains} by \[ C_k(X, Y):=\frac{C_k(X)}{C_k(Y)}, \] with the convention $C_k(X, \emptyset)=C_k(X)$. The boundary map induces homomorphisms $\partial: C_k(X, Y)\rightarrow C_{k-1}(X, Y)$, which are again denoted by the same symbol. A chain $\sigma\in C_k(X)$ is called a \emph{cycle} relative to $Y$, if $\partial\sigma\in C_{k-1}(Y)$, or simply a relative cycle if the choice of $Y$ is clear from the context. Similarly, $\sigma$ is called a relative \emph{boundary} if $\sigma=\partial\sigma'+\sigma''$, where $\sigma'\in C_{k+1}(X)$ and $\sigma''\in C_k(Y)$. The \emph{singular relative homology groups} of the pair $(X, Y)$ are the quotient groups \[ H_k(X, Y):=\frac{\mathrm{ker}(\partial: C_k(X, Y)\rightarrow C_{k-1}(X, Y))}{\mathrm{im}(\partial: C_{k+1}(X, Y)\rightarrow C_k(X, Y))}. \] The homology groups of $X$ are the groups $H_k(X):=H_k(X, \emptyset)$. The homology class of a (relative) chain $\sigma$ is denoted by $[\sigma]$. The homology classes of $H_k(X, Y)$ are represented by relative $k$-cycles, and two relative $k$-cycles define the same class if and only if their difference is a relative boundary. If $X'$ is another metric space with a subset $Y'$, and $f: X\rightarrow X'$ is a continuous map with $f(Y)\subset Y'$, we denote by $f_*$ the induced homomorphisms $f_*: C_k(X, Y)\rightarrow C_k(X', Y')$, and also the homomorphisms $f_*: H_k(X, Y)\rightarrow H_k(X', Y')$. These are given by $f_*\sigma=f\circ\sigma$ for singular simplices, $f_*\sum_ik_i\sigma_i=\sum_ik_if_*\sigma_i$ for chains and $f_*[\sigma]=[f_*\sigma]$ for homology classes. 
Given a continuous homotopy $H: X\times I\rightarrow X'$ with $H(Y\times I)\subset Y'$, there exists a sequence of homomorphisms \[ P: C_k(X, Y)\rightarrow C_{k+1}(X', Y'), \] such that \begin{equation}\label{eq:homotopyformula} H_{1*}-H_{0*}=P\partial+\partial P. \end{equation} Here $H_t(x)=H(x, t)$. Formula \eqref{eq:homotopyformula} is called the \emph{homotopy formula}. A continuous $f: X\rightarrow Y$ is called a \emph{retraction} if $f\circ i_Y=\text{id}_Y$. The set $Y$ is then called a \emph{retract} of $X$. If $Y$ is a retract of one of its neighborhoods in $X$, it is called a \emph{neighborhood retract}. The corresponding objects in the Lipschitz category are obtained by replacing each occurrence of ``singular" or ``continuous" with ``Lipschitz". The homotopies involved in these definitions are then required to be Lipschitz with respect to the metric $d((x, t), (x', t'))=d(x, x')+|t-t'|$. We denote the groups of Lipschitz chains by $C_*^L(X, Y)$ and the Lipschitz homology groups by $H_*^L(X, Y)$. We define \emph{locally} Lipschitz objects similarly. However, due to compactness there is often no difference between the corresponding objects of Lipschitz and locally Lipschitz categories. \begin{lemma}\label{lemma:liphomo} Let $Y\subset X\subset{\mathbb R}^n$ be locally Lipschitz neighborhood retracts. Then the inclusions \[ i: C_*^L(X, Y)\hookrightarrow C_*(X, Y) \] induce isomorphisms on homology. \end{lemma} Lemma \ref{lemma:liphomo} follows from a more general result \cite[Cor. 11.1.2]{Riedweg2011}, which holds for pairs of \emph{locally Lipschitz contractible} metric spaces. It is straightforward to show that the existence of locally Lipschitz neighborhood retractions implies locally Lipschitz contractibility. \iffalse Let $f: U\rightarrow Q$ and $g:V\rightarrow A$ be Lipschitz retractions. Let $H_f(x, t)=tf(x)+(1-t)x$. We define $H_g$ analogously. We may assume that for some open $V''\subset V'\subset V$ we have $H_f(Q\times I)\subset U$, $H_f(A\times I)\subset V''$, $H_g(V''\times I)\subset V'$ and $H_g(V'\times I)\subset V$. It suffices to show that for a given relative cycle $\sigma\in C_l(Q)$ with $\partial\sigma\in C_{l-1}(A)$, there is a chain $\sigma^L\in C_l^L(Q)$ with $\partial\sigma^L\in C_{l-1}^L(A)$ and $[\sigma^L-\sigma]=[0]\in H_l(Q, A)$. Let $\sigma$ be such a relative cycle. Composing the smoothing operator of \cite{LeeSmooth} with $f_*$ and applying the homotopy formula on $H_f$, we obtain a Lipschitz chain $\sigma_1$ with $\sigma_1-\sigma=\partial\eta_1+\eta_2$, where $\sigma_1,\eta_1\in C_*(U)$ and $\partial\sigma_1, \eta_2\in C_*(V'')$. By the homotopy formula of $H_g$ we find a chain $\sigma_2\in C^L_{l-1}(V')$, such that $g_*\partial\sigma_1-\partial\sigma_1=\partial\sigma_2$. We let \[ \sigma^L:=f_*(\sigma_1+\sigma_2). \] Backtracking the definitions shows that this suffices. To show that $[\sigma^L-\sigma]=[0]$, we note that \[ \eta_2+\sigma_2=\partial\eta_3+\eta_4 \] by the homotopy formula of $H_g$, where $\eta_3\in C_{l-1}(V)$ and $\eta_4\in C_{l-1}(A)$. 
\fi \subsection{Modulus} Given a $1<p<\infty$ and a family $\mathcal{M}$ of Borel measures of ${\mathbb R}^n$, the \emph{$p$-modulus} of $\mathcal{M}$ is the number \begin{equation}\label{eq:modulidefinition} \textnormal{mod}_p\mathcal{M}:=\inf_{\rho}\int_{{\mathbb R}^n}\rho^p\, d\mathcal{H}^n, \end{equation} where the infimum is taken over all Borel functions $\rho: {\mathbb R}^n\rightarrow [0, \infty)$ with \begin{equation}\label{eq:moduliehto} \int_{{\mathbb R}^n}\rho\, d\nu\geqslant 1 \end{equation} for every $\nu\in\mathcal{M}$. Such functions are called \emph{admissible} for $\mathcal{M}$. If there exists a subfamily $\mathcal{N}\subset\mathcal{M}$ such that $\textnormal{mod}_p\mathcal{N}=0$ and \eqref{eq:moduliehto} holds for all $\nu\in\mathcal{M}-\mathcal{N}$, we say that $\rho$ is \emph{$p$-weakly admissible} or simply \emph{weakly admissible} if the choice of $p$ is clear from the context. It follows that the infimum in \eqref{eq:modulidefinition} does not change if we take it over $p$-weakly admissible functions instead. Let us list some useful properties of the modulus. \begin{lemma}\label{lemma:modulilemma} Let $\mathcal{M}$ be a collection of Borel measures of ${\mathbb R}^n$. Let $1<p<\infty$. \begin{enumerate}[i)] \item If $\rho_i$ are $p$-integrable Borel functions that converge to a function $\rho$ in $L^p$, there exists a subsequence $(\rho_{i_j})_j$ for which \[ \int_{{\mathbb R}^n}\rho_{i_j}\, d\nu\overset{j\rightarrow\infty}{\longrightarrow}\int_{{\mathbb R}^n}\rho\, d\nu \] for almost every $\nu\in\mathcal{M}$. In particular, Borel representatives of $L^p$-limits of admissible functions are weakly admissible. \item If $\textnormal{mod}_p\mathcal{M}<\infty$, then \[ \textnormal{mod}_p\mathcal{M}=\int_{{\mathbb R}^n}\rho^p\, d\mathcal{H}^n \] for a weakly admissible minimizer $\rho$, unique up to sets of $\mathcal{H}^n$-measure zero. Moreover, \[ \textnormal{mod}_p\mathcal{M}\leqslant \int_{{\mathbb R}^n}\phi\rho^{p-1}\, d\mathcal{H}^n \] for any other $p$-integrable weakly admissible $\phi$. \item If $\mathcal{M}=\bigcup_{i=1}^\infty\mathcal{M}_i$ with $\mathcal{M}_i\subset\mathcal{M}_{i+1}$ for all $i$, then \[ \textnormal{mod}_p\mathcal{M}=\lim_{i\rightarrow\infty}\textnormal{mod}_p\mathcal{M}_i. \] \end{enumerate} \end{lemma} Claim $i)$ is often referred to as \emph{Fuglede's lemma}. Proofs for $i)$ and the first part of $ii)$ can be found in \cite[Thm. 3]{Fuglede1957}. The second part of $ii)$ and $iii)$ are generalizations of \cite[Lemma 5.2]{LohvansuuRajala2018} and \cite[Lemma 2.3]{Ziemer1969}, respectively. The same proofs apply. In this paper we abbreviate \[ \textnormal{mod}_p\Gamma_A=\textnormal{mod}_p\{\mathcal{H}^k\mres S\ |\ S\in\Gamma_A\}, \] and \[ \textnormal{mod}_q\Gamma_B=\textnormal{mod}_q\{\mathcal{H}^{n-k}\mres S^*\ |\ S^*\in\Gamma_B\}. \] \subsection{Rectifiable sets} A subset of ${\mathbb R}^n$ is \emph{$k$-rectifiable} if it is covered by the image of a subset of ${\mathbb R}^k$ under a Lipschitz map. A subset of ${\mathbb R}^n$ is \emph{countably $k$-rectifiable} if $\mathcal{H}^k$-almost all of it is contained in a countable union of $k$-rectifiable sets. See e.g. \cite{GMT, SimonGMT} for basic theory on rectifiable sets. Note that the definition of countable rectifiability in \cite[3.2.14]{GMT} is slightly different from ours. Let us record some useful facts on rectifiable sets. The following Fubini-type lemma is an application of \cite[3.2.23]{GMT} and \cite[2.6.2]{GMT}. 
\begin{lemma}\label{lemma:fubini} Suppose $S^*$ is a countably $k$-rectifiable subset of ${\mathbb R}^n$ and $S$ is a countable union of $l$-rectifiable subsets of ${\mathbb R}^m$. Then $S^*\times S$ is a countably $(k+l)$-rectifiable subset of ${\mathbb R}^{n}\times{\mathbb R}^m$, and \[ \int_{S^*\times S}g(x, y)\, d\mathcal{H}^{k+l}(x, y)=\int_{S^*}\int_{S} g(x, y)\, d\mathcal{H}^l(y)\, d\mathcal{H}^{k}(x) \] for any positive Borel function $g$ on ${\mathbb R}^n\times {\mathbb R}^m$. \end{lemma} Lemma \ref{lemma:fubini} is not true for general countably $k$-rectifiable sets $S$, see \cite[3.2.24]{GMT}. The second tool we need is the coarea formula, see e.g. \cite[12.7]{SimonGMT}. \begin{lemma}\label{lemma:coarea} Suppose $m\leqslant k$. Let $S$ be a countably $k$-rectifiable subset of ${\mathbb R}^n$ and let $u: S\rightarrow {\mathbb R}^m$ be locally Lipschitz. Then \begin{equation}\label{eq:rectcoarea} \int_{{\mathbb R}^m}\int_{u^{-1}(z)}g\, d\mathcal{H}^{k-m}\, d\mathcal{H}^m(z)=\int_SgJ^S_u\, d\mathcal{H}^k \end{equation} for every positive Borel function $g$ on $S$. \end{lemma} Let us define the jacobian $J^S_u$ appearing in \eqref{eq:rectcoarea}. Details can be found in \cite[\S 12]{SimonGMT}. Suppose first that $S$ is an embedded $C^1$ $k$-submanifold (without boundary) of ${\mathbb R}^n$. Then $u$ is differentiable at $\mathcal{H}^k$-almost every $x\in S$. Fix such an $x$, and let $\{E_1, \ldots, E_k\}$ be an orthonormal basis for the tangent space of $S$ at $x$. Let $Du(x)$ be the jacobian matrix of $u$ at $x$ with respect to standard bases of ${\mathbb R}^n$ and ${\mathbb R}^m$. We set \[ J^S_u(x):=\sqrt{\det(d^Su(x)d^Su(x)^t)}, \] where $d^Su(x)$ is the matrix with columns $Du(x)E_i$. It can be shown that $J^S_u(x)$ does not depend on the choice of the basis $\{E_i\}$. More generally, every countably $k$-rectifiable set $S$ can be expressed as a disjoint union $S=\bigcup_{i=0}^\infty M_i$, where $\mathcal{H}^k(M_0)=0$ and each $M_i$ for $i\geqslant 1$ is contained in an embedded $C^1$ $k$-submanifold $N_i$ of ${\mathbb R}^n$. Given an $x\in M_i$ with $i\geqslant 1$, we set \[ J^S_u(x):=J^{N_i}_u(x). \] Then $J^S_u$ is well defined $\mathcal{H}^k$-almost everywhere on $S$. It can be shown that $J^S_u$ does not depend on the decomposition $S=\bigcup_{i=0}^\infty M_i$, up to sets of $\mathcal{H}^k$-measure zero. \section{Proof of Theorem \ref{thm:main}}\label{sec:proof} Given any set $S\subset{\mathbb R}^n$ and a vector $y\in{\mathbb R}^n$ we denote \[ S_y=\{x+y\ |\ x\in S\} \] and \[ N_\varepsilon(S)=\{x\ |\ d(x, S)<\varepsilon\}. \] Denote by $\Gamma_A^*$ the collection of ($n-k$)-rectifiable subsets $S^*$ of $Q-A$, such that the homomorphism \[ i_*: H_k^L(Q-S^*, A)\rightarrow H_k^L(Q, A) \] induced by inclusion is trivial. Lemma \ref{lemma:leikkauslemma} below implies that $\Gamma_B\subset\Gamma_A^*$. Every set $S^*\in\Gamma_A^*$ intersects with every $S\in\Gamma_A$ in a nonempty set. To see this, note that if $|\sigma|\cap S^*$ is empty for some Lipschitz cycle $\sigma\in C_k(Q)$ relative to $A$, then $[\sigma]=i_*[\sigma]=0$ in $H_k^L(Q, A)$ by the definition of $\Gamma_A^*$. We abbreviate \[ \textnormal{mod}_q\Gamma_A^*:=\textnormal{mod}_q\{\mathcal{H}^{n-k}\mres S^*\ |\ S^*\in\Gamma_A^*\}. \] Theorem \ref{thm:main} is then implied by the following more general result. \begin{theorem}\label{thm:main2} For every $1<p<\infty$ \[ (\textnormal{mod}_p\Gamma_A)^{1/p}(\textnormal{mod}_q\Gamma_A^*)^{1/q}\leqslant 1, \] where $q=\frac{p}{p-1}$. 
\end{theorem} The rest of this section is focused on the proof of Theorem \ref{thm:main2}. For each $\delta>0$ let $\Gamma_A^\delta$ be the subcollection of $\Gamma_A$ consisting of those sets whose distance to $B$ is at least $100\delta$. The subcollections $\Gamma_A^{*\delta}$ are defined analogously. In light of $iii)$ of Lemma \ref{lemma:modulilemma}, it suffices to show that \begin{equation}\label{eq:roe1} (\textnormal{mod}_p\Gamma^\delta_A)^{1/p}(\textnormal{mod}_{q}\Gamma^{*\delta}_A)^{1/q}\leqslant 1 \end{equation} for all $\delta$. Fix a $\delta$ for the rest of the proof. We may assume without loss of generality that the moduli in question are nonzero and the collections $\Gamma_A^\delta$ and $\Gamma_A^{*\delta}$ are nonempty. The following intersection property of the elements of $\Gamma_A$ and $\Gamma_A^*$ forms the topological core of Theorem \ref{thm:main2}. \begin{proposition}\label{prop:leikkaus} The intersection $S_z\cap S^*$ is nonempty for every $S\in\Gamma_A^\delta$, $S^*\in\Gamma_A^{*\delta}$ and $|z|< 10\delta$. \end{proposition} We postpone the proof to Subsection \ref{subsec:topology}. Let $S\in \Gamma^{\delta}_A$. Observe that the map \begin{equation}\label{eq:distribuutio} g\mapsto\int_{S} g\, d\mathcal{H}^{k} \end{equation} is a distribution in ${\mathbb R}^n$. Thus we have by \cite[4.1.2]{GMT} that \begin{equation}\label{eq:konvoluutio} \int_Q\phi^{S}_\varepsilon g\, d\mathcal{H}^n \overset{\varepsilon\rightarrow 0}{\longrightarrow}\int_{S} g\, d\mathcal{H}^k \end{equation} for every smooth compactly supported function $g$, where \[ \phi_\varepsilon^{S}(x):=\int_{S}\phi_\varepsilon(x-y)\, d\mathcal{H}^k(y) \] is the \emph{convolution} of the distribution \eqref{eq:distribuutio} with respect to a smooth kernel $\phi$. That is, $\phi_\varepsilon(x)=\varepsilon^{-n}\phi(\varepsilon^{-1}x)$ and $\phi$ is a positive smooth function on ${\mathbb R}^n$ that vanishes outside the unit ball $\mathbb{B}^n$ and satisfies $\int_{\mathbb{B}^n}\phi\, d\mathcal{H}^n=1$. Smoothness is convenient for avoiding tedious technicalities, but to see the geometry behind the arguments that follow, the reader is encouraged to repeat the proof with the nonsmooth kernel $\phi=|\mathbb{B}^n|^{-1}\chi_{\mathbb{B}^n}$. Theorem \ref{thm:main2} follows via \eqref{eq:roe1} from the following proposition. \begin{proposition}\label{lemma:taikalemma} The convolution $\phi^{S_z}_\varepsilon$ is admissible for $\Gamma_A^{*\delta}$ for all $\varepsilon<\delta$ and all $|z|<\delta$. \end{proposition} \begin{proof} Fix an $\varepsilon<\delta$ and a set $S^*\in\Gamma_A^{*\delta}$. Let $z=0$ for now. By Lemma \ref{lemma:fubini} \begin{align*} \int_{S^*}\phi_\varepsilon^S(x)\, d\mathcal{H}^{n-k}(x)&=\int_{S^*}\int_S\phi_\varepsilon(x-y)\, d\mathcal{H}^k(y)d\mathcal{H}^{n-k}(x)\\ &=\int_{S^*}\int_{S\cap N_\varepsilon(S^*)}\phi_\varepsilon(x-y)\, d\mathcal{H}^k(y)d\mathcal{H}^{n-k}(x)\\ &=\int_{(S^*\times S)\cap \{|x-y|<\varepsilon\}}\phi_\varepsilon(x-y)\, d\mathcal{H}^n(x, y). \end{align*} Now we can apply the coarea formula (Lemma \ref{lemma:coarea}) on the map $u(x, y)=x-y$ to obtain \begin{equation}\label{eq:apu1} \int_{S^*}\phi_\varepsilon^S(x)\, d\mathcal{H}^{n-k}(x)\geqslant \int_{\varepsilon\mathbb{B}^n}\int_{(S^*\times S)\cap\{x-y=w\}}\phi_\varepsilon(x-y)\, d\mathcal{H}^0d\mathcal{H}^n(w) \end{equation} since $J_u^{S^*\times S}\leqslant 1$. 
To see this, note for any $(n-k)$- and $k$-dimensional embedded $C^1$ submanifolds $N^*$ and $N$ of ${\mathbb R}^n$ the matrix $d^{N^*\times N}u$ consists of unit column vectors. Thus $J^{N^*\times N}_u\leqslant 1$. It follows that $J_u^{S^*\times S}\leqslant 1$ as well, since it can be computed via $J_u^{M_i^*\times M_j}$ with $i, j\geqslant 1$, where $S^*=\bigcup_{i=0}^\infty M_i^*$ and $S=\bigcup_{i=0}^\infty M_j$ are decompositions of $S^*$ and $S$ as in the discussion following Lemma \ref{lemma:coarea}. Note that the sets $M_0^*\times S$ and $S^*\times M_0$ have zero $\mathcal{H}^n$-measure by Lemma \ref{lemma:fubini}. Finally, we apply Proposition \ref{prop:leikkaus} on \eqref{eq:apu1} and obtain \[ \int_{S^*}\phi_\varepsilon^S(x)\, d\mathcal{H}^{n-k}(x)\geqslant \int_{\varepsilon\mathbb{B}^n}\phi_\varepsilon(w)\, d\mathcal{H}^n(w)=1. \] The proof in the case of general $z$ reduces to the case $z=0$ via \begin{equation}\label{eq:zsiirto} \phi^{S_{z}}_\varepsilon(x)=\phi^S_\varepsilon(x-z), \end{equation} since Proposition \ref{prop:leikkaus} can still be applied. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main2}] The $q$-modulus of $\Gamma_A^{*\delta}$ is finite by Proposition \ref{lemma:taikalemma}. Let $\rho$ be the unique weak minimizer of $\textnormal{mod}_{q}\Gamma_A^{*\delta}$ given by $ii)$ of Lemma \ref{lemma:modulilemma}. We may assume that $\rho$ vanishes in $N_{10\delta}(A)$ and is defined as zero outside $Q$. Let $g_r$ be the smooth convolution \[ g_r(x):=\int_{r\mathbb{B}^n}\rho^{q-1}(x+y)\phi_r(y)\, d\mathcal{H}^n(y). \] Let $S\in\Gamma_A^\delta$ and let $\varepsilon<\delta$. Proposition \ref{lemma:taikalemma} and $ii)$ of Lemma \ref{lemma:modulilemma} imply \[ \textnormal{mod}_{q}\Gamma_A^{*\delta}\leqslant \int_Q\phi^{S_z}_\varepsilon\rho^{q-1}\, d\mathcal{H}^n \] for all $|z|<\delta$ and $S\in\Gamma_A^\delta$. Note that the product $\phi_\varepsilon^{S_z}\rho^{q-1}$ vanishes in $N_{10\delta}(\partial Q)$, so by \eqref{eq:zsiirto} and a change of variables \[ \textnormal{mod}_q\Gamma_A^{*\delta}\leqslant \int_Q\phi_\varepsilon^S(x)\rho^{q-1}(x+z)\, d\mathcal{H}^n(x) \] for all $|z|<\delta$. Multiplying both sides by $\phi_r(z)$ and integrating over $z$ yields \[ \textnormal{mod}_q\Gamma_A^{*\delta}\leqslant \int_Q\phi_\varepsilon^Sg_r\, d\mathcal{H}^n \] by Fubini's theorem. Letting $\varepsilon\rightarrow 0$ and then $r\rightarrow 0$ yields \begin{align*} \textnormal{mod}_{q}\Gamma_A^{*\delta}\leqslant \int_S\rho^{q-1}\, d\mathcal{H}^k \end{align*} for $\textnormal{mod}_p$-almost every $S\in\Gamma_A^\delta$ by \eqref{eq:konvoluutio} and $i)$ of Lemma \ref{lemma:modulilemma}. Thus \[ \frac{1}{\textnormal{mod}_{q}\Gamma_A^{*\delta}}\rho^{q-1} \] is weakly admissible for $\Gamma_A^\delta$, so \[ \textnormal{mod}_p\Gamma_A^\delta\leqslant (\textnormal{mod}_{q}\Gamma_A^{*\delta})^{1-p}, \] which is a rearrangement of \eqref{eq:roe1}. \end{proof} \subsection{Topological lemmas}\label{subsec:topology} In this subsection we complete the proof of Theorem \ref{thm:main} by proving Proposition \ref{prop:leikkaus} and showing that $\Gamma_B\subset\Gamma_A^*$. These are implied by the following two lemmas. \begin{lemma}\label{lemma:siirtolemma} Suppose $S\in\Gamma_A^\delta$ and $|y|<10\delta$. Then there exists a singular relative cycle $\sigma_y$, such that it generates $H_k(Q, A)$ and its image coincides with $S_y$ outside $N_{100\delta}(A)$. 
\end{lemma} \begin{lemma}\label{lemma:leikkauslemma} Suppose $\sigma_A$ and $\sigma_B$ are relative singular chains that generate nontrivial elements of $H_k(Q, A)$ and $H_{n-k}(Q, B)$, respectively. Then $|\sigma_A|\cap|\sigma_B|$ is nonempty. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:siirtolemma}] The lemma follows from the homotopy formula \eqref{eq:homotopyformula}. By the definition of $\Gamma_A$ there is a relative cycle $\sigma$ that generates $H_k(Q, A)$ and has $S$ as its image. By applying barycentric subdivision multiple times, if necessary, we may assume that $\sigma$ splits into $\sigma=\sigma_1+\sigma_2$, where $|\sigma_1|\subset N_{30\delta}(A)$ and $|\sigma_2|\subset Q-N_{20\delta}(\partial Q)$. Let $H_t$ be the homotopy $H_t(x)=x+ty$. Then by \eqref{eq:homotopyformula} there exist homomorphisms $P: C_l(U)\rightarrow C_{l+1}(U_y)$ for all $l$ and all open sets $U\subset{\mathbb R}^n$, such that \begin{equation}\label{eq:homotopia} H_{1*}-H_{0*}=\partial P+P\partial. \end{equation} Note that $P(\partial\sigma_2)$ and $H_{1*}\sigma_2$ are chains in $Q-N_{10\delta}(\partial Q)$. We let $\sigma_y=\sigma_1-P(\partial \sigma_2)+H_{1*}\sigma_2$. Then $\sigma_y-\sigma=\partial P\sigma_2$ by \eqref{eq:homotopia}, so $\sigma_y$ belongs to the same relative homology class as $\sigma$. To prove the final part of the lemma, note that $|\partial\sigma_2|\subset N_{30\delta}(A)$, since $|\partial\sigma_2|=|\partial\sigma_1|\cap \mathrm{int}(Q)$. Thus $|P(\partial\sigma_2)|\subset N_{40\delta}(A)$ and $|\sigma_y|$, $|H_{1*}\sigma_2|=|\sigma_2|_y$ and $S_y$ all coincide outside $N_{100\delta}(A)$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:leikkauslemma}] The lemma follows from the theory of intersection numbers developed in \cite{DoldAT}. We may assume that $Q=J^n$, where $J=[-1, 1]$, and respectively $A=\partial J^k\times J^{n-k}$ and $B=J^k\times \partial J^{n-k}$. Let $\sigma_A$ and $\sigma_B$ be representatives of some nontrivial classes of $H_k(Q, A)$ and $H_{n-k}(Q, B)$, respectively. Suppose $|\sigma_A|\cap|\sigma_B|=\emptyset$. Then we can deform $\sigma_A$ and $\sigma_B$ slightly, if necessary, and assume that $|\sigma_A|\cap B=\emptyset=|\sigma_B|\cap A$. This allows us to define the \emph{intersection number} $[\sigma_A]\circ [\sigma_B]\in H_n({\mathbb R}^n, {\mathbb R}^n-\{0\})\simeq \mathbb{Z}$ of the classes $[\sigma_A]$ and $[\sigma_B]$, as in \cite[VII.4]{DoldAT}. The intersection number of the two classes is defined (up to sign) by pushing the outer product \[ [\sigma_A]\times[\sigma_B]\in H_n(Q\times Q, A\times Q\cup Q\times B) \] forward with the map $u(x, y)=x-y$. Notice the analogy with the proof of Proposition \ref{lemma:taikalemma}. We do not describe the definition of the outer product here, as it is rather complicated and would take us too far away from the main topic. Let us compute the intersection number by using two different pairs of representatives for $[\sigma_A]$ and $[\sigma_B]$. On one hand, since the images of the representatives $\sigma_A$ and $\sigma_B$ do not intersect, Propositions 4.5 and 4.6 of \cite[VII]{DoldAT} imply that $[\sigma_A]\circ[\sigma_B]=0$. On the other hand, $[\sigma_A]$ and $[\sigma_B]$ admit representatives that are integer multiples of triangulations of the subspaces $J^k\times \{0\}$ and $\{0\}\times J^{n-k}$, so combining Proposition 4.5 and Example 4.10 of \cite[VII]{DoldAT} shows that $[\sigma_A]\circ[\sigma_B]$ is nontrivial. 
\end{proof} \section{Lower bound and related open problems}\label{section:remarks} Theorems \ref{thm:main} and \ref{thm:main2} raise the question: \begin{question}\label{q:alaraja} Do the lower bounds \begin{equation}\label{eq:alaraja} 1\leqslant (\textnormal{mod}_p\Gamma_A)^{1/p}(\textnormal{mod}_{q}\Gamma_B)^{1/q} \end{equation} or \begin{equation}\label{eq:alaraja2} 1\leqslant (\textnormal{mod}_p\Gamma_A)^{1/p}(\textnormal{mod}_{q}\Gamma_A^*)^{1/q} \end{equation} hold whenever $Q, A$ and $B$ are as in Theorem \ref{thm:main}? \end{question} Since $\Gamma_B\subset\Gamma_A^*$, \eqref{eq:alaraja} implies \eqref{eq:alaraja2}. All existing proofs, save the one in \cite{FreedmanHe1991}, of such lower bounds rely on some variation of the coarea formula, Lemma \ref{lemma:coarea}. In \cite{FreedmanHe1991} a lower bound is proved for de Rham cohomology classes. Hence it may be possible to answer Question \ref{q:alaraja} by finding a connection between the moduli of $\Gamma_A$ and $\Gamma_B$, which can be seen as moduli of homology classes, and the moduli of suitable cohomology classes. This is of course easier said than done. For instance, it is not very clear what ``suitable cohomology'' should mean, when $Q$ is nonsmooth. It seems these kinds of questions are still largely unexplored. Let us sketch a proof of \eqref{eq:alaraja2} in the special case $k=1$. Then $A$ consists of two opposite faces $A_0$ and $A_1$ of $Q$ and, recalling the notation from the introduction, \[ \textnormal{mod}_p\Gamma_A=\textnormal{mod}_p\Gamma(A_0, A_1; Q). \] Moreover, by \cite{Shlyk1993} \begin{equation}\label{eq:capacity} \textnormal{mod}_p\Gamma(A_0, A_1; Q)=\textnormal{cap}_p\Gamma(A_0, A_1; Q), \end{equation} where the (Lipschitz) \emph{capacity} is defined by \[ \textnormal{cap}_p\Gamma(A_0, A_1; Q):=\inf_u\int_Q|\nabla u|^p\, d\mathcal{H}^n, \] and the infimum is taken over Lipschitz functions $u: Q\rightarrow I$ with $u|_{A_0}=0$ and $u|_{A_1}=1$. Then by the coarea formula \[ 1\leqslant \int_I\int_{u^{-1}(t)}\rho\, d\mathcal{H}^{n-1}dt=\int_Q\rho|\nabla u|\, d\mathcal{H}^n \] for any integrable $\rho$ admissible for $\Gamma_A^*$, since by \cite[3.2.15]{GMT} almost every level set $u^{-1}(t)$ is an element of $\Gamma_A^*$. Now the lower bound \eqref{eq:alaraja2} follows from H\"older's inequality and \eqref{eq:capacity}. Similar ideas can be used to prove that Theorems \ref{thm:main} and \ref{thm:main2} are sharp for any $n$ and $k$. Let us show that \eqref{eq:alaraja} holds whenever $Q=Q_1\times Q_2$, where $Q_1\subset{\mathbb R}^k$ and $Q_2\subset{\mathbb R}^{n-k}$ are $k$- and ($n-k$)-dimensional topological cubes as in Theorem \ref{thm:main}, $A=\partial Q_1\times Q_2$ and $B=Q_1\times \partial Q_2$. Then it suffices to show that \[ \textnormal{mod}_p\Gamma_A=\frac{\mathcal{H}^{n-k}(Q_2)}{\mathcal{H}^{k}(Q_1)^{p-1}}\,\text{ and }\,\textnormal{mod}_q\Gamma_B=\frac{\mathcal{H}^{k}(Q_1)}{\mathcal{H}^{n-k}(Q_2)^{q-1}}. \] The proofs of the two formulas are identical, so we only consider $\Gamma_A$. For every $y\in Q_2$ and $\rho$ admissible for $\Gamma_A$ \[ 1\leqslant \int_{Q_1\times\{y\}}\rho\, d\mathcal{H}^{k}, \] so by H\"older's inequality \[ 1\leqslant \left(\int_{Q_1\times\{y\}}\rho^p\, d\mathcal{H}^{k}\right)^{1/p}\mathcal{H}^{k}(Q_1)^{1/q}, \] from which we obtain the inequality ``$\geqslant$'' by integrating over $y$ and applying Fubini's theorem (or the coarea formula applied on the projection $\pi_2(x, y)=y$). 
The reverse inequality follows from the observation that $\mathcal{H}^{k}(Q_1)^{-1}\chi_Q$ is admissible for $\Gamma_A$. It is also noteworthy that in this case $\textnormal{mod}_q\Gamma_B=\textnormal{mod}_q\Gamma_A^*$, and both are equal to the $q$-modulus of the slices $\{x\}\times Q_2$. Observe that if we let $\lambda=\mathcal{H}^k(Q_1)^{-1/k}$ and use a scaled projection map $\lambda\pi_1(x, y)=\lambda x$ instead, we find that $\mathcal{H}^k(\lambda \pi_1(Q_1\times Q_2))=1$ and $J_{\lambda\pi_1}=\mathcal{H}^k(Q_1)^{-1}\chi_Q$. That is, the minimizer of $\textnormal{mod}_p\Gamma_A$ is the jacobian of $\lambda\pi_1$. Moreover, the level sets of $\lambda\pi_1$ are elements of $\Gamma_B$. Inspired by this example we extend the definition of the capacity to general $Q$ and $A$ by \[ \textnormal{cap}_p\Gamma_A:=\inf_u\int_QJ_u^p\, d\mathcal{H}^n, \] where the infimum is taken over all Lipschitz maps $u: (Q, A)\rightarrow (\bar U, \partial U)$ such that $U$ is a domain in ${\mathbb R}^k$ normalized with $\mathcal{H}^k(U)=1$, $(\bar U, \partial U)$ is homeomorphic to $(\bar \mathbb{B}^k, \partial\mathbb{B}^k)$, and the induced homomorphism \begin{equation}\label{eq:isomorfismi} u_*: H_k(Q, A)\rightarrow H_k(\bar U, \partial U)\simeq\mathbb{Z} \end{equation} is an isomorphism. We observe that $U\subset u(S)$ for any $S\in\Gamma_A$, so almost every level set of $u$ is in $\Gamma_A^*$, since $H_k(\bar U-\{x\}, \partial U)$ is trivial for all $x\in U$. Moreover, the Cauchy-Binet formula implies that $J_u\geqslant J_u^S$, so \[ \int_SJ_u\, d\mathcal{H}^k\geqslant \int_S J_u^S\, d\mathcal{H}^k\geqslant \int_U\, d\mathcal{H}^k=1 \] by Lemma \ref{lemma:coarea}. Thus $J_u$ is admissible for $\Gamma_A$ and \[ \textnormal{mod}_p\Gamma_A\leqslant \textnormal{cap}_p\Gamma_A. \] It is unknown whether the reverse inequality is true, but it would imply \eqref{eq:alaraja2}. To prove the reverse inequality one would have to be able to construct the required Lipschitz maps $u$. This seems to be very difficult when $k>1$, especially with a given $J_u$. If $k=1$, the situation is considerably simpler, since then $J_u=|\nabla u|$ and the unit interval $I$ is practically the only choice of $U$. \noindent Department of Mathematics and Statistics, University of Jyv\"askyl\"a, P.O. Box 35 (MaD), FI-40014, University of Jyv\"askyl\"a, Finland.\\ \emph{E-mail:} \settowidth{\hangindent}{\emph{aaaaaaaaa}} \textbf{[email protected]} \end{document}
Nil-Coxeter algebra

In mathematics, the nil-Coxeter algebra, introduced by Fomin & Stanley (1994), is an algebra similar to the group algebra of a Coxeter group except that the generators are nilpotent.

Definition

The nil-Coxeter algebra for the infinite symmetric group is the algebra generated by u1, u2, u3, ... with the relations

${\begin{aligned}u_{i}^{2}&=0,\\u_{i}u_{j}&=u_{j}u_{i}&&{\text{ if }}|i-j|>1,\\u_{i}u_{j}u_{i}&=u_{j}u_{i}u_{j}&&{\text{ if }}|i-j|=1.\end{aligned}}$

These are just the relations for the infinite braid group, together with the relations u_i^2 = 0. Similarly one can define a nil-Coxeter algebra for any Coxeter system, by adding the relations u_i^2 = 0 to the relations of the corresponding generalized braid group.

References

• Fomin, Sergey; Stanley, Richard P. (1994), "Schubert polynomials and the nil-Coxeter algebra", Advances in Mathematics, 103 (2): 196–207, doi:10.1006/aima.1994.1009, ISSN 0001-8708, MR 1265793
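A concrete way to see these relations (an illustrative sketch, not part of the article): the operators that apply one adjacent transposition to a permutation when this increases its number of inversions, and give zero otherwise, satisfy exactly the nil-Coxeter relations. The following Python fragment checks this for the symmetric group on 4 letters, with the zero element represented by None.

from itertools import permutations

def U(i, w):
    # u_i acting on a permutation w in one-line notation: swap positions
    # i, i+1 if that creates an inversion there, otherwise return 0 (None).
    if w is None or w[i - 1] > w[i]:
        return None
    v = list(w)
    v[i - 1], v[i] = v[i], v[i - 1]
    return tuple(v)

def apply_word(word, w):
    # apply the generators in 'word' one after another
    for i in word:
        w = U(i, w)
    return w

n = 4
ok = True
for w in permutations(range(1, n + 1)):
    for i in range(1, n):
        ok &= apply_word([i, i], w) is None                        # u_i^2 = 0
        for j in range(1, n):
            if abs(i - j) > 1:
                ok &= apply_word([i, j], w) == apply_word([j, i], w)
            elif abs(i - j) == 1:
                ok &= apply_word([i, j, i], w) == apply_word([j, i, j], w)
print(ok)  # True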
\begin{document} \title{Exact optimal values of step-size coefficients for boundedness of linear multistep methods} \author{Lajos L\'oczi\thanks{{\texttt{[email protected]}}\newline \indent \ This work was supported by the King Abdullah University of Science and Technology (KAUST), 4700 Thuwal, 23955-6900, Saudi Arabia. The author was also supported by the Department of Numerical Analysis, E\"otv\"os Lor\'and University (ELTE), and the Department of Differential Equations, Budapest University of Technology and Economics (BME), Hungary.}} \date{\today} \maketitle \begin{abstract} Linear multistep methods (LMMs) applied to approximate the solution of initial value problems---typically arising from method-of-lines semidiscretizations of partial differential equations---are often required to have certain monotonicity or boundedness properties (e.g.~strong-stability-preserving, total-variation-diminishing or total-variation-boundedness properties). These properties can be guaranteed by imposing step-size restrictions on the methods. To qualitatively describe the step-size restrictions, one introduces the concept of step-size coefficient for monotonicity (SCM, also referred to as the strong-stability-preserving (SSP) coefficient) or its generalization, the step-size coefficient for boundedness (SCB). A LMM with larger SCM or SCB is more efficient, and the computation of the maximum SCM for a particular LMM is now straightforward. However, it is more challenging to decide whether a positive SCB exists, or determine if a given positive number is a SCB. Theorems involving sign conditions on certain linear recursions associated to the LMM have been proposed in the literature that allow us to answer the above questions: the difficulty with these theorems is that there are in general infinitely many sign conditions to be verified. In this work we present methods to rigorously check the sign conditions. As an illustration, we confirm some recent numerical investigations concerning the existence of positive SCBs in the BDF and in the extrapolated BDF (EBDF) families. As a stronger result, we determine the optimal values of the SCBs as exact algebraic numbers in the BDF family (with $1\le k\le 6$ steps) and in the Adams--Bashforth family (with $1\le k\le 3$ steps). \noindent \textbf{Keywords:} linear multistep methods, strong stability preservation, step-size coefficient for monotonicity, step-size coefficient for boundedness. \end{abstract} \section{Introduction} Let us consider an initial-value problem \begin{equation}\label{ODE} u'(t)=F(u(t))\text{ for } t\ge 0,\text{ with } u(0)=u_0, \end{equation} where $F:\mathbb{V}\to\mathbb{V}$ is a given function, $u_0\in \mathbb{V}$ is a given initial value in some vector space $\mathbb{V}$, and $u$ denotes the unknown function. In applications it is often crucial for the numerical solution $u_n$ to satisfy certain monotonicity or boundedness properties. \begin{exa} Many important partial differential equations have the property that they preserve \begin{itemize} \item[(\textit{i})] the interval containing the initial data; \item[(\textit{ii})] or, as a special case, non-negativity of the initial data. \end{itemize} For example, if one considers a scalar hyperbolic conservation law with initial condition $U(x,t_0)\in [U_{\min},U_{\max}]$ with some constants $U_{\min}\le U_{\max}$ for $x\in\mathbb{R}$, then it is known that the solution satisfies $U(x,t)\in [U_{\min},U_{\max}]$ for $x\in\mathbb{R}$ and $t\ge t_0$. 
To approximate the solution $U$ of this partial differential equation, one often uses a method-of-lines semidiscretization in space, and obtains a system of ordinary differential equations \eqref{ODE}. For many semidiscretizations, the initial-value problem \eqref{ODE} also preserves (\textit{i}) or (\textit{ii}). Finally, one typically uses a Runge--Kutta method or a linear multistep method to discretize \eqref{ODE}: in this setting it is natural to require that the time discretization $u_n$ should also preserve (\textit{i}) or (\textit{ii}). \end{exa} In situations when the numerical method is a linear multistep method (LMM) approximating the solution of \eqref{ODE}, the boundedness property can be expressed as \begin{equation}\label{boundednessproperty} \|u_n\|\le \mu\cdot \max_{0\le j\le k-1}\|u_j\|\quad\quad (n\ge k), \end{equation} where the constant $\mu\ge 1$ is independent of $n$, the starting vectors $u_j$ ($0\le j\le k-1$) and the problem \eqref{ODE}; $\mu$ is determined only by the LMM. The monotonicity property, or strong-stability-preserving (SSP) property, is recovered if \eqref{boundednessproperty} holds with $\mu=1$. Common choices for the seminorm $\|\cdot\|$ on $\mathbb{V}$ in applications include the supremum norm or the total variation seminorm. For LMMs, a more detailed exposition of the above topics together with references can be found, for example, in \cite[Section 1]{spijker2013}. For Runge--Kutta methods, analogous questions have been analyzed thoroughly and solved satisfactorily in \cite{hundsdorferspijker2011}. In what follows, we focus on LMMs. In the literature a considerable amount of work has been done on developing conditions that guarantee \eqref{boundednessproperty}. One possibility is to impose some restrictions on the step size $\Delta t$ of the LMM. These restrictions lead to the concepts of step-size coefficient for monotonicity (SCM) and step-size coefficient for boundedness (SCB)---see Definitions \ref{scbdef} and \ref{scmdef} below. Depending on the context, the SCM is also referred to as the strong-stability-preserving (SSP) coefficient. The SCB is a generalization of the SCM: for many practically important LMMs, there is no positive SCM, while a positive SCB still exists. It is thus natural to ask whether a positive step-size coefficient (SCM or SCB) exists for a particular LMM, or determine if a given positive number is a step-size coefficient. Since a LMM with larger step-size coefficient is more efficient, one is also interested in the maximum value of the SCM or SCB. Conditions that are easy to check and are necessary and sufficient for the existence of a positive SCM, or for a given positive number to be a SCM have already been devised, see \cite[Section 1.1]{spijker2013}. However, even for a single LMM, it seems more difficult \begin{itemize} \item[(\textit{i})] to decide whether a positive \text{SCB}\ exists; \item[(\textit{ii})] to determine if a given positive number is a \text{SCB}; \item[(\textit{iii})] to compute the maximum \text{SCB}. \end{itemize} In the rest of the paper, we pursue these goals. The theoretical framework we use is presented in \cite{hundsdorferspijkermozartova2012,spijker2013}, while the computational techniques we apply show many similarities with those of \cite{locziketcheson2014}. All computations in this work have been performed by using \textit{Mathematica} 10. The structure of our paper is as follows. In Section \ref{preliminariessection} we present some definitions and notation. 
In Sections \ref{section12} and \ref{section13} we review the main results of \cite{hundsdorferspijkermozartova2012} and \cite{spijker2013} concerning (\textit{ii}) and (\textit{i}) above, respectively. Section \ref{section2} contains our theorems for three families of multistep methods: \begin{itemize} \item for the extrapolated BDF (EBDF) methods we answer (\textit{i}); \item for the BDF methods (as implicit methods) we answer (\textit{iii}); \item for the Adams--Bashforth (AB) methods (as explicit methods) we answer (\textit{iii}). \end{itemize} The proofs are described in Section \ref{section3}. \begin{rem} In our proofs we essentially need to establish the non-negativity of certain (parametric) linear recursions. Recently, some general results have been devised solving the problem of (ultimate) positivity in several classes of integer linear recursions, see, for example, the series of papers \cite{survey,soda,book1,book2}. \end{rem} \subsection{Preliminaries and notation}\label{preliminariessection} A LMM has the form \begin{equation}\label{LMMform} u_n=\sum_{j=1}^k a_j u_{n-j}+\Delta t\sum_{j=0}^k b_j F(u_{n-j}) \quad\quad (n\ge k), \end{equation} where $k\ge 1$, the step number of the LMM, is a fixed integer, and the coefficients $a_j, b_j\in\mathbb{R}$ determine the method. The step size of the method $\Delta t>0$ is assumed to be fixed, and we suppose that the starting values for the LMM, $u_0$ (appearing in \eqref{ODE}) and $u_j$ ($1\le j\le k-1$), are also given. The quantity $u_n$ approximates the exact solution value $u(n\Delta t)$. The generating polynomials associated with the LMM are denoted by \begin{equation}\label{rhosigmadef} \rho(\zeta):=\zeta^k-\sum_{j=1}^k a_j \zeta^{k-j}\quad\text{and}\quad \sigma(\zeta):=\sum_{j=0}^k b_j \zeta^{k-j}. \end{equation} A non-constant univariate polynomial is said to satisfy the \textit{root condition}, if all of its roots have absolute value $\le 1$, and any root with absolute value $=1$ has multiplicity one. As in \cite{spijker2013}, the LMMs in this work are also required to satisfy the following basic assumptions. \begin{equation}\label{LMMbasicassumptions} \begin{aligned} 1.\quad & \sum_{j=1}^k a_j=1\text{ and } \sum_{j=1}^k j a_j=\sum_{j=0}^k b_j & \text{ (consistency).}\\ 2.\quad & \text{The polynomial } \rho \text{ satisfies the root condition} & \text{(zero-stability).}\\ 3.\quad & \text{The polynomials } \rho \text{ and } \sigma \text{ have no common root} & \text{(irreducibility).}\\ 4.\quad & b_0\ge 0. & \end{aligned} \end{equation} All well-known methods used in practice satisfy the four conditions in \eqref{LMMbasicassumptions}. The \textit{stability region} of the LMM, denoted by ${\mathcal{S}}$, is defined as \[ {\mathcal{S}}:=\{ \lambda\in\mathbb{C} : 1-\lambda b_0\ne 0\text{ and } \rho-\lambda\sigma \text{ satisfies the root condition}\}, \] see \cite[Section 2.1]{spijker2013}. The interior of the stability region will be denoted by $\mathrm{int}({\mathcal{S}})$. \begin{rem} Notice that the above definition of the stability region ${\mathcal{S}}$ is slightly more restrictive than the usual one. The usual definition of the stability region (see, for example, in \cite{hairerwanner}), \[{\widetilde{\mathcal{S}}}:=\{ \lambda\in\mathbb{C} : \rho-\lambda\sigma \text{ satisfies the root condition}\}, \] does \emph{not} exclude the case of a vanishing leading coefficient of the polynomial ${\cal{P}}(\cdot,\lambda):=\rho(\cdot)-\lambda \sigma(\cdot)$. 
With this definition ${\widetilde{\mathcal{S}}}$, one can construct simple examples with the following properties: \begin{itemize} \item the order of the recurrence relation generated by the LMM becomes $<k$ for certain values of the step size $\Delta t>0$, hence $k$ starting values of the LMM cannot be chosen arbitrarily; \item there is an isolated point of the boundary of ${\widetilde{\mathcal{S}}}$ (being an element of ${\widetilde{\mathcal{S}}}$); \item the boundary of ${\widetilde{\mathcal{S}}}$ is not a subset of the root locus curve due to these isolated boundary points. \end{itemize} Similarly, in the class of multiderivative multistep methods (being a generalization of LMMs), it seems advantageous to exclude the values of $\lambda\in\mathbb{C}$ from the definition of the stability region for which the leading coefficient of the corresponding polynomial ${\cal{P}}(\cdot,\lambda)$ vanishes. \end{rem} The set of natural numbers $\{0,1,\ldots\}$ is denoted by $\mathbb{N}$, while the complex conjugate of $z$ is $\bar{z}$. The \textit{dominant root} of a non-constant univariate polynomial is any root having the largest absolute value. When we define algebraic numbers in later sections, a polynomial \[\sum_{j=0}^n a_j x^j \text{ with } a_j\in\mathbb{Z}, a_n\ne 0 \text{ and } n\ge 3\] will be represented simply by its coefficient list \begin{equation}\label{coeffdef} \{a_n, a_{n-1},\ldots, a_0\}. \end{equation} Now we recall the definition of the \textit{step-size coefficient for boundedness} and \textit{monotonicity}, respectively, corresponding to a given linear multistep method. \begin{dfn}\label{scbdef} Suppose that the method coefficients $a_j\in\mathbb{R}$ ($1\le j\le k$) and $b_j\in\mathbb{R}$ ($0\le j \le k$) satisfy \eqref{LMMbasicassumptions}. We say that $\gamma>0$ is a step-size coefficient for boundedness (\text{SCB}) of the corresponding LMM, if $\exists\ \mu \ge 1$ such that \begin{itemize} \item for any vector space with seminorm $(\mathbb{V},\|\cdot\|)$, \item for any function $F:\mathbb{V}\to\mathbb{V}$ satisfying \[\exists\tau >0\ \ \forall v\in \mathbb{V} \ :\ \|v+\tau F(v)\|\le \|v\|,\] \item for any $\Delta t\in (0,\gamma\,\tau]$, \item and for any starting vectors $u_j\in \mathbb{V}$ ($0\le j\le k-1$), \end{itemize} the sequence $u_n$ generated by \eqref{LMMform} has the property $\|u_n\|\le \mu\cdot \max_{0\le j\le k-1}\|u_j\|$ for all $n\ge k$. \end{dfn} \begin{dfn}\label{scmdef} We say that $\gamma>0$ is a step-size coefficient for monotonicity (\text{SCM}) of the LMM, if Definition \ref{scbdef} holds with $\mu=1$. \end{dfn} \noindent Given a LMM, the following abbreviations will be used throughout this work: \begin{itemize} \item $\exists \mathrm{\ SCM}>0$ and $\nexists \mathrm{\ SCM}>0$ to indicate that there is a positive / there is no positive step-size coefficient for monotonicity, respectively; \item $\exists \mathrm{\ SCB}>0$ and $\nexists \mathrm{\ SCB}>0$ to indicate that there is a positive / there is no positive step-size coefficient for boundedness, respectively. \end{itemize} \noindent It is clear from Definitions \ref{scbdef}-\ref{scmdef} that for a given LMM \[ \exists \mathrm{\ SCM}>0 \implies \exists \mathrm{\ SCB}>0. \] If $\exists \mathrm{\ SCB}>0$, then we define \[ \gamma_\mathrm{sup}:=\sup\{\gamma>0 : \gamma \text{ is a } \text{SCB} \}. \] When a family of $k$-step LMMs is given, sometimes we will use the symbol $\gamma_{\mathrm{sup},k}$ instead. 
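Before turning to the sign conditions, we note that the basic assumptions \eqref{LMMbasicassumptions} are easy to test numerically for any given method. The short Python sketch below is an illustration only (the verified computations in this paper were carried out symbolically in \textit{Mathematica}); it assumes the $2$-step BDF method written in the normalized form \eqref{LMMform}, with coefficients $a_1=4/3$, $a_2=-1/3$, $b_0=2/3$ and $b_1=b_2=0$, and checks consistency and zero-stability.
\begin{verbatim}
# Illustrative check of the basic assumptions for the 2-step BDF method,
# assuming the normalized coefficients a_1 = 4/3, a_2 = -1/3, b_0 = 2/3.
import numpy as np

a = [4/3, -1/3]          # a_1, ..., a_k
b = [2/3, 0.0, 0.0]      # b_0, ..., b_k
k = len(a)

# Consistency: sum_j a_j = 1  and  sum_j j*a_j = sum_j b_j.
assert abs(sum(a) - 1) < 1e-12
assert abs(sum((j + 1) * aj for j, aj in enumerate(a)) - sum(b)) < 1e-12

# Zero-stability: rho(z) = z^k - sum_j a_j z^(k-j) obeys the root condition.
rho_coeffs = np.r_[1.0, -np.asarray(a)]
roots = np.roots(rho_coeffs)
mods = np.abs(roots)
assert np.all(mods <= 1 + 1e-12)          # no root outside the unit disk
assert np.sum(mods > 1 - 1e-9) == 1       # only the simple root at z = 1
print("roots of rho for BDF2:", np.round(roots, 6))   # expected: 1 and 1/3
\end{verbatim}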
\subsection{A necessary and sufficient condition for $\gamma>0$ to be a $\text{SCB}$}\label{section12} Let us fix a particular LMM. For a given $\gamma\in\mathbb{R}$, we define an auxiliary sequence $\mu_n(\gamma)$ ($n\in\mathbb{Z}$) as in \cite[(2.10)]{spijker2013} by \begin{equation}\label{mundef} \mu_n(\gamma) :=\left\{ \begin{aligned} & 0 & \text{ for } & n<0,\\ & b_n-\gamma\,b_0\mu_n(\gamma)+\sum_{j=1}^k (a_j -\gamma\,b_j)\mu_{n-j}(\gamma) & \text{for } & 0\le n \le k,\\ & -\gamma\,b_0\mu_n(\gamma)+\sum_{j=1}^k (a_j -\gamma\,b_j)\mu_{n-j}(\gamma) & \text{for } & n>k. \end{aligned} \right. \end{equation} The following characterization appears in \cite[Theorem 2.2]{spijker2013}. \begin{thm}\label{thm1.1} Suppose the LMM satisfies \eqref{LMMbasicassumptions} and let $\gamma>0$ be given. Then $\gamma$ is a $\text{SCB}$ if and only if \begin{equation}\label{necsufcond2} -\gamma\in \mathrm{int}({\mathcal{S}}), \text{ and } \mu_n(\gamma)\ge 0 \text{ for all } n\in\mathbb{N}^+. \end{equation} \end{thm} The above theorem is based on the material developed in \cite{hundsdorferspijkermozartova2012}. In \cite[Section 6]{hundsdorferspijkermozartova2012}, the authors numerically determine the maximum \text{SCB}\ values for members of several parametric families of LMMs by repeatedly applying the following test. For a particular LMM and given $\gamma>0$, they check if $\gamma$ is a $\text{SCB}$ by choosing a large $N\in\mathbb{N}$, and verifying $\mu_n(\gamma)\ge 0$ for all $1\le n\le N$. However, as the authors point out in \cite{hundsdorferspijkermozartova2012}, it is not obvious (neither \textit{a priori} nor \textit{a posteriori}) how large $N$ one should choose to conclude---with high certainty---that $\mu_n(\gamma)\ge 0$ for all $n\in\mathbb{N}^+$. They typically use $N\approx 10^3$; as a comparison, see our Remark \ref{bdf4remark}. \subsection{The existence of a \text{SCB}}\label{section13} For a fixed LMM and given $\gamma>0$, Theorem \ref{thm1.1} provides a necessary and sufficient condition for $\gamma$ to be a $\text{SCB}$. But to decide---with the help of this theorem---whether $\nexists \mathrm{\ SCB}>0$, one should check condition \eqref{necsufcond2} for infinitely many $\gamma>0$ values, and for each $\gamma$, there are infinitely many sign conditions $\mu_n(\gamma)\ge 0$ to be verified. To overcome this difficulty, \cite[Theorem 3.1]{spijker2013} combines Theorem \ref{thm1.1} with the results of \cite{tijdeman2013} to present some \textit{simpler} conditions that are \textit{almost} necessary and sufficient for $\exists \mathrm{\ SCB}>0$. ``Almost'' in the previous sentence means that the conditions in \cite[Theorem 3.1]{spijker2013} are necessary and sufficient for $\exists \mathrm{\ SCB}>0$ (not in the full, but) in a slightly restricted class of LMMs; and ``simpler'' means that these conditions do not involve the parametric recursion $\mu_n(\gamma)$ in \eqref{mundef}, rather, a non-parametric recursion $\tau_n$ determined by the method coefficients as \begin{equation}\label{taundef} \tau_n :=\left\{ \begin{aligned} & 0 & \text{ for } & n<0,\\ & b_n+\sum_{j=1}^k a_j \tau_{n-j} & \text{for } & 0\le n \le k,\\ & \sum_{j=1}^k a_j \tau_{n-j} & \text{for } & n>k. \end{aligned} \right. \end{equation} Since we will not work with \cite[Theorem 3.1]{spijker2013} directly, here we cite only \cite[Corollary 3.3]{spijker2013}. \begin{cor}\label{spijkercorollary3.3} Suppose the LMM satisfies \eqref{LMMbasicassumptions}. 
We define \begin{equation}\label{n0def} n_0:=\min\{ n: 1\le n\le k \text{ and } \tau_n\ne 0\}. \end{equation} \begin{itemize} \item[(i)] If $\tau_n>0$ for all $n\ge n_0$, and the only root of the polynomial $\rho$ appearing in \eqref{rhosigmadef} with modulus $1$ is $1$, then $\exists \mathrm{\ SCB}>0$. \item[(ii)] If $\tau_n\le 0$ for some $n\ge n_0$ being a multiple of $n_0$, then $\nexists \mathrm{\ SCB}>0$. \end{itemize} \end{cor} \noindent The index $n_0$ defined above can be shown to exist due to consistency and zero-stability of the LMM. As an application of \cite[Theorem 3.1]{spijker2013} or Corollary \ref{spijkercorollary3.3}, \cite[Section 5]{spijker2013} analyzes some well-known classical LMMs, including \begin{itemize} \item the Adams--Moulton (or implicit Adams), \item the Adams--Bashforth (or explicit Adams), \item the BDF, \item the extrapolated BDF (EBDF), \item the Milne--Simpson and \item the Nystr\"om methods. \end{itemize} These investigations confirm and extend some earlier results \cite{hundsdorferruuthspiteri2003,hundsdorferruuth2006,hundsdorferspijkermozartova2011,hundsdorferspijkermozartova2012} concerning the existence of step-size coefficients for monotonicity or step-size coefficients for boundedness. The results of \cite[Section 5]{spijker2013} have the following form. Consider a discrete family of LMMs from the previous paragraph, parametrized by the step number $k\in\mathbb{N}$. Let $1\le k_\text{min}\le k_\text{max}\le +\infty$ denote some fixed bounds on $k$ coming from practical considerations (e.g. zero-stability of the LMM), that is, we consider the step numbers $k_\text{min}\le k\le k_\text{max}$. Then there exist two integers $0\le k_\text{mon} \le k_\text{bdd}$ such that \begin{itemize} \item[$\bullet$] $\exists \mathrm{\ SCM}>0$ $\Longleftrightarrow k_\text{min}\le k\le k_\text{mon}$; \item[$\bullet$] $(\nexists \mathrm{\ SCM}>0\text{ and }\exists \mathrm{\ SCB}>0 )\Longleftrightarrow k_\text{mon}+1\le k\le k_\text{bdd}$; \item[$\bullet$] $\nexists \mathrm{\ SCB}>0 \Longleftrightarrow k_\text{bdd}+1\le k\le k_\text{max}$. \end{itemize} It is to be understood that if $\ell_1\le k\le \ell_2$ with $\ell_1>\ell_2$ in any of the inequalities above, then the corresponding case does not occur. Some examples from \cite[Section 5]{spijker2013} are provided in the table below. \begin{table}[H] \centering \begin{tabular}{|c||c|c|c|c|} \hline \textbf{LMM family} & $k_\text{min}$ & $k_\text{max}$ & $k_\text{mon}$ & $k_\text{bdd}$ \\ \hline Adams--Bashforth & 1 & $+\infty$ & 1 & 3 \\ \hline BDF & 1 & 6 & 1 & 6 \\ \hline EBDF & 1 & 6 & 1 & 5 \\ \hline Milne--Simpson & 2 & $+\infty$ & 1 & 1 \\ \hline \end{tabular} \end{table} Out of the several LMMs investigated in \cite[Section 5]{spijker2013}, there are however two families---the BDF methods with $3\le k\le 6$ steps, and the EBDF methods with $3\le k\le 5$ steps---for which the corresponding inequalities \begin{equation}\label{taupositivity} \tau_n>0\quad \text{for } n\ge n_0 \end{equation} appearing in Corollary \ref{spijkercorollary3.3} are not verified completely. More precisely, \eqref{taupositivity} is verified only up to a finite value $n_0\le n\le N$ (for example, up to $N=500$), and it is observed that, for these large $n$ values, $\tau_n$ is already close enough to $\lim_{n\to+\infty}\tau_n=1$ to conclude (``we have no formal proof \ldots, but convincing numerical evidence instead'') the validity of \eqref{taupositivity} (see \cite[Conclusions 5.3 and 5.4]{spijker2013}). 
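Such finite sign checks are straightforward to reproduce. As a purely illustrative sketch (and, as emphasized above, not a proof valid for all $n$), the following Python fragment computes $\tau_n$ from \eqref{taundef} for the $3$-step BDF method, assuming its normalized coefficients in the form \eqref{LMMform} are $a_1=18/11$, $a_2=-9/11$, $a_3=2/11$ and $b_0=6/11$ (these can be read off from the BDF3 recursion in Section \ref{BDFmethodssection}), and verifies positivity up to $N=500$ in exact rational arithmetic.
\begin{verbatim}
# Finite sign check for tau_n, see (taundef); BDF3 coefficients assumed.
from fractions import Fraction as F

a = [F(18, 11), F(-9, 11), F(2, 11)]   # a_1, ..., a_k
b = [F(6, 11), F(0), F(0), F(0)]       # b_0, ..., b_k
k = len(a)
N = 500

tau = {n: F(0) for n in range(-k, 0)}  # tau_n = 0 for n < 0
for n in range(0, N + 1):
    tau[n] = sum(a[j - 1] * tau[n - j] for j in range(1, k + 1))
    if n <= k:
        tau[n] += b[n]

n0 = min(n for n in range(1, k + 1) if tau[n] != 0)
print("n0 =", n0)                                       # expected: 1
print(all(tau[n] > 0 for n in range(n0, N + 1)))        # expected: True
print(float(tau[N]))                                    # close to 1 here
\end{verbatim}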
\section{Main results}\label{section2} \subsection{Positivity of the $\tau_n$ sequences in the $\text{EBDF}$ family } \begin{thm}\label{thm2.1} Let us fix any $3\le k\le 5$ and consider the EBDF family with $k$ steps. Then the sequence $\tau_n$ satisfies $\tau_n>0$ for $n\ge n_0=1$ (see \eqref{taundef} and \eqref{n0def}). \end{thm} The above theorem completes and verifies the numerical proof of \cite[Conclusion 5.4]{spijker2013} regarding the EBDF methods with $k\in\{3,4,5\}$ steps. In the proof of Theorem \ref{thm2.1}, given in Sections \ref{proofsummaryEBDF} and \ref{EBDFmethodssection}, we explicitly represent $\tau_n$ as a linear combination of powers of algebraic numbers to estimate this sequence from below and hence prove its positivity. As a combination of \cite[Conclusion 5.4]{spijker2013} and our Theorem \ref{thm2.1} we obtain the following result. \begin{cor}In the EBDF family \begin{itemize} \item $\exists \mathrm{\ SCM}>0$ for the $1$-step EBDF method; \item $\nexists \mathrm{\ SCM}>0$ but $\exists \mathrm{\ SCB}>0$ for the $k$-step EBDF method with $k\in\{2,3,4,5\}$; \item $\nexists \mathrm{\ SCB}>0$ for the $6$-step EBDF method. \end{itemize} \end{cor} \subsection{Exact optimal \text{SCB}\ values in the BDF family}\label{section23exactoptimal} We complete the numerical proof of \cite[Conclusion 5.3]{spijker2013} concerning the existence of \text{SCB}\ for the BDF methods with $3\le k\le 6$ steps. However, instead of just proving the positivity of the corresponding sequences $\tau_n$, we directly determine the exact and optimal values of the \text{SCB}\ constants for $2\le k\le 6$. For the sake of completeness, the $k=1$ case (the implicit Euler method) is also included. The approximate numerical values of $\gamma_{\mathrm{sup},k}$ below have been rounded down. The polynomial coefficients---see \eqref{coeffdef} for the notation---corresponding to the cases $k=5$ and $k=6$ have been aligned for easier readability (and they are to be read in the usual way, horizontally from left to right). 
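The coefficient lists below can be decoded numerically in a few lines. The following sketch is only an illustration of the notation \eqref{coeffdef} (floating-point root finding, not the exact algebraic computations behind the theorem): it recovers the approximate value of $\gamma_{\mathrm{sup},3}$ from the coefficient list stated in Theorem \ref{thm2.2}.
\begin{verbatim}
# Decode a coefficient list {a_n, ..., a_0} and pick the smallest real root.
import numpy as np

coeffs = [5184, -539352, 4277340, -7093698, 3248425]   # gamma_sup,3 below
roots = np.roots(coeffs)
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
print([round(r, 6) for r in real_roots])   # smallest one ~ 0.831264
\end{verbatim}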
\begin{thm}\label{thm2.2} The optimal values of the step-size coefficients for boundedness $\gamma_{\mathrm{sup},k}$ in the BDF family are given by the following exact algebraic numbers: \begin{itemize} \item $\gamma_\mathrm{sup,1}=+\infty;$ \item $\gamma_\mathrm{sup,2}=1/2;$ \item $\gamma_\mathrm{sup,3}\approx 0.831264155297$ is the smallest real root of the 4th-degree polynomial\\ {\centerline{\footnotesize{$\{5184, -539352, 4277340, -7093698, 3248425\};$}}} \item $\gamma_\mathrm{sup,4}\approx 0.486220284043$ is the unique real root of the 5th-degree polynomial\\ \centerline{\footnotesize{$\{147456, -4065024, 97751296, -178921248, 146499984, -39945535\};$}} \item $\gamma_\mathrm{sup,5}\approx 0.304213712525$ is the smaller real root of the 10th-degree polynomial \begin{table}[H] \center \footnotesize{ \hskip0.6cm \begin{tabular}{r r r} \{9183300480000000000, & 85812841152000000000, & 11922800956027200000000, \\ $-$158236459797931200000000, & 1300372831455671124000000, & $-$3469598208824475416400000, \\ 5222219230639370911710000, & $-$4938342912266137089480000, & 2829602902356809601352800,\\ $-$897140360120473365541380, & 113406532200497326720157\}; \end{tabular}} \end{table} \item $\gamma_\mathrm{sup,6}\approx 0.131359487166$ is the smaller real root of the 18th-degree polynomial \begin{table}[H] \fontsize{8}{11}\selectfont \center \begin{tabular}{rr} \{301499153838045275528311603200000000, & 122639585534504839818945201438720000000, \\ 384963168041618344234237602954215424000000, & 27549570033081885223128023207444584857600000,\\ 688321830171904949334479202088109368934400000, & $-$3841469418723966761157769983211793789485056000,\\ 114843588487750902323103668249803599786305126400, & $-$1006269459507863531788997342497299304467812843520,\\ 5587246198359348966734174906666273788289332150272, & $-$17429944795858965010882996868073155329514839408640, \\ 35959114141443095864886240750517884787497897431040, & $-$53357827225132542443145327442029250536098863687680,\\ 58779078470720235677143648519968524504336318905600, & $-$48117131040654192740877887801688549303578668712064, \\ 28809153195856173726312967696976168633917662024240, & $-$12158530101520566099221248226347019432756062262240, \\ 3383327891741061214240426918034255832010259451480, & $-$541370800878125712591610585145194659522378896880, \\ 33328092641186254550760247661168148768262937067\}. & \end{tabular} \end{table} \end{itemize} \end{thm} The proofs of the above results are given in Sections \ref{proofsummaryBDF} and \ref{BDFmethodssection}. From a technical point of view, the proof of the $k=3$ case is different from the other cases, see Remark \ref{remark32}. \subsection{Exact optimal \text{SCB}\ values in the Adams--Bashforth family} To further illustrate our techniques, we have computed the largest \text{SCB}\ values for an explicit LMM family as well; we chose the Adams--Bashforth methods with $1\le k\le 4$ steps. For $k=1$ (i.e.~for the explicit Euler method) it is known (\cite[Theorem 5.2]{spijker2013}) that $\exists \mathrm{\ SCM}>0$, hence $\exists \mathrm{\ SCB}>0$. For any $k\ge 4$, \cite[Theorem 5.2]{spijker2013} proves---with the help of the sequence $\tau_n$---that $\nexists \mathrm{\ SCB}>0$. The reason we include the $k=4$ case here is to show an example of using the parametric sequence $\mu_n(\gamma)$ and Theorem \ref{thm1.1} instead of $\tau_n$ in Corollary \ref{spijkercorollary3.3} (\textit{ii}) to detect $\nexists \mathrm{\ SCB}>0$. 
\begin{thm} The optimal values of the step-size coefficients for boundedness in the Adams--Bashforth family are given by the rational numbers below: \begin{itemize} \item $\gamma_\mathrm{sup,1}=1;$ \item $\gamma_\mathrm{sup,2}=4/9\approx 0.44444;$ \item $\gamma_\mathrm{sup,3}=84/529\approx 0.15879;$ \item for $k=4$, $\nexists \mathrm{\ SCB}>0$. \end{itemize} \end{thm} \noindent The proofs of these results are found in Sections \ref{proofsummaryAB} and \ref{ABmethodssection}. \section{Proofs}\label{section3} \subsection{Summary of the proof techniques for the EBDF methods}\label{proofsummaryEBDF} The proofs in Section \ref{EBDFmethodssection} for the EBDF methods use the following argument. Since $\tau_n$ in \eqref{taundef} is a solution of a linear recursion, it is represented as \begin{equation}\label{taurepresentation} \tau_n=\sum_{j=1}^k c_j \varrho_{j}^n, \end{equation} where the quantities $\varrho_{j}\in\mathbb{C}$ are the roots of the corresponding characteristic polynomial (without multiple roots for each EBDF method), and the constants $c_j\in\mathbb{C}$ are determined by the starting values. By bounding $|c_j|$ and $|\varrho_j|$, we prove the inequality $\tau_n>0$ for all $n\ge 1$. \subsection{Summary of the proof techniques for the BDF methods}\label{proofsummaryBDF} The proofs in Section \ref{BDFmethodssection} for the BDF methods are based on the following. For any given $\gamma>0$, the linear recursion \eqref{mundef} takes the form \begin{equation}\label{genmunrec} \sum_{j=0}^k c_{j}(\gamma)\mu_{n-j}(\gamma)=0\quad (k\le n\in\mathbb{N}), \end{equation} where the coefficients $c_{j}(\gamma)$ ($0\le j\le k$) and the starting values $\mu_j(\gamma)$ ($0\le j\le k-1$) are determined by the LMM. The corresponding characteristic polynomial is denoted by \begin{equation}\label{Ppkcharpoly} {\mathcal{P}}_k(\varrho,\gamma):=\sum_{j=0}^k c_{j}(\gamma)\varrho^{k-j}. \end{equation} We apply the characterization in Theorem \ref{thm1.1} together with Observations 1-4 presented below. Lemma \ref{upperestlemma1} and Lemma \ref{upperestlemma2} will be used to bound $\gamma_\mathrm{sup}$ from above for the $k$-step BDF methods with $k=3$ and $k\in\{2, 4, 5, 6\}$, respectively. Then, by using representations similar to \eqref{taurepresentation} and Observation 4, we show in each case that the proposed upper bound for $\gamma_\mathrm{sup}$ is sharp. \noindent \quad $\bullet$ \textbf{Observation 1} \noindent For a $k$-step BDF method ($1\le k\le 6$), it is known \cite{hairerwanner} that $-\gamma\in \mathrm{int}({\mathcal{S}})$ for any $\gamma>0$. Therefore, the condition \eqref{necsufcond2} in Theorem \ref{thm1.1} reduces to $\mu_n(\gamma)\ge 0$ ($n\in\mathbb{N}^+$). \noindent \quad $\bullet$ \textbf{Observation 2} \noindent It is easily seen from Definition \ref{scbdef} that if $\gamma_0>0$ is a $\text{SCB}$, then each number from the interval $(0,\gamma_0]$ is also a $\text{SCB}$; thus, by \eqref{necsufcond2}, we also have $\mu_n(\gamma)\ge 0$ for all $n\in\mathbb{N}^+$ and $\gamma\in(0,\gamma_0]$. Since the function $\gamma\mapsto \mu_n(\gamma)$ (clearly being a rational function for any fixed $n\in\mathbb{N}$ due to the form of the linear recursion \eqref{mundef}) cannot be non-negative in a neighborhood of a simple zero, we immediately obtain the following upper bound on $\gamma_\mathrm{sup}$ (in the lemma, $\mu'_n$ denotes the derivative of the function $\mu_n(\cdot)$). 
\begin{lem}\label{upperestlemma1} Suppose there exist some $n\in\mathbb{N}^+$ and $\gamma^*>0$ such that $\mu_n(\gamma^*)=0$ and $\mu'_n(\gamma^*)\in\mathbb{R}\setminus\{0\}$. Then $\gamma_\mathrm{sup}\le\gamma^*$. \end{lem} \noindent \quad $\bullet$ \textbf{Observation 3} \noindent The following lemma will be applied to bound $\gamma_\mathrm{sup}$ from above when the characteristic polynomial has a unique pair of complex conjugate roots that are dominant. \begin{lem}\label{upperestlemma2} Suppose that $z\in\mathbb{C}\setminus\mathbb{R}$ with $|z|=1$, $w\in\mathbb{C}\setminus\{0\}$, and a \emph{real} sequence $\nu_n\to 0$ ($n\to +\infty$) are given. Then $w z^n+\bar{w} (\bar{z})^n+\nu_n<0$ for infinitely many $n\in\mathbb{N}$. \end{lem} \begin{proof} We introduce $\varphi, \psi\in[0,2\pi)$ via the relations $z=\exp(i\varphi)$ and $w=|w|\exp(i\psi)$. Due to symmetry, we can suppose that $\varphi\in(0,\pi)$, so there is a $\delta\in(0,\pi/2)$ such that $\delta<\varphi<\pi-\delta$. Then \begin{equation}\label{zwnun} w z^n+\bar{w} (\bar{z})^n+\nu_n=2|w|\cos\left(n\varphi+\psi\right)+\nu_n. \end{equation} We show that \begin{equation}\label{cosineq} \cos\left(n\varphi+\psi\right)\le\cos(\pi/2+\delta/2) \text{ for infinitely many } n. \end{equation} Indeed, the inequality in \eqref{cosineq} holds if and only if $n\in\mathbb{N}$ and $k\in\mathbb{Z}$ are chosen such that \begin{equation}\label{LHSRHSineq} \text{LHS}:=(\pi/2+\delta/2-\psi+2\pi k)/\varphi\le n \le(3\pi/2-\delta/2-\psi+2\pi k)/\varphi=:\text{RHS}. \end{equation} But $\text{RHS}-\text{LHS}=(\pi-\delta)/\varphi>1$, so, by taking $k\in\mathbb{N}$ larger and larger, we see that there are infinitely many $n\in\mathbb{N}$ satisfying \eqref{LHSRHSineq}. Finally, by using $|w|\ne 0$, \eqref{cosineq}, $\cos(\pi/2+\delta/2)<0$ and $\nu_n\to 0$, we get that \eqref{zwnun} is also negative for infinitely many $n$ indices. \end{proof} \noindent \quad $\bullet$ \textbf{Observation 4} \noindent By taking into account the first sentence of Observation 2, we get the following lower bound. \begin{equation}\label{sect32intro} \exists\,\gamma_0>0 : \mu_n(\gamma_0)\ge 0\ (\forall\,n\in\mathbb{N}^+) \implies \gamma_\mathrm{sup}\ge \gamma_0. \end{equation} \begin{rem} Notice the similarities between Lemma \ref{upperestlemma1} and \cite[Lemma 4.5]{kraaijevanger}, and between Lemma \ref{upperestlemma2} and \cite[Lemma 3.1]{kraaijevanger}. Also compare Lemma \ref{upperestlemma2} and \cite[Theorem 4.3]{spijker2013}. \end{rem} \begin{rem}\label{bdf4remark} Obtaining the exact value of $\gamma_\mathrm{sup,4}\approx 0.48622$ proved to be significantly harder than determining that of $\gamma_\mathrm{sup,3}$, because we could not apply Lemma \ref{upperestlemma1} to bound $\gamma_\mathrm{sup,4}$ from above. The value of $\gamma_\mathrm{sup,4}$ was found via a series of numerical experiments. For example, to see $\gamma_\mathrm{sup,4}<0.48625$, one checks that the sequence $\mu_n$ in Theorem \ref{thm1.1} for $1\le n\le 27000$ satisfies \[ \mu_n({48625/100000})<0 \Longleftrightarrow n\in\{ 26814, 26875, 26886, 26936, 26947, 26997\}. \] To find all these six indices, we used 16000 digits of precision to evaluate the terms of the recursion $\mu_n({48625/100000})$---15000 digits would be insufficient. In fact, these experiments led to the formulation of Lemma \ref{upperestlemma2}. 
\end{rem} \begin{rem}\label{remark32} Regarding the determination of $\gamma_\mathrm{sup,3}$, the characteristic polynomial ${\mathcal{P}}_3(\cdot,\gamma)$ has one real root $\varrho_1(\gamma)>0$ and a pair of complex conjugate roots $\varrho_{2,3}(\gamma)$ with $|\varrho_1(\gamma)|=|\varrho_2(\gamma)|=|\varrho_3(\gamma)|$ for $\gamma=5/6\approx 0.83333$. From Lemma \ref{upperestlemma2} we would get the bound $\gamma_\mathrm{sup,3}\le 5/6$, but this bound is not sharp. However, Lemma \ref{upperestlemma1} with $n=6$ yields the exact value of $\gamma_\mathrm{sup,3}\approx 0.83126$. \end{rem} \subsection{Summary of the proof techniques for the Adams--Bashforth methods}\label{proofsummaryAB} Since these LMMs are explicit, we have $b_0=0$ in \eqref{LMMform}, so from \eqref{mundef} we see that for any $n\in\mathbb{N}$ the function $\gamma\mapsto \mu_n(\gamma)$ is a polynomial and $\mu_0(\gamma)=0$. For $1\le k\le 3$, we study the roots of these polynomials $\mu_n(\cdot)$ for small $n$ to conjecture the value of $\gamma_{\mathrm{sup},k}$. Of course, Observation 1 from the previous section cannot be applied now, because we have to take into account the condition $-\gamma\in \mathrm{int}({\mathcal{S}})$ in \eqref{necsufcond2} as well. So we use Lemma \ref{upperestlemma1} together with \begin{equation}\label{17mod} (\exists\,\gamma_0>0 : -\gamma_0\in \mathrm{int}({\mathcal{S}}) \text{ and } \mu_n(\gamma_0)\ge 0\ (\forall\,n\in\mathbb{N}^+)) \implies \gamma_\mathrm{sup}\ge \gamma_0 \end{equation} to verify that the conjectured $\gamma_\mathrm{sup}$ is indeed the optimal \text{SCB}. \begin{rem} For $2\le k\le 3$, it turns out that the dominant root of the characteristic polynomial ${\mathcal{P}}_k(\cdot,\gamma)$ in \eqref{Ppkcharpoly} is positive real for $\gamma=\gamma_{\mathrm{sup},k}$, so in these cases a result similar to Lemma \ref{upperestlemma2} is not applicable. \end{rem} \subsection{Proofs for the EBDF methods}\label{EBDFmethodssection} The coefficients for the EBDF methods are listed, for example, in \cite{ruuthhundsdorfer2005}. \subsubsection{The EBDF3 method} For this method, the recursion \eqref{taundef} takes the form \begin{equation}\label{EBDF3tau} 11 \tau_n -18 \tau_{n-1}+9 \tau_{n-2}-2 \tau_{n-3}=0\quad (n\ge 4) \end{equation} with \[ \tau_1={18}/{11},\quad \tau_2={126}/{121},\quad \tau_3={1212}/{1331}. \] We have $\tau_0=0$ and $n_0=1$, hence it is enough to prove $\tau_n>0$ for all $n\ge 1$. One root of the characteristic polynomial corresponding to \eqref{EBDF3tau} is $1$, so we get the representation \[ \tau_n=1+\left(\frac{7}{22}+\frac{i \sqrt{39}}{22}\right)^n+\left(\frac{7}{22}-\frac{i \sqrt{39}}{22}\right)^n \quad (n\ge 1). \] But for $n\ge 1$ we have \[ \left|\frac{7}{22}+\frac{i \sqrt{39}}{22}\right|^n+ \left|\frac{7}{22}-\frac{i \sqrt{39}}{22}\right|^n=2\cdot \left(\frac{2}{11}\right)^{{n}/{2}}\le 9/10, \] and the positivity of $\tau_n>0$ follows. \subsubsection{The EBDF4 method} The recursion \eqref{taundef} now reads \[ 25 \tau_n-48 \tau_{n-1}+36 \tau_{n-2}-16\tau_{n-3}+3\tau_{n-4}=0\quad (n\ge 5) \] with \[ \tau_1={48}/{25},\quad \tau_2={504}/{625},\quad \tau_3={10992}/{15625},\quad \tau_4={366516}/{390625}. \] We again have $\tau_0=0$ and $n_0=1$. The explicit form of the sequence is \[ \tau_n=1+\sum_{j=1}^3 \varrho_{j}^n \quad (n\ge 1), \] where $\varrho_1\in\mathbb{R}$ and $\varrho_{2,3}\in\mathbb{C}\setminus\mathbb{R}$ are the three roots of the cubic polynomial $\{25, -23, 13, -3\}$. 
This time we have for $n\ge 3$ that \[ \sum_{j=1}^3 |\varrho_{j}|^n\le 3\cdot (3/5)^{n}\le 9/10, \] proving $\tau_n>0$ for $n\ge 1$. \subsubsection{The EBDF5 method} For this method, the recursion \eqref{taundef} is \[ 137 \tau_n-300 \tau_{n-1}+300\tau_{n-2}-200\tau_{n-3}+75\tau_{n-4}-12 \tau _{n-5}=0\quad (n\ge 6) \] with \[ \tau_1={300}/{137},\quad \tau_2={7800}/{18769},\quad \tau_3={1271400}/{2571353}, \] \[ \tau_4={415574100}/{352275361},\quad \tau_5={64978409160}/{48261724457}. \] We have $\tau_0=0$ and $n_0=1$. The explicit form of the sequence is \[ \tau_n=1+\sum_{j=1}^4 \varrho_{j}^n \quad (n\ge 1), \] where $\varrho_{1,2,3,4}\in\mathbb{C}\setminus\mathbb{R}$ are the four roots of the polynomial $\{137, -163, 137, -63, 12\}$. But $|\varrho_{1,2,3,4}|\le 71/100$, so for $n\ge 5$ we have \[ \sum_{j=1}^4 |\varrho_{j}|^n\le 4\cdot \left({71}/{100}\right)^n\le 9/10, \] proving $\tau_n>0$ for $n\ge 1$. \subsection{Proofs for the BDF methods}\label{BDFmethodssection} The coefficients for the BDF methods are listed, for example, in \cite{hairerwanner}. \subsubsection{The BDF1 method} We include this method here for the sake of completeness. The recursion \eqref{genmunrec} now has the form \[ (\gamma +1) \mu_n(\gamma)-\mu_{n-1}(\gamma)=0\quad\quad (n\ge 1) \] with \[ \mu_0(\gamma)=\frac{1}{\gamma+1}. \] The explicit solution is $\mu_n(\gamma)=1/(\gamma+1)^{n+1}>0$, so, due to Theorem \ref{thm1.1}, we have that $\gamma$ is a $\text{SCB}$ for any $\gamma>0$. \subsubsection{The BDF2 method} The recursion \eqref{genmunrec} takes the form \[ (2 \gamma +3) \mu_n(\gamma)-4 \mu_{n-1}(\gamma)+\mu_{n-2}(\gamma)=0\quad\quad (n\ge 2) \] with \[ \mu_0(\gamma)=\frac{2}{2 \gamma +3},\quad \mu_1(\gamma)=\frac{8}{(2 \gamma +3)^2}. \] Its characteristic polynomial ${\mathcal{P}}_2(\cdot,\gamma)$ is quadratic for $\gamma>0$. This polynomial has \begin{itemize} \item two distinct real roots for $0<\gamma<1/2$; \item a double real root for $\gamma=1/2$; \item a pair of complex conjugate roots for $\gamma>1/2$. \end{itemize} For any fixed $\gamma>1/2$ we thus have \[ \mu_n(\gamma)=|\varrho_1(\gamma)|^n\left[c_{1}(\gamma)\left(\frac{\varrho_1(\gamma)}{|\varrho_1(\gamma)|}\right)^n+\overline{c_{1}(\gamma)}\left(\frac{\overline{\varrho_1(\gamma)}}{|\varrho_1(\gamma)|}\right)^n\right] \] with a suitable $c_{1}(\gamma)\in\mathbb{C}\setminus\{0\}$ and $\varrho_1(\gamma)\in\mathbb{C}\setminus\mathbb{R}$. Due to Lemma \ref{upperestlemma2} with $\nu_n\equiv 0$, the expression in $[\ldots]$ is negative for infinitely many $n$. Hence, by Theorem \ref{thm1.1}, $1/2+\varepsilon$ is not a $\text{SCB}$ for any $\varepsilon>0$, implying $\gamma_\mathrm{sup,2}<1/2+\varepsilon$. Conversely, by verifying $\mu_n(1/2)=2^{-n-1} (n+1)\ge 0$ for all $n\in\mathbb{N}$ and taking into account \eqref{sect32intro}, we see that $\gamma_\mathrm{sup,2}\ge 1/2$, so the proof is complete. \subsubsection{The BDF3 method} The recursion \eqref{genmunrec} is \[ (6 \gamma +11) \mu_n(\gamma)-18\mu_{n-1}(\gamma)+9 \mu_{n-2}(\gamma)-2 \mu_{n-3}(\gamma)=0\quad\quad (n\ge 3) \] with \[ \mu_0(\gamma)=\frac{6}{6 \gamma +11},\quad \mu_1(\gamma)=\frac{108}{(6 \gamma +11)^2}, \quad \mu_2(\gamma)=\frac{54 (-6 \gamma+25)}{(6 \gamma +11)^3}. \] Let us consider the term \[ \mu_6(\gamma)=\frac{6 \left(5184 \gamma ^4-539352 \gamma ^3+4277340 \gamma ^2-7093698 \gamma +3248425\right)}{(6 \gamma +11)^7}. 
\] The polynomial $\{5184, -539352, 4277340, -7093698, 3248425\}$ in the numerator has 4 real roots; let $\gamma^*\approx 0.831264$ denote its smallest root (the other 3 zeros are located at $\approx 1.22747$, $\approx 6.42689$ and $\approx 95.556$). Then, due to Lemma \ref{upperestlemma1}, we have $\gamma_\mathrm{sup,3}\le\gamma^*$. To complete the proof, we show that $\mu_n(\gamma^*)\ge 0\ (\forall\,n\in\mathbb{N})$, meaning that $\gamma_\mathrm{sup,3}\ge\gamma^*$ by \eqref{sect32intro}. Indeed, for $\gamma=\gamma^*$, the explicit form of the recursion is \[ \mu_n(\gamma^*)=c_1 \varrho_{1}^n+c_2 \varrho_{2}^n+\overline{c_2} (\overline{\varrho_{2}})^n \quad\quad (n\ge 0), \] where \begin{itemize} \item $\varrho_1\approx 0.500518$ is the largest real root of the polynomial \begin{itemize} \item[] $P_{\text{BDF31}}:=\{34012224, -85030560, 108650160, -91171656, 55033668, $ \item[] $-25076142, 8777889, -2366334, 486000, -75816, 10080, -1152, 64\}$; \end{itemize} \item $\varrho_2\approx 0.312678 + 0.390087 i$ is the root of $P_{\text{BDF31}}$ with the largest real part; \item $c_1\approx 0.50155509$ is the largest real root of the polynomial \begin{itemize} \item[] $P_{\text{BDF32}}:=\{91221089034315373632, -76017574195262811360,$ \item[] $26664298621295150160, -9975778735584785400, 2799915334883820972,$ \item[] $-498764709912473586, 93247136355378087, -8606361446997984,$ \item[] $425210419226880, -10041822761472, 76685377536, -237993984, 262144\}$; \end{itemize} \item $c_2\approx -0.0631319 - 0.270418 i$ is the root of $P_{\text{BDF32}}$ with the smallest real part. \end{itemize} \begin{rem}\label{rem3612degree} The 12th-degree algebraic numbers $\varrho_{1,2}$ are of course roots of the cubic characteristic polynomial \eqref{Ppkcharpoly}, with $\gamma$ replaced by the 4th-degree algebraic number $\gamma^*$; that is, ${\mathcal{P}}_3(\varrho_{1,2},\gamma^*)=0$. \end{rem} \begin{rem} Notice that $|\varrho_2|\approx 0.499935$ is relatively close to $|\varrho_1|\approx 0.500518$. This results in an increased computational cost needed to finish the proof (cf. Remark \ref{remark32}). \end{rem} Now, clearly, $\mu_n(\gamma^*)=\varrho_1^n\left[c_1+c_2 \left({\varrho_{2}}/{\varrho_{1}}\right)^n+\overline{c_2} \left({\overline{\varrho_{2}}}/{\varrho_{1}}\right)^n\right]$, and we have \[ \left| c_2 \left(\frac{\varrho_{2}}{\varrho_{1}}\right)^n+\overline{c_2} \left(\frac{\overline{\varrho_{2}}}{\varrho_{1}}\right)^n\right|\le 2 |c_2| \left|\frac{\varrho_{2}}{\varrho_{1}}\right|^n< 2\cdot\frac{2777}{10000} \left(\frac{9989}{10000}\right)^n. \] On the other hand, \[ 2\cdot\frac{2777}{10000} \left(\frac{9989}{10000}\right)^n<\frac{50155}{100\,000}<c_1 \] for $n\ge 93$, therefore $\mu_n(\gamma^*)>0$ for $n\ge 93$. Finally, one checks that $\mu_n(\gamma^*)>0$ for $n\in\{0, 1, \ldots, 92\}\setminus\{6\}$ (recall that $\mu_6(\gamma^*)=0$), so the proof is complete. \begin{rem} We have $\mu_{92}(\gamma^*)\approx 1.585176\cdot 10^{-28}$. \end{rem} \subsubsection{The BDF4 method}\label{BDF4subsubsection} The recursion \eqref{genmunrec} is \[ (12 \gamma +25) \mu_n(\gamma)-48 \mu_{n-1}(\gamma)+36 \mu_{n-2}(\gamma)-16 \mu_{n-3}(\gamma)+ 3 \mu_{n-4}(\gamma)=0 \quad\quad (n\ge 4) \] with \[ \mu_0(\gamma)=\frac{12}{12 \gamma +25},\quad \mu_1(\gamma)=\frac{576}{(12 \gamma +25)^2}, \quad \mu_2(\gamma)=\frac{1296 (-4 \gamma+13 )}{(12 \gamma +25)^3}, \] \[ \mu_3(\gamma)=\frac{192 \left(144 \gamma ^2-1992 \gamma +2137\right)}{(12 \gamma +25)^4}. 
\] For $\gamma>0$, the characteristic polynomial of the recursion, ${\mathcal{P}}_4(\cdot,\gamma)$, has multiple roots if and only if $\gamma=7/12\approx 0.5833$. In the rest of the proof, it will be sufficient to focus on the interval $0<\gamma<7/12$. For any $0<\gamma<7/12$, let us denote the four distinct roots of ${\mathcal{P}}_4(\cdot,\gamma)$ by $\varrho_{1,2,3,4}(\gamma)$. Then $0<\varrho_2(\gamma)<\varrho_1(\gamma)<1$ and $\varrho_{3,4}(\gamma)\in\mathbb{C}\setminus\mathbb{R}$. Let us denote by \begin{equation}\label{gamstar048622def} \gamma^*\approx 0.48622 \end{equation} the 5th-degree algebraic number listed in the row of $\gamma_\mathrm{sup,4}$ in Theorem \ref{thm2.2}. By separating the real and imaginary parts of ${\mathcal{P}}_4(x+i y,\gamma)$, then setting up and solving the appropriate system of polynomial equations over the reals, we can prove that \begin{itemize} \item $|\varrho_{3}(\gamma)|=|\varrho_{4}(\gamma)|<\varrho_1(\gamma)$ for $0<\gamma<\gamma^*$; \item $|\varrho_3(\gamma^*)|=|\varrho_4(\gamma^*)|=\varrho_1(\gamma^*)$ for $\gamma=\gamma^*$; \item $\varrho_1(\gamma)<|\varrho_3(\gamma)|=|\varrho_4(\gamma)|$ for $\gamma^*<\gamma<7/12$. \end{itemize} In other words, the positive real root $\varrho_1(\gamma)$ is no longer dominant for $\gamma>\gamma^*$. First we prove $\gamma_\mathrm{sup,4}\ge \gamma^*$ by proving $\mu_n(\gamma^*)>0$ ($\forall\,n\in\mathbb{N}$), see \eqref{sect32intro}. For $\gamma=\gamma^*$ we have the representation \[ \mu_n(\gamma^*)=c_1(\gamma^*) \left(\varrho_1(\gamma^*)\right)^n+c_2(\gamma^*) \left(\varrho_2(\gamma^*)\right)^n+ c_3(\gamma^*) \left(\varrho_3(\gamma^*)\right)^n+\overline{c_3(\gamma^*)} \left(\overline{\varrho_3(\gamma^*)}\right)^n \quad (n\ge 0), \] where \begin{itemize} \item $\varrho_1(\gamma^*)\approx 0.605651$ is the unique real root of the polynomial \begin{itemize} \item[] $\{96, -144, 86, -30, 9, -2\}$; \end{itemize} \item $\varrho_2(\gamma^*)\approx 0.437941$ is the unique real root of the polynomial \begin{itemize} \item[] $\{7080, -8928, 6410, -2826, 621, -54\}$; \end{itemize} \item $\varrho_3(\gamma^*)\approx 0.25655 + 0.54863 i$ is the root of the polynomial \begin{itemize} \item[] $\{82944, -140544, 160624, -112944, 60516, -27800, 12636, -5832, 1969, -384, 36\}$ \end{itemize} having the property $|\varrho_3(\gamma^*)|=\varrho_1(\gamma^*)$; \item $c_1(\gamma^*)\approx 1.21912$ is the unique real root of the polynomial \begin{itemize} \item[] $\{638976, -1308672, 767680, -148848, 255, -16\}$; \end{itemize} \item $c_2(\gamma^*)\approx -0.583734$ is the unique real root of the polynomial \begin{itemize} \item[] $\{15582086307840, 11032756568064, 1374924543424, 141329286000,$ \item[] $\ -715299903, 8503056\}$; \end{itemize} \item $c_3(\gamma^*)\approx -0.123106 - 0.169757 i$ is the root of the polynomial \begin{itemize} \item[] $\{654252399875063808, 147972616215330816, 117436085430648832,$ \item[] $\ 23378947275620352, 6522272391303168, 504776558675968, 75411131715456,$ \item[] $\ -3364763918784, 58367021905, -452933856, 1679616\}$ \end{itemize} having the smallest real part. \end{itemize} \begin{rem} Here again we have converted polynomials whose coefficients are algebraic numbers to higher-degree polynomials with integer coefficients (cf. Remark \ref{rem3612degree}). 
\end{rem} By rewriting $\mu_n(\gamma^*)$ ($n\in\mathbb{N}$) as \[ \left(\varrho_1(\gamma^*)\right)^n\left[c_1(\gamma^*) +c_2(\gamma^*) \left(\frac{\varrho_2(\gamma^*)}{\varrho_1(\gamma^*)}\right)^n+ c_3(\gamma^*) \left(\frac{\varrho_3(\gamma^*)}{\varrho_1(\gamma^*)}\right)^n+\overline{c_3(\gamma^*)} \left(\frac{\overline{\varrho_3(\gamma^*)}}{\varrho_1(\gamma^*)}\right)^n\right], \] and noticing that \[ \left| c_2(\gamma^*) \left(\frac{\varrho_2(\gamma^*)}{\varrho_1(\gamma^*)}\right)^n+ c_3(\gamma^*) \left(\frac{\varrho_3(\gamma^*)}{\varrho_1(\gamma^*)}\right)^n+\overline{c_3(\gamma^*)} \left(\frac{\overline{\varrho_3(\gamma^*)}}{\varrho_1(\gamma^*)}\right)^n\right|\le \] \[ |c_2(\gamma^*)| \left|\frac{\varrho_2(\gamma^*)}{\varrho_1(\gamma^*)}\right|^n+ 2|c_3(\gamma^*)| \left|\frac{\varrho_3(\gamma^*)}{\varrho_1(\gamma^*)}\right|^n= |c_2(\gamma^*)| \left|\frac{\varrho_2(\gamma^*)}{\varrho_1(\gamma^*)}\right|^n+ 2|c_3(\gamma^*)|< \] \[ |c_2(\gamma^*)| + 2|c_3(\gamma^*)|< 11/10 <|c_1(\gamma^*)|, \] we see that $\mu_n(\gamma^*)>0$ for all $n\in\mathbb{N}$. Thus we have proved $\gamma_\mathrm{sup,4}\ge \gamma^*$. To prove the converse inequality, $\gamma_\mathrm{sup,4}\le \gamma^*$, we apply Lemma \ref{upperestlemma2}. We set $\gamma:=\gamma^*+\varepsilon$ with some sufficiently small, but arbitrary $\varepsilon>0$. Then for $n\in\mathbb{N}$ we have \[ \mu_n(\gamma)=\left|\varrho_3(\gamma)\right|^n\left( \nu_n+ w z^n+\bar{w} (\bar{z})^n\right) \] with $z:={\varrho_3(\gamma)}/{|\varrho_3(\gamma)|}$, $w:=c_3(\gamma)$ and \[ \nu_n:=c_1(\gamma) \left(\frac{\varrho_1(\gamma)}{|\varrho_3(\gamma)|}\right)^n+c_2(\gamma) \left(\frac{\varrho_2(\gamma)}{|\varrho_3(\gamma)|}\right)^n. \] Due to the properties of the numbers $\varrho_j(\gamma)$ listed in the paragraph of \eqref{gamstar048622def}, we know that $\mathbb{R}\ni \nu_n\to 0$ as $n\to+\infty$. Moreover, since the functions $\varrho_3(\cdot)$ and $c_3(\cdot)$ are continuous (also) at $\gamma^*$, we have $z\in\mathbb{C}\setminus\mathbb{R}$, $|z|=1$ and $w\in\mathbb{C}\setminus\{0\}$, for $\varepsilon>0$ small enough. Lemma \ref{upperestlemma2} then shows that $\mu_n(\gamma)$ cannot be non-negative for all $n\in\mathbb{N}$, so by Theorem \ref{thm1.1} we obtain that $\gamma_\mathrm{sup,4}< \gamma^*+ \varepsilon$. \subsubsection{The BDF5 method} The recursion \eqref{genmunrec} is \[ (60 \gamma +137) \mu_n(\gamma)-300 \mu_{n-1}(\gamma)+300 \mu_{n-2}(\gamma) -200 \mu_{n-3}(\gamma)+ \] \[ 75 \mu_{n-4}(\gamma)-12 \mu_{n-5}(\gamma)=0\quad\quad (n\ge 5) \] with \[ \mu_0(\gamma)=\frac{60}{60 \gamma +137},\quad \mu_1(\gamma)=\frac{18000}{(60 \gamma +137)^2}, \quad \mu_2(\gamma)=\frac{18000 (-60 \gamma+163 )}{(60 \gamma +137)^3}, \] \[ \mu_3(\gamma)=\frac{12000 \left(3600 \gamma ^2-37560 \gamma +30469\right)}{(60 \gamma +137)^4}, \] \[ \mu_4(\gamma)=\frac{4500 \left(-216000 \gamma ^3+8600400 \gamma ^2-22146420 \gamma +10021847\right)}{(60 \gamma +137)^5}; \] see Figure \ref{BDF5fig}. The characteristic polynomial of the recursion ${\mathcal{P}}_5(\cdot,\gamma)$ has no multiple roots for $\gamma>0$. We denote the five distinct roots of ${\mathcal{P}}_5(\cdot,\gamma)$ by $\varrho_{1,2,3,4,5}(\gamma)$ and let \[ \gamma^*\approx 0.30421 \] denote the 10th-degree algebraic number listed in the row of $\gamma_\mathrm{sup,5}$ in Theorem \ref{thm2.2}. 
Then for any $\gamma\in(0,1)$ we can prove that \begin{itemize} \item $0<\varrho_1(\gamma)<1$ and $\varrho_{2,3,4,5}(\gamma)\in\mathbb{C}\setminus\mathbb{R}$; \item $|\varrho_{2,3,4,5}(\gamma)|<\varrho_1(\gamma)$ for $0<\gamma<\gamma^*$; \item $|\varrho_{4,5}(\gamma^*)|<|\varrho_{2,3}(\gamma^*)|=\varrho_1(\gamma^*)$ for $\gamma=\gamma^*$; \item $|\varrho_{1,4,5}(\gamma)|<|\varrho_2(\gamma)|=|\varrho_3(\gamma)|$ for $\gamma^*<\gamma<1$. \end{itemize} \begin{figure} \caption{The functions $\gamma\mapsto\mu_n(\gamma)$ for $1\le n\le 21$ corresponding to the BDF5 method are shown (the curves with indices $n\in\{1,2,5,6,10,11\}$ are not visible in this plot window). The red dot is placed at $\gamma=\gamma_\mathrm{sup,5}\approx 0.30421$.} \label{BDF5fig} \end{figure} For $\gamma=\gamma^*$ and $n\ge 0$ we have \[ \mu_n(\gamma^*)=\varrho_{1}^n\left[ c_1 +c_2 \left(\frac{\varrho_{2}}{\varrho_1}\right)^n+\overline{c_2} \left(\frac{\overline{\varrho_{2}}}{\varrho_1}\right)^n+c_4 \left(\frac{\varrho_{4}}{\varrho_1}\right)^n+\overline{c_4} \left(\frac{\overline{\varrho_{4}}}{\varrho_1}\right)^n\right], \] where, for brevity, now we omit the explicit form of the algebraic numbers $c_j$ and $\varrho_j$, and only give their approximate values: \begin{itemize} \item $\varrho_1\approx 0.737893$,\quad $\varrho_2\approx 0.195442 + 0.711539 i$,\quad $\varrho_4\approx 0.401777 + 0.175943 i$, \item $c_1\approx 0.994377$,\quad $c_2\approx -0.117157 - 0.126015 i$,\quad $c_4\approx -0.186798 - 0.0841337 i$. \end{itemize} From this we get that \[ \left| c_2 \left(\frac{\varrho_{2}}{\varrho_1}\right)^n+\overline{c_2} \left(\frac{\overline{\varrho_{2}}}{\varrho_1}\right)^n+c_4 \left(\frac{\varrho_{4}}{\varrho_1}\right)^n+\overline{c_4} \left(\frac{\overline{\varrho_{4}}}{\varrho_1}\right)^n\right|\le 2|c_2|\cdot 1^n+ 2|c_4|\cdot 1^n< \frac{8}{10}<c_1, \] so $\mu_n(\gamma^*)>0$ ($n\in\mathbb{N}$), and hence $\gamma_\mathrm{sup,5}\ge \gamma^*$ by \eqref{sect32intro}. The proof of the converse inequality, $\gamma_\mathrm{sup,5}\le \gamma^*$, is again based on Lemma \ref{upperestlemma2}, and is completely analogous to the one presented in Section \ref{BDF4subsubsection}. \subsubsection{The BDF6 method} The recursion \eqref{genmunrec} is \[ 3 (20 \gamma +49) \mu_n(\gamma)-360 \mu_{n-1}(\gamma) +450\mu_{n-2}(\gamma)-400\mu_{n-3}(\gamma)+ 225\mu_{n-4}(\gamma)- \] \[ 72\mu_{n-5}(\gamma)+10\mu_{n-6}(\gamma)=0 \quad\quad (n\ge 6) \] with \[ \mu_0(\gamma)=\frac{20}{20 \gamma +49},\quad \mu_1(\gamma)=\frac{2400}{(20 \gamma +49)^2}, \] \[ \mu_2(\gamma)=\frac{3000 (-20 \gamma+47 )}{(20 \gamma +49)^3},\quad \mu_3(\gamma)=\frac{8000 \left(400 \gamma ^2-3440 \gamma +2131\right)}{3 (20 \gamma +49)^4}, \] \[ \mu_4(\gamma)=\frac{500 \left(-24000 \gamma ^3+695600 \gamma ^2-1343380 \gamma +474833\right)}{(20\gamma +49)^5}, \] \[ \mu_5(\gamma)=\frac{160 \left(480000 \gamma ^4-53296000 \gamma ^3+283987200 \gamma ^2-212499240 \gamma +84071653\right)}{(20 \gamma +49)^6}. \] Let us consider any $0\le \gamma<37/60\approx 0.6167$. Then one checks by using the discriminant that the 6 roots of ${\mathcal{P}}_6(\cdot,\gamma)=0$ are distinct. \begin{figure}\label{BDF6fig} \end{figure} Let \[ \gamma^*\approx 0.13136 \] denote the 18th-degree algebraic number listed in the row of $\gamma_\mathrm{sup,6}$ in Theorem \ref{thm2.2}. This constant has been obtained after some non-trivial computation and simplification. 
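For orientation, the approximate location of $\gamma^*$ can also be recovered by a simple, non-rigorous numerical experiment: compute the roots of ${\mathcal{P}}_6(\cdot,\gamma)$ for varying $\gamma$ and bisect for the value at which the positive real root stops being dominant. A Python sketch of this experiment is given below; the exact algebraic value in Theorem \ref{thm2.2} is of course not obtained this way.
\begin{verbatim}
# Non-rigorous sketch: locate the gamma where the positive real root of
#   P_6(r, gamma) = 3(20 gamma + 49) r^6 - 360 r^5 + 450 r^4 - 400 r^3
#                   + 225 r^2 - 72 r + 10
# loses its dominance (expected near 0.13136).
import numpy as np

def dominance_gap(gamma):
    coeffs = [3 * (20 * gamma + 49), -360, 450, -400, 225, -72, 10]
    roots = np.roots(coeffs)
    real = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    cplx = max(abs(r) for r in roots if abs(r.imag) >= 1e-9)
    return real - cplx       # positive iff the positive real root dominates

lo, hi = 0.05, 0.3           # dominance_gap(lo) > 0 > dominance_gap(hi)
for _ in range(60):          # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dominance_gap(mid) > 0 else (lo, mid)
print(round(0.5 * (lo + hi), 6))   # ~ 0.131359
\end{verbatim}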
The roots $\varrho_j(\gamma)$ ($1\le j\le 6$) are distributed as follows: \begin{itemize} \item $0<\varrho_2(\gamma)<\varrho_1(\gamma)<1$ and $\varrho_{3,4,5,6}(\gamma)\in\mathbb{C}\setminus\mathbb{R}$; \item $|\varrho_{2,3,4,5,6}(\gamma)|<\varrho_1(\gamma)$ for $0<\gamma<\gamma^*$; \item $|\varrho_{2,5,6}(\gamma^*)|<|\varrho_{3,4}(\gamma^*)|=\varrho_1(\gamma^*)$ for $\gamma=\gamma^*$; \item $|\varrho_{1,2,5,6}(\gamma)|<|\varrho_3(\gamma)|=|\varrho_4(\gamma)|$ for $\gamma^*<\gamma<37/60$. \end{itemize} For $\gamma=\gamma^*$ and $n\ge 0$, one has the representation \[ \mu_n(\gamma^*)=\varrho_{1}^n\left[ c_1 +c_2 \left(\frac{\varrho_{2}}{\varrho_1}\right)^n +c_3 \left(\frac{\varrho_{3}}{\varrho_1}\right)^n+\overline{c_3} \left(\frac{\overline{\varrho_{3}}}{\varrho_1}\right)^n+c_5 \left(\frac{\varrho_{5}}{\varrho_1}\right)^n+\overline{c_5} \left(\frac{\overline{\varrho_{5}}}{\varrho_1}\right)^n\right], \] where the algebraic numbers $c_j$, $\varrho_j$ have the approximate values \begin{itemize} \item $\varrho_1\approx 0.87690236$,\quad $\varrho_2\approx 0.41284041$, \item $\varrho_3\approx 0.13673253 + 0.86617664 i$,\quad $\varrho_5\approx 0.38057439 + 0.29512217 i$, \item $c_1\approx 1.0000077$,\quad $c_2\approx -0.13742979$, \item $c_3\approx -0.11295491 - 0.10160183 i$,\quad $c_5\approx -0.124637633 - 0.050848744 i$, \end{itemize} see Figure \ref{BDF6fig}. \begin{figure} \caption{The sequence $\mu_n(\gamma^*)$ corresponding to the BDF6 method is depicted (using linear interpolation).} \label{BDF6mun} \end{figure} For any $n\ge 0$, the estimate \[ \left|c_2 \left(\frac{\varrho_{2}}{\varrho_1}\right)^n +c_3 \left(\frac{\varrho_{3}}{\varrho_1}\right)^n+\overline{c_3} \left(\frac{\overline{\varrho_{3}}}{\varrho_1}\right)^n+c_5 \left(\frac{\varrho_{5}}{\varrho_1}\right)^n+\overline{c_5} \left(\frac{\overline{\varrho_{5}}}{\varrho_1}\right)^n\right|\le \] \[ |c_2|+2|c_3|+2|c_5|<\frac{8}{10}<c_1 \] yields $\mu_n(\gamma^*)>0$, see Figure \ref{BDF6mun}. This proves that $\gamma_\mathrm{sup,6}\ge \gamma^*$ by \eqref{sect32intro}. As before, a final application of Lemma \ref{upperestlemma2} shows that $\gamma_\mathrm{sup,6}\le \gamma^*$, completing the proof. \subsection{Proofs for the Adams--Bashforth methods}\label{ABmethodssection} The coefficients for the Adams--Bashforth methods are listed, for example, in \cite{hairerwanner1}. \subsubsection{The AB1 method} It is easily seen that the recursion \eqref{mundef} now has the form \[ \mu_n(\gamma)=(1-\gamma) \mu_{n-1}(\gamma)\quad\quad (n\ge 2) \] with $\mu_1(\gamma)=1$, so any $\gamma>1$ violates the non-negativity of $\mu_n(\gamma)$ in \eqref{necsufcond2}. Hence $\gamma_\mathrm{sup,1}\le 1$. But $\mu_n(1)\ge 0$ for all $n\in\mathbb{N}$, and it is known \cite{hairerwanner} that $-1\in \mathrm{int}({\mathcal{S}})$, so \eqref{17mod} finishes the proof. \subsubsection{The AB2 method} For this method, the recursion \eqref{mundef} is \[ \mu_n(\gamma)-\left(1-\frac{3 \gamma }{2}\right) \mu_{n-1}(\gamma)-\frac{\gamma}{2}\mu_{n-2}(\gamma)=0 \quad\quad (n\ge 3) \] with $\mu_1(\gamma)=3/2$ and $\mu_2(\gamma)=-9\gamma/4+1$. Lemma \ref{upperestlemma1} with $n=2$ and $\gamma^*=4/9$ shows that $\gamma_\mathrm{sup,2}\le 4/9$. On the other hand, \[ \mu_n(4/9)=3^{1-n} \left(2^n-4 (-1)^n\right)/4\ge 0\quad\quad (n\ge 1), \] and $-4/9\in \mathrm{int}({\mathcal{S}})$ (see \cite{hairerwanner}), so the proof is complete due to \eqref{17mod}. 
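As a quick illustrative cross-check (exact rational arithmetic, not needed for the proof), the closed form $\mu_n(4/9)=3^{1-n}\left(2^n-4(-1)^n\right)/4$ can be tested against the recursion for the first few hundred indices:
\begin{verbatim}
# AB2 at gamma = 4/9: recursion versus closed form, and non-negativity.
from fractions import Fraction as F

g = F(4, 9)
mu = {1: F(3, 2), 2: 1 - F(9, 4) * g}
for n in range(3, 201):
    mu[n] = (1 - F(3, 2) * g) * mu[n - 1] + (g / 2) * mu[n - 2]

closed = lambda n: F(2**n - 4 * (-1)**n, 4 * 3**(n - 1))
print(all(mu[n] == closed(n) for n in range(1, 201)))   # True
print(all(mu[n] >= 0 for n in range(1, 201)))           # True
\end{verbatim}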
\subsubsection{The AB3 method} For this method, the recursion \eqref{mundef} takes the form \[ \mu_n(\gamma)-\left(1-\frac{23 \gamma}{12}\right) \mu_{n-1}(\gamma) -\frac{4}{3} \gamma \mu_{n-2}(\gamma)+\frac{5}{12} \gamma \mu_{n-3}(\gamma)=0\quad\quad (n\ge 4) \] with \[ \mu_1(\gamma)=\frac{23}{12},\quad \mu_2(\gamma)=-\frac{529 \gamma }{144}+\frac{7}{12},\quad \mu_3(\gamma)=\frac{12167 \gamma ^2}{1728}-\frac{161 \gamma }{72}+1. \] Lemma \ref{upperestlemma1} with $n=2$ and $\gamma^*={84}/{529}$ shows that $\gamma_\mathrm{sup,3}\le {84}/{529}$. We also know \cite{hairerwanner} that $-{84}/{529}\in \mathrm{int}({\mathcal{S}})$, so by \eqref{17mod} it is enough to verify that $\mu_n({84}/{529})\ge 0$ for all $n\ge 1$. For $n\ge 1$ we have \[\mu_n({84}/{529})=\sum_{j=1}^3 c_j \varrho_{j}^n=\varrho_{3}^n \left(c_3+ c_1\left(\frac{\varrho_1}{\varrho_3}\right)^n+c_2\left(\frac{\varrho_2}{\varrho_3}\right)^n\right),\] where the numbers $\varrho_j$ ($\varrho_1<0<\varrho_2<\varrho_3$, $|\varrho_{1}|<\varrho_3/2$, $\varrho_{2}<\varrho_3/4$) and $c_j$ ($-5<c_1<-3<c_2<0<1<c_3$) are the three roots of the polynomials $\{529, -368, -112, 35\}$ and \[ \{30733417008, 193547352348, 162435667337, -391554926405\}, \] respectively. Since \[ \left|c_1\left(\frac{\varrho_1}{\varrho_3}\right)^n+c_2\left(\frac{\varrho_2}{\varrho_3}\right)^n\right| \le 5 \left|\frac{\varrho_1}{\varrho_3}\right|^n+3 \left|\frac{\varrho_2}{\varrho_3}\right|^n <\frac{5}{2^n}+\frac{3}{4^n}<1<c_3 \] for $n\ge 3$, and $\mu_n({84}/{529})\ge 0$ for $1\le n\le 2$, the proof is complete. \subsubsection{The AB4 method} The starting terms of the recursion \eqref{mundef} satisfy \[ \mu_1(\gamma)=\frac{55}{24},\quad \mu_2(\gamma)=-\frac{3025 \gamma }{576}-\frac{1}{6}, \] so the non-negativity condition in \eqref{necsufcond2} for $n=2$ is violated for any $\gamma>0$, hence $\nexists \mathrm{\ SCB}>0$. \section{Conclusions}\label{conclusionssection} The step-size coefficient for boundedness (SCB) of a linear multistep method (LMM) is a generalization of the strong-stability-preserving (SSP) coefficient of the LMM. The SCB appears in conditions that ensure monotonicity or boundedness properties of the LMM, and a method is more efficient if it possesses a larger SCB. In \cite{hundsdorferspijkermozartova2012,spijker2013}, a necessary and sufficient condition has been given for a number $\gamma>0$ to be a SCB of a LMM. This condition involves checking the non-negativity of an auxiliary sequence $\mu_n(\gamma)$ that satisfies a linear recurrence relation in $n\in\mathbb{N}$. For fixed $n$, the function $\mu_n(\cdot)$ is a rational function. The main goal of the present work is to determine the maximum SCB, $\gamma_\mathrm{sup}$ for a given linear multistep method. For each $k$-step BDF method ($2\le k\le 6$) and each $k$-step Adams--Bashforth method ($1\le k\le 3$), we determine the exact value of $\gamma_\mathrm{sup}$ by finding the largest $\gamma>0$ that satisfies $\mu_n(\gamma)\ge 0$ for all non-negative $n$. 
We have identified two types of conditions that characterize $\gamma_\mathrm{sup}$ in these multistep families:\\ \indent ($i$) a positive real dominant root of the characteristic polynomial corresponding to the recursion for $\mu_n(\gamma)$ loses its dominant property at $\gamma=\gamma_\mathrm{sup}$, or\\ \indent ($ii$) there is an index $n_0\in\mathbb{N}$ such that $\gamma_\mathrm{sup}$ is a simple root of the function $\mu_{n_0}(\cdot)$.\\ It turns out that $\gamma_\mathrm{sup}$ is determined \begin{itemize} \item by condition ($i$) for the BDF methods with $k\in\{2, 4, 5, 6\}$ steps; \item by condition ($ii$) with $n_0=6$ for the $3$-step BDF method; \item by condition ($ii$) with $n_0=2$ for the Adams--Bashforth methods with $k\in\{1, 2, 3\}$ steps. \end{itemize} \noindent \textbf{Acknowledgements.} The author is indebted to the anonymous referees of the manuscript for their suggestions that helped improve the presentation of the material. \end{document}
\begin{document} \title[Hilbert transform and maximal operator]{$L^p$ estimates for Hilbert transform and maximal operator associated to variable polynomial} \author[R. Wan]{ Renhui Wan} \address{School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, People's Republic of China} \email{[email protected]} \vskip .2in \begin{abstract} We investigate the Hilbert transform and the maximal operator along a class of variable non-flat polynomial curves $(P(t),u(x)t)$ with measurable $u(x)$, and prove uniform $L^p$ estimates for $1<p<\infty$. In particular, via a change of variables, these uniform estimates are equivalent to the corresponding ones for the curves $(P(v(x)t),t)$ with measurable $v(x)$. To obtain the desired bound, we make full use of time-frequency techniques and establish a crucial $\epsilon$-improving estimate for certain special separated sets. \end{abstract} \maketitle \section{Introduction} \label{s1} In the present paper, we investigate the Hilbert transform and the maximal operator along variable non-flat polynomial curves in $\mathbb{R}^2$ described by \begin{equation}\label{curve} \Gamma_x(t):=(P(t),u(x)t),\ \ {\rm where}\ \ P(t)=\sum_{i=2}^Na_it^i,\ \{a_i\}_{i=2}^N\subset \mathbb{R}, \end{equation} where the function $u(x):\mathbb{R}\rightarrow \mathbb{R}$ is measurable, and obtain uniform $L^p$ estimates for $1<p<\infty$. More precisely, we study the Hilbert transform along $\Gamma_x(t)$ defined by \begin{equation}\label{hhh} \mathcal{H}^\Gamma f(x,y):=p.v.\int_\mathbb{R} f((x,y)-\Gamma_x(t))\frac{dt}{t} \end{equation} and the maximal operator along $\Gamma_x(t)$ given by \begin{equation}\label{mmm} M^\Gamma f(x,y):=\sup_{\epsilon>0}\frac{1}{2\epsilon} \int_{-\epsilon}^\epsilon |f((x,y)-\Gamma_x(t))|dt. \end{equation} \vskip.1in We state our main result as follows: \begin{thm}\label{t1} Let $u(x)$ and $\Gamma_x(t)$ be given by (\ref{curve}). Then $\mathcal{H}^\Gamma $ and $M^\Gamma $ defined as in (\ref{hhh}) and (\ref{mmm}) can be extended to two bounded operators from $L^p(\mathbb{R}^2)$ to $L^p(\mathbb{R}^2)$ for $1<p<\infty$. In addition, the bounds are uniform in the sense that they depend only on the degree $N$. \end{thm} \begin{rem}\label{r1} The curve $\Gamma_x(t)$ in (\ref{curve}) can be generalized to $(\sum_{i=1}^N a_i [t]^{\beta_i}$, $u(x)[t]^\alpha)$, where $\alpha>0$, $a_i\in\mathbb{R},\ \beta_i\neq \alpha$ and $\beta_i>0.$ Here $[t]^\sigma=|t|^\sigma$ or $\mathrm{sgn}(t)|t|^\sigma$. In order not to clutter the presentation, we do not pursue this direction here. Furthermore, the arguments in the proof of Theorem \ref{t1} also work for the Carleson type operator $p.v.\int f(x-P(t))e^{iu(x) t}\frac{dt}{t}$; see Section \ref{me} for a discussion. \end{rem} \subsection{Historical background} Replacing $\Gamma_x (t)$ in (\ref{curve}) by $(t,u(x,y)t)$ and the domain of integration in (\ref{hhh}) by $[-\epsilon_0,\epsilon_0]$, we reduce the operators $M^\Gamma$ and $\mathcal{H}^\Gamma$ to the $\epsilon_0$-truncated operators $M^\Gamma_{\epsilon_0}$ and $\mathcal{H}^\Gamma_{\epsilon_0}$, which are the objects investigated in the so-called Zygmund conjecture and Stein conjecture. For these conjectures, the zero-curvature setting, together with the fact that the function $u(x,y)$ depends on both variables and is assumed to be merely Lipschitz, makes them so difficult that both remain open so far. Nonetheless, various variants of these conjectures have been studied. Here we list some partial progress connected with the present work as follows.
\vskip.1in \underline{The zero-curvature case}\ \ \ Bourgain \cite{B89} proved that $M^\Gamma_{\epsilon_0}$ with any real analytic $u=u(x,y)$ is $L^2$ bounded. The analogous result for $\mathcal{H}^\Gamma_{\epsilon_0}$ was proved later by Stein and Street \cite{SS12}, whose objects are all polynomials with analytic coefficients. For smooth $u=u(x,y)$ satisfying certain curvature condition, Christ et al. \cite{CNSW99} demonstrated that both $\mathcal{H}^\Gamma$ and $M^\Gamma$ are $L^p$ bounded. Later, Lacey and Li \cite{LL10} obtained by a sophisticated time-frequency approach exploited by them in \cite{LL06} that $\mathcal{H}^\Gamma_{\epsilon_0}$ with $u\in \mathcal{C}^{1+\epsilon}$ is bounded on $L^2$. For any measurable $u=u(x)$, Bateman \cite{B13} and Bateman-Thiele \cite{BT} proved that $\mathcal{H}^\Gamma $ is $L^p$ bounded for $p>3/2$ and $\mathcal{H}^\Gamma P_k^{(2)}$ is bounded on $L^p$ for $p>1$, where $P_k^{(2)}$ denotes the Littlewood-Paley projection in the $y$-variable and the commutation relation $\mathcal{H}^\Gamma P_k^{(2)}=P_k^{(2)}\mathcal{H}^\Gamma $ is crucial in their proofs. Recently, partially motivated by Bateman'work \cite{B13}, via establishing a crucial $L^p$ estimate of certain commutator, Guo \cite{G17} proved $\mathcal{H}^\Gamma$ is $L^p$ bounded for $p>3/2$ under the assumption that $u=u(h(x,y))$ with sufficiently small $\|\nabla h -(1,0)\|_\infty$. If the measurable $u=u(x,y)$ does not own any regularity, $\mathcal{H}^\Gamma$ along the curve $(t,u(x,y)t)$ is not bounded on $L^p$, see \cite{K07,LMP19}. \vskip.1in \underline{The non-zero curvature case}\ \ \ This problem is not only a non-trivial generalization of the zero curvature but also closely related to the Carleson maximal operators. Here and in what follows $P_x(t)$ ($P_y(t),P_{x,y}(t)$) denotes the polynomial with the coefficients depending on the $x$-variable ($y$-variable, both $x$-variable and $y$-variable). For $\Gamma_x(t)=(t,u(x,y)[t]^\alpha)$ $(0<\alpha\neq 1)$ with measurable function $u(x,y)$, Marletta and Ricci \cite{MR98} obtained $M^\Gamma$ is $L^p$ bounded for $p>2$. For $u(x,y)\in {\rm Lip}(\mathbb{R}^2)$, Guo et al. \cite{GHLJ} proved $M^\Gamma_{\epsilon_0}$ with certain $\epsilon_0=\epsilon_0(\|u\|_{\rm Lip})$ is bounded on $L^p$ for $p\in(1,2]$. Furthermore, Guo et al. proved that $\mathcal{H}^\Gamma P_k^{(2)}$ with measurable $u=u(x,y)$ is $L^p$ bounded for $p\in (2,\infty)$. Under the assumption that $\|u\|_{\rm Lip}$ is small enough, Di Plinio et al. \cite{DGTZ} obtained $\mathcal{H}^\Gamma_{1}$ is bounded on $L^p$ for $p\in (1,\infty)$. Very recently, Liu-Song-Yu \cite{LSY21} and Liu-Yu \cite{LY22} used local smoothing estimates for Fourier integral operators to extend \cite{MR98,GHLJ,DGTZ} to a larger class of curves $(t,u(x,y)\gamma(t))$, where $\gamma(t)$ is even or odd. However, it seems difficult to use the ideas in the above works to accomplish the uniform estimate for the variable curve $(t,P_{x}(t))$ with $P_{x}'(0)=0$ and measurable coefficients, where the uniform estimate is in the sense of that its bound depends only on the degree $N$. For this curve, Lie \cite{L19} recently used a unified approach to obtain the uniform $L^p$-boundedness $(1<p<\infty)$ of $M^\Gamma$, $\mathcal{H}^\Gamma$ and related operators by the LGC-methodology (see page 9 in that paper for the details). In fact, some more general cases are proved in that paper. 
For the special case $\Gamma_x(t)=(t,a_2(x)t^2+a_3(x)t^3)$, by making full use of Littlewood-Paley theory and the commutation relation $\mathcal{H}^\Gamma P_k^{(2)}=P_k^{(2)}\mathcal{H}^\Gamma$, Wan \cite{Wan19} proved $\mathcal{H}^\Gamma$ is $L^p$ bounded for $p\in(1,\infty)$. For $\Gamma_x(t)=(t,u(x,y)[t]^b)$ with $b>1$, if the measurable function $u=u(x,y)$ has no regularity, Guo et al. \cite{GRSP,GRSP2} showed that $\mathcal{H}^\Gamma$ is not bounded on $L^p$ for any $p\in(1,\infty)$, which is in contrast with the behavior of the operator $M^\Gamma$; we refer to \cite{MR98}. \vskip.1in \subsection{Motivations} The uniform $L^p$ estimates of linear and multilinear singular integral operators and maximal operators along various variable polynomial curves have been studied; however, for $\mathcal{H}^\Gamma$ and $M^\Gamma$ along the curve $(t,P_y(t))$ with $P_{y}'(0)=0$ or $(P_x(t),t)$ with $P_{x}'(0)=0$, it is not known so far whether their uniform estimates hold without imposing any assumptions on the coefficients. This paper gives partial progress on this question. More precisely, we investigate the uniform $L^p$ estimates of $\mathcal{H}^\Gamma$ and $M^\Gamma$ along a new class of variable curves $(P(t),u(x)t)$ with $P'(0)=0$. Indeed, a change of variables shows that each such curve can be transformed into one of the form $(P(v(x)t),t)$ with a certain measurable $v(x)$, which is a special case of $(P_x(t),t)$ and cannot be treated by directly using the arguments in the previous works. In addition, we only consider polynomials whose coefficients do not depend on the variable(s), since $\mathcal{H}^\Gamma$ may not be bounded otherwise, see \cite{GHLJ}. \subsection{Outline of the proof and comments} Since the present work belongs to the non-zero curvature case, some arguments are related to those of the previous works such as \cite{GHLJ,L19,LX16}. By a decomposition technique, we first reduce the goal to the estimates related to $N-1$ ``dominating sets". In every ``dominating set", we reduce the objective estimate to four segments via the partition of unity given by (\ref{fenjie}) in order to quantify the phase function more conveniently. We list the approaches treating each segment as follows: \vskip.1in 1 The first segment is the low frequency case, which is estimated by Taylor's expansion, while the second segment is the off-diagonal frequency case, which we deal with by exploiting integration by parts as well as Taylor's expansion. We remark that the role of Taylor's expansion in the second segment is to exploit the large lower bound of the derivative of the phase function. In addition, to estimate the second segment, we also need the vector-valued shifted maximal estimate and the estimates of some variants of vector-valued singular integrals (which are also used in the fourth segment). For the proof of the third segment, which belongs to the off-diagonal frequency case as well, we only give a sketch since it can be handled by combining the previous two approaches treating the first and the second segments. \vskip.1in 2 The estimate of the fourth segment is obtained by establishing a new $\epsilon$-improving estimate (see Lemma \ref{l31}) and developing some important arguments in \cite{L19}. Indeed, the proof in the current paper is more involved than those in the previous works. The main novelty is to achieve the exponential decay for the $L^2$ estimate of $\mathcal{H}_{\Delta_4,m}f$ (see section \ref{hh}), which is the key part of this paper.
The non-degenerate phase function makes us apply the method of stationary phase to get an asymptotics. However, because of the phase function depending on $x$ variable, we need several new ideas in the estimate of this asymptotics. More precisely, these ideas are included in the following steps: \begin{itemize} \item discretizing the phase function and reducing the problem to the analysis of the integrand in certain new oscillatory integral; \vskip.1in \item expressing this integrand by regarding it as a periodic function and making full use of $TT^\star$ method, and then reducing the goal to the estimate of certain integral expected to own an exponential decay; \vskip.1in \item establishing a crucial $\epsilon$-improving estimate given by Lemma \ref{l31} for certain ``bad" set and then verifying the decay estimate in the former step. \end{itemize} \vskip.1in {\bf Organization of the paper} In Section \ref{s2}, we give the partition of unity and the reduction of Theorem \ref{t1}. The third section lists two auxiliary consequences which are applied to the tricky estimate in the sixth section, and the followed section gives the proof of Theorem \ref{t2} for $\sigma=1$. In the fifth section, we prove Theorem \ref{t2} for $\sigma=2$ and $\sigma=3$. In the sixth section, Theorem \ref{t2} for $\sigma=4$ is proved by the auxiliary consequences in the third section. In the 7th section, we give the proof of Lemma \ref{l6.1} which is used in the sixth section. At last, we recall some useful results including the shifted maximal operator in the Appendix. \vskip.1in {\bf Notations}\ \ We use $e(x)=e^{2\pi i x}$. The Fourier transform $\widehat{f}(\xi)$ of $f(x)$ is defined by $\int e(-\xi x) f(x) dx$, while $g^{\vee}$ is the Fourier inverse transform of $g$ defined by $g^{\vee}(x)=\int g(\xi) e(\xi x)d\xi$. For convenience, hereinafter, we omit $2\pi$ in the notation of $e(x)$. $\mathcal{F}^y$ is the Fourier transform in the $y-$variable. We use $x\lesssim y$ to stand for there exists a uniform constant $C$ such that $x\le Cy$. $x\gtrsim y$ means $y\lesssim x$, and the notation $x\thicksim y$ signifies that $x\gtrsim y$ and $x\lesssim y$. The absolute or uniform constant in what follows may be hidden in ``$\lesssim$". We use $C_{\gamma},C(\gamma)$ to represent the constants depending on $\gamma$, and the constants hidden in $\lesssim_N$ depend only on $N$. We use $\|\cdot\|_p$ to stand for $\|\cdot\|_{L^p}$. \section{The reduction of Theorem \ref{t1}} \label{s2} Let $\theta_+(t)$ be supported in $(\frac{1}{9},9)$ such that $\sum_{j\in\mathbb{Z}}\theta_+(2^jt)=1$ for all $t>0$. Let $\theta_-(t)=\theta_+(-t)$, $\theta(t)=\theta_+(t)+\theta_-(t)$, then $\sum_{j\in\mathbb{Z}} \theta(2^jt)=1$ for all $t\neq0$. Let $\rho(t)=\frac{\theta(t)}{t}$, denote $\rho_j(t):=2^j\rho(2^jt)$, we have for all $t\neq 0$, \begin{equation}\label{idd} \frac{1}{t}=\sum_{j\in\mathbb{Z}}2^j\frac{\theta(2^jt)}{2^jt} =\sum_{j\in\mathbb{Z}}\rho_j(t). \end{equation} \subsection{First reduction of Theorem \ref{t1}} We only give a detailed proof for $\mathcal{H}^\Gamma f$ since $M^\Gamma f$ can be similarly treated. Setting $\mathcal{H}_jf(x,y)$ as $$\mathcal{H}_jf(x,y):=\int f(x-P(2^{-j}t),y-u(x)2^{-j}t)\rho(t)dt,$$ satisfying $\|\mathcal{H}_jf\|_p\lesssim \|f\|_p$, we obtain via (\ref{idd}) that $ \mathcal{H}^\Gamma f(x,y) =\ \sum_{j\in\mathbb{Z}}\mathcal{H}_jf(x,y). $ Next, we seek the ``dominating monomial" of the polynomial $P(2^{-j}t)$. \vskip.1in Now, we give a further decomposition of $\mathcal{H}^\Gamma f$. 
For $l=2,\cdot\cdot\cdot,N$, we denote \begin{equation}\label{sing} S_l=\big\{j\in\mathbb{Z}:\ |j|>2^N,\ |a_l|2^{-jl}>\digamma_N|a_i|2^{-ji}\ {\rm for\ all\ }\ i\neq l \ {\rm and }\ 2\le i\le N\big\}, \end{equation} where $\digamma_N$ is a large enough constant depending only on $N$. It is easy to see $S_l\cap S_{l'}=\varnothing$ for $l\neq l'$. Denote $S_o=\big(\bigcup_{l=2}^N S_l\big)^c,$ then we have a decomposition of $\mathbb{Z}$, that is $\mathbb{Z}=S_o\cup\big(\bigcup_{l=2}^N S_l\big)$. In addition, for any $j\in S_o$, we have $|j|\le 2^N$ or there exists $(\mathfrak{l},\mathfrak{l}')\in \{2,\cdot\cdot\cdot,N\}^2$ satisfying $\mathfrak{l}\neq \mathfrak{l}'$ such that $|a_\mathfrak{l}|2^{-j\mathfrak{l}}\le C_1(N)|a_{\mathfrak{l}'}|2^{-j\mathfrak{l}'}\le C_2(N)|a_\mathfrak{l}|2^{-j\mathfrak{l}}$ for certain constants $C_2(N)\ge C_1(N)>0$, which immediately yields $\sharp S_o\lesssim_N1$ and $\|\sum_{j\in S_o}\mathcal{H}_jf \|_p\lesssim_N\sup_{j\in S_o}\|\mathcal{H}_jf\|_p\lesssim_N \|f\|_p.$ Thus it suffices to show that $\|\sum_{j\in S_l}\mathcal{H}_jf \|_p\lesssim_N\|f\|_p$ holds for each $l\in \{2,\cdot,\cdot,\cdot,N\}$. \subsection{Second reduction of Theorem \ref{t1}} Via rescaling arguments, we can assume \begin{equation}\label{p} P(t)=t^l+\sum_{2=i\neq l}^N a_i t^i. \end{equation} In order to avoid the negative effect of the coefficient of $t^l$, this process is necessary. Fourier inverse transform gives $$\mathcal{H}_jf(x,y)=\int_{\xi,\eta}\widehat{f}(\xi,\eta) e(\xi x+\eta y) M_j(\xi,\eta)d\xi d\eta,$$ where $$M_j(\xi,\eta):=\int e(\phi_{j,\xi,\eta,x}(t))\rho(t)dt,\ \phi_{j,\xi,\eta,x}(t):=\xi P(2^{-j}t)+\eta u(x)2^{-j}t.$$ In the following, we will decompose the multiplier $M_j(\xi,\eta)$ into four main parts. Before we go ahead, we introduce the partition of unity as follows: \begin{equation}\label{fenjie} \sum_{(m,n)\in \mathbb{Z}^2}\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})=1, \end{equation} where $\widehat{\Phi}(\cdot)=\widehat{\Phi}_+({\cdot}) +\widehat{\Phi}_-(\cdot)$, and $\widehat{\Phi}_\pm$ supported in $\pm[1/2,2]$ are defined as $\theta_\pm$. (\ref{fenjie}) is based on the phase function $\phi_{j,\xi,\eta,x}(t)$ in $M_j(\xi,\eta)$. To discuss the different behaviors of $\phi_{j,\xi,\eta,x}(t)$, we also need a decomposition of $\mathbb{Z}^2$: $\mathbb{Z}^2=\cup_{i=1}^4\Delta_i$ given by \begin{equation}\label{dde} \begin{aligned} \Delta_1:=&\ \{(m,n)\in\mathbb{Z}^2:\ \max\{m,n\}\le 0\},\\ \Delta_2:=&\ \{(m,n)\in\mathbb{Z}^2: \ \max\{m,n\}>0,\ |m-n|> 100l,\ \min\{m,n\}>0 \}\\ \Delta_3:=&\ \{(m,n)\in\mathbb{Z}^2:\ \max\{m,n\}>0,\ |m-n|> 100l,\ \min\{m,n\}\le0\},\\ \Delta_4:=&\ \{(m,n)\in\mathbb{Z}^2:\ \max\{m,n\}>0,\ |m-n|\le 100l\}. \end{aligned} \end{equation} We will give an explanation of this process at the end of this section. Then $M_j(\xi,\eta) =\sum_{i=1}^4M_{j,\Delta_i} (\xi,\eta),$ where $$ M_{j,\Delta_i} (\xi,\eta)=\sum_{(m,n)\in \Delta_i}\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})M_j(\xi,\eta), $$ and it suffices to show the following theorem. \begin{thm}\label{t2} For $1\le \sigma\le 4$, define \begin{equation}\label{aim} \mathcal{H}_{\Delta_\sigma}f(x,y)=\int_{\xi,\eta}\widehat{f}(\xi,\eta) e(\xi x+\eta y) \sum_{j\in S_l} M_{j,\Delta_\sigma}(\xi,\eta)d\xi d\eta, \end{equation} then we have $\|\mathcal{H}_{\Delta_\sigma}f\|_p\lesssim_N \|f\|_p$ holds for $1<p<\infty$. 
\end{thm} We end this section with an explanation of (\ref{dde}). The low frequency component $\mathcal{H}_{\Delta_1}f$ is not oscillatory, and can be seen as a more well-behaved integral, see section \ref{ll}. We treat the mixed frequency components $\mathcal{H}_{\Delta_2}f$ and $\mathcal{H}_{\Delta_3}f$ by integration by parts and square function estimate because of the rapid decay stemming from $|m-n|\ge 100l$, see section \ref{lh}. The difference between $\mathcal{H}_{\Delta_2}f$ and $\mathcal{H}_{\Delta_3}f$ is that we shall use Taylor's expansion to $\mathcal{H}_{\Delta_3}f$ before applying integration by parts. At last, the high frequency component $\mathcal{H}_{\Delta_4}f$ needs a more intricate analysis, which depends on a crucial estimate given by Lemma \ref{ll2}, see section \ref{hh}. \section{Auxiliary consequences} \label{AC} The following auxiliary lemmas are important in proving the uniform estimate of $\mathcal{H}_{\Delta_4}$. Let $N$ be a positive integer, $C_0$, $C_1$ and $C_2$ be three uniform constants. Let $\mathcal{F}(t)$ be a polynomial of degree not more than $N$ which is supported in $S:=\{t\in\mathbb{R}:\ C_0^{-1}\le |t |\le C_0\}$ and satisfies $C_1^{-1}\le |\mathcal{F}'(t)|\le C_1$ and $|\mathcal{F}^{''}(t)|\le C_2$ in $S$. Denote the inverse function of $\mathcal{F}$ by $F$. Obviously, \begin{equation}\label{perty} C_0^{-1}\le |F|\le C_0,\ F'(\tau)=\frac{1}{\mathcal{F}'(F(\tau))},\ |F'|\ge C_1^{-1}. \end{equation} \begin{lemma}\label{l31} Let $\mathfrak{S}=\{s_i\}_{i=1}^{2^N}\subset [-\mathfrak{C}_0,\mathfrak{C}_0]$ with certain uniform $\mathfrak{C}_0>0$, and $\mathfrak{A}=\{\alpha_j\}_{j=1}^N\subset\mathbb{R}$ be two strictly increasing sets. Set $d(\mathfrak{S})=\min_{1\le i<j\le 2^N}|s_j-s_i|$, $D_j(\mathfrak{A})=\prod_{1=i\neq j}^N|\alpha_j-\alpha_i|$ if $N\ge2$ and $D_j(\mathfrak{A})=1$ if $N=1$. There are a constant $A_0>0$ and $\{a_j\}_{j=1}^N\subset \mathbb{R}$ such that \begin{equation}\label{coo} |\sum_{j=1}^N a_jF(s)^{\alpha_j}|\le A_0 \ \ {\rm\ for\ any}\ \ s\in \mathfrak{S}, \end{equation} then there exists a positive constant $\tilde{C}$ depending only on $\{\alpha_i\}_{i=1}^N$, $C_1$ and $C_0$ such that \begin{equation}\label{aa1} |a_j|\le \frac{\tilde{C}^{N\big((\alpha_N-\alpha_1)+\min_{1\le j\le N}|\alpha_j|+2\big)}A_0}{\big(d(\mathfrak{S})\big)^{N-1} D_j(\mathfrak{A})} \end{equation} holds for $1\le j\le N$. In particular, $\tilde{C}= 2C_1C_0^{\max_{1\le i\le N}|\alpha_i|+1}$ is enough. \end{lemma} \begin{rem}\label{rp} The function $F$ in this lemma can be relaxed to any smooth function satisfying the first and the third conditions in (\ref{perty}). In addition, we require only (\ref{aa1}) with ``$\min_{1\le j\le N}$" replaced by ``$\max_{1\le j\le N}$" in the following context. \end{rem} \begin{proof} We shall prove (\ref{aa1}) with $\tilde{C}\ge 2C_1C_0^{\max_{1\le i\le N}|\alpha_i|+1}$ by induction over the values of $N$. Obviously, $N=1$ is trivial since $|a_1|\le \frac{A_0}{|F(s)|^{\alpha_1}}\le C_0^{|\alpha_1|} A_0\le \tilde{C} A_0$ holds for any $s\in \mathfrak{S}$. We now assume that (\ref{aa1}) holds for $N=k$, it suffices to show (\ref{aa1}) for $N=k+1$. Multiplying both sides of \begin{equation}\label{k0} |\sum_{j=1}^{k+1}a_jF(s)^{\alpha_j}|\le A_0\ \ {\rm\ for\ any}\ \ s\in \mathfrak{S}, \end{equation} by $|F(s)|^{-\alpha_1}$, we obtain \begin{equation}\label{k11} |a_1+\sum_{j=2}^{k+1}a_jF(s)^{\alpha_j-\alpha_1}|\le A_0|F(s)|^{-\alpha_1}\le C_0^{|\alpha_1|}A_0 \end{equation} holds for any $s\in \mathfrak{S}$. 
Applying the mean value theorem to every interval $(s_{2l-1},s_{2l})$ with $l=1,2,3\cdot\cdot\cdot,2^{N-1}$, we derive a collection of intermediate points $\{\tilde{s}_l\}_{l=1}^{2^N-1}$, which we denote by $\mathfrak{S}_\star$. Thanks to this process, we have obtained a new object like the summation on the left side of (\ref{k11}) without the constant term $a_1$. More precisely, for all $s\in \mathfrak{S}_\star$, we have $$\frac{2C_1C_0^{|\alpha_1|+1}A_0}{d(\mathfrak{S})}\ge C_1C_0 |\sum_{j=2}^{k+1}(\alpha_j-\alpha_1)a_jF(s)^{\alpha_j-\alpha_1-1} F'(s)|\ge |\sum_{j=2}^{k+1}(\alpha_j-\alpha_1)a_j F(s)^{\alpha_j-\alpha_1}|,$$ which, with the assumption (\ref{aa1}) for $N=k$, leads to that $$|a_j|(\alpha_j-\alpha_1) \le\ \frac{\tilde{C}^{k((\alpha_{k+1}-\alpha_2)+\min_{2\le j\le k+1}|\alpha_j-\alpha_1|+2)}}{(d(\mathfrak{S_\star}))^{k-1} D_j(\mathfrak{A_\star})} \frac{2C_1C_0^{|\alpha_1|+1}A_0}{d(\mathfrak{S})}$$ holds for all $2\le j\le k+1$, where $\mathfrak{A_\star}=\{\alpha_j-\alpha_1\}_{j=2}^{k+1}$. Note that $\{(\alpha_j-\alpha_1)a_j\}_{j=2}^{k+1}$ are the new coefficients. Utilizing $d(\mathfrak{S})\le d(\mathfrak{S}_\star)$ which is deduced from the choice of $\{(s_{2l-1},s_{2l})\}_{l=1}^{2^{N-1}}$, $(\alpha_j-\alpha_1)D_j(\mathfrak{A_\star})=D_j(\mathfrak{A}), $ and the choice of $\tilde{C}$, we obtain (\ref{aa1}) with $N$ replaced by $k+1$ holds for $2\le j\le k+1$. Thus it remains to show the desired estimate of $a_1$. As a matter of fact, it can be treated by a similar way. Multiplying both sides of (\ref{k0}) by $|F(s)|^{-\alpha_{k+1}}$ and following the arguments below (\ref{k0}), we can also obtain the estimate of $a_1$, which completes the proof of Lemma \ref{l31}. \end{proof} Next, we introduce a lemma giving an effective control of certain sparse set called ``bad" set in section \ref{s1}. More importantly, it is essential in proving the uniform estimate of $\mathcal{H}_{\Delta_4}f$. Define $$B(x,t):=\sum_{k=1}^Na_k(x)(F(t))^{\alpha_k},$$ where $F$ is defined as the previous statement, $\alpha_1<\cdot\cdot\cdot<\alpha_N$, $\{\alpha_k\}_{k=1}^N \subset \mathbb{R}$, $x\in X\subseteq [-\mathfrak{C}_1,\mathfrak{C}_1]$ for certain absolute constant $\mathfrak{C}_1>0$ and $\{a_k(x)\}_{k=1}^N$ is a series of measurable functions satisfying $|a_k(x)|\lesssim 1$ for all $1\le k\le N$. Denote $\vec{\alpha}:=\{\alpha_1,\cdot\cdot\cdot,\alpha_N\}$. \begin{lemma}\label{p1} Let $m$ and $\epsilon$ be two positive constants satisfying $\epsilon m\ge 1$. Denote $$D_m=\{w\in 2^{-\frac{m}{2}}\mathbb{Z}:\ C_5^{-1}\le |w|\le C_5\}, \ \mathfrak{D}_m=\{l\in 2^{-\frac{m}{2}}\mathbb{Z}:\ |l|\le C_4 \}$$ where $C_4,C_5\ge 1$ are two absolute constants, and $$ \mathfrak{G}_{B,\epsilon}:= \{(l,w)\in\mathfrak{D}_m\times D_m:\ |A_{B,\epsilon}(l,w)|\le 2^{-2\epsilon m}\},\ \ \mathfrak{H}_{B,\epsilon}:=(\mathfrak{D}_m\times D_m) \setminus \mathfrak{G}_{B,\epsilon},$$ where $$A_{B,\epsilon}(l,w):=\{x\in X:\ |B(x,x+l)-w^{-1}| \le 2^{-(1/2-2\epsilon)m}\}.$$ Denote $$\mathfrak{H}_{B,\epsilon}^1:=\{l\in \mathfrak{D}_m:\ \exists\ w\in D_m,\ s.t.\ (l,w)\in \mathfrak{H}_{B,\epsilon}\}.$$ If \begin{equation}\label{lower} \inf_{x\in X}|\frac{\partial}{\partial t} B(x,t)|\gtrsim1, \ m\ge \aleph_1,\ \epsilon\le\aleph_2^{-1} \end{equation} for certain large enough constants $\aleph_1=\aleph_1(N,\vec{\alpha})$ and $\aleph_2=\aleph_2(N)$, then there exists a positive constant $\mu=\mu(N)\in (0,1/2)$ such that \begin{equation}\label{sign} \sharp\mathfrak{H}_{B,\epsilon}^1\lesssim \ 2^{(\frac{1}{2}-\mu)m}. 
\end{equation} \end{lemma} \begin{rem}\label{rr1} $``\aleph_1(N,\vec{\alpha})"$ in (\ref{lower}) can be replaced by $``\aleph_1(N,\alpha_N)"$ when $\alpha_i\in \mathbb{N}$ for $1\le i\le N$. In particular, it can be replaced by $``\aleph_1(N)"$ in our following proof. Besides, The restriction $F=\mathcal{F}^{-1}$ is crucial in this lemma, we do not know that whether this restriction can be relaxed. \end{rem} \begin{proof} To prove the desired estimate, we will use reduction ad absurdum. We assume that \begin{equation}\label{As1} \sharp\mathfrak{H}_{B,\epsilon}^1\ge 2^{(\frac{1}{2}-\mu)m}, \end{equation} where $\frac{1}{2}-\mu\ge 2^{8N+2}\epsilon$. Our strategy is to prove that $\inf_{x\in Y}|\frac{\partial}{\partial t} B(x,t)|$ is smaller than any given positive constant for certain $Y\subset X$. \vskip.1in Denote $\lambda_{m}^{\epsilon,\mu}:=2^{2^{4N}2\epsilon m+\mu m}$. We first construct a sparse set $\mathfrak{H}_{B,\epsilon}^{1,1}\subset\mathfrak{H}_{B,\epsilon}^1$, which satisfies \begin{equation}\label{abc1} \sharp \mathfrak{H}_{B,\epsilon}^{1,1} \thicksim\lambda_{m}^{\epsilon,0},\ \ \tilde{d}:=\inf_{\rho_1,\rho_2\in \mathfrak{H}_{B,\epsilon}^{1,1}}|\rho_1-\rho_2|\gtrsim (\lambda_{m}^{\epsilon,\mu})^{-1}. \end{equation} (Obviously, $\tilde{d}\lesssim1$). Indeed, set $\vartheta_i:=[i,i+1](\lambda_{m}^{\epsilon,\mu})^{-1}$, we have $\sharp(\{\vartheta_i\cap 2^{-\frac{m}{2}}\mathbb{Z}\})\thicksim 2^\frac{m}{2}(\lambda_{m}^{\epsilon,\mu})^{-1},$ which, with (\ref{As1}), leads to that there are at least $\thicksim\lambda_{m}^{\epsilon,0}$ intervals denoted by $\{\vartheta_{j_l}\}_{l=1}^{\mathfrak{M}_1}$ ($\mathfrak{M}_1\gtrsim \lambda_{m}^{\epsilon,0}$, $j_1<j_2<\cdot\cdot\cdot<j_{\mathfrak{M}_1}$) such that $\vartheta_{j_l}\cap \mathfrak{H}_{B,\epsilon}^1\neq\varnothing$ for all $1\le l\le \mathfrak{M}_1$. Denote $\mathfrak{S}_l:=\vartheta_{j_l}\cap \mathfrak{H}_{B,\epsilon}^{1}$, choosing any point in every set $\mathfrak{S}_{l}$ with odd $l$ yields the desired set $\mathfrak{H}_{B,\epsilon}^{1,1}$. \vskip.1in Define $|A_{B,\epsilon}(l,w_l)|=\max_{w:(l,w)\in \mathfrak{H}_{B,\epsilon}}|A_{B,\epsilon}(l,w)|,$ applying Lemma \ref{la1} with $I_l:=A_{B,\epsilon}(l,w_l)$, $n:=2^{2N}$, $K:=2^{2\epsilon m}$ and $\tilde{N}:=2^{2^{4N}2\epsilon m}$, we gain that there exist $\mathfrak{H}_{B,\epsilon}^{1,1,1}\subset \mathfrak{H}_{B,\epsilon}^{1,1}$ with $\sharp \mathfrak{H}_{B,\epsilon}^{1,1,1}=n$ and $X_1:=\cap_{l\in \mathfrak{H}_{B,\epsilon}^{1,1,1}}I_l$ with $|X_1|\ge 2^{-1}K^{-n}$ such that for all $l\in \mathfrak{H}_{B,\epsilon}^{1,1,1}$ and $x\in X_1$, we have \begin{equation}\label{aa2} |B(x,x+l)-w_l^{-1}| \le 2^{-(1/2-2\epsilon)m}. \end{equation} We next construct a sparse subset $X_1^1$ of $X_1$. Let $E$ be the collection of same length (= $2^{-\frac{m}{4}}$) intervals which partition $[-\mathfrak{C}_1,\mathfrak{C}_1]$ and are mutually disjoint. Note that the number of the intervals in $E$ is in $[\mathfrak{C}_1 2^{\frac{m}{4}+1},\mathfrak{C}_1 2^{\frac{m}{4}+1}+1]$. Let $E_1$ be the collection of intervals $J\in E$ such that $|X_1\cap J|\ge (2\mathfrak{C}_1)^{-1}2^{-4}K^{-n}|J|.$ Denote $X_1^1:=\cup_{J\in E_1}(X_1\cap J)$, we claim \begin{equation}\label{1.1} |X_1'|\ge \frac{|X_1|}{2}\ge 2^{-2}K^{-n}. 
\end{equation} Actually, (\ref{1.1}) follows since $$|X_1|\le\sum_{J\in E_1}|X_1\cap J|+\sum_{J\in E\setminus E_1}|X_1\cap J| =|X_1'|+\sum_{J\in E\setminus E_1}|X_1\cap J| $$ and $$\sum_{J\in E\setminus E_1}|X_1\cap J| \le\ (2\mathfrak{C}_1)^{-1} 2^{-4}K^{-n} 2^{-m/4} \sharp\{J:\ J\in E\setminus E_1\}\le\ 2^{-4}K^{-n}\le \frac{|X_1|}{2}.$$ We conclude that for $y\in X_1'$, there exist $y'$ and $J$ such that $y'\in J \in E_1$ and $ |J|\ge|y-y'|\ge (2\mathfrak{C}_1)^{-1}2^{-5}K^{-n}|J|.$ Thus, for any $x\in X_1^1$, applying Lemma \ref{l31} to (\ref{aa2}), we deduce that for all $1\le j\le N$, \begin{equation}\label{aa3} |a_j(x)|\le \frac{\tilde{C}^{N((\alpha_N-\alpha_1)+\max_{1\le j\le N}|\alpha_j|+2)}(1+C_5)}{\tilde{d}^{N-1} \inf_{j}\prod_{1=i\neq j}^N|\alpha_j-\alpha_i|}:=\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}, \end{equation} in which $C_{\vec{\alpha},N}$ (which may vary at each appearance below), and there exists $x'\in X_1'$ satisfying \begin{equation}\label{sparse} (2\mathfrak{C}_1)^{-1}2^{-5}K^{-n}|J|\le|x-x'|\le |J| \end{equation} such that for all $l\in \mathfrak{H}_{B,\epsilon}^{1,1,1}$, $|B(x,x+l)-B(x',x'+l)| \le 2^{1-(1/2-2\epsilon)m},$ namely, $|\sum_{k=1}^Na_k(x)(F(x+l))^{\alpha_k}-\sum_{k=1}^Na_k(x') (F(x'+l))^{\alpha_k}|\le 2^{1-(1/2-2\epsilon)m}.$ Now, taking advantage of Taylor's expansion $$(F(x'+l))^{\alpha_k}-(F(x+l))^{\alpha_k} =\alpha_k (x'-x)F'(x+l)F(x+l)^{\alpha_k-1}+O(|x-x'|^2),$$ where $O(|x-x'|^2)$ means a term $\lesssim_{\alpha_k,C_1,C_2}|x-x'|^2$, and using (\ref{aa3}), we can obtain \begin{equation}\label{set1} \begin{aligned} &\ \ |\sum_{k=1}^N[a_k(x)-a_k(x')](F(x+l))^{\alpha_k}- \sum_{k=1}^N\alpha_k a_k(x') (x-x')F'(x+l)F(x+l)^{\alpha_k-1}|\\ \lesssim&\ 2^{-(1/2-2\epsilon)m}+\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}O(|x-x'|^2). \end{aligned} \end{equation} Thanks to (\ref{aa3}) and the upper bound of $|x-x'|$, we deduce from (\ref{set1}) that \begin{equation}\label{set2} \begin{aligned} |\sum_{k=1}^N[a_k(x)-a_k(x')](F(x+l))^{\alpha_k}| \lesssim&\ 2^{-(1/2-2\epsilon)m}+\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}O(|x-x'|)+\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}O(|x-x'|^2)\\ \lesssim&\ 2^{-(1/2-2\epsilon)m}+\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}O(|x-x'|). \end{aligned} \end{equation} Applying Lemma \ref{l31} to (\ref{set2}), we have for $1\le k\le N$, \begin{equation}\label{az1} |a_k(x)-a_k(x')|\lesssim\ \frac{C_{\vec{\alpha},N}}{\tilde{d}^{N-1}} \big(2^{-(1/2-2\epsilon)m}+\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}O(|x-x'|)\big). \end{equation} Note that we can not directly use Lemma \ref{l31} to the object on the left side of (\ref{set1}) because the coefficient $F'(x+l)$ depends on $l$. However, thanks to (\ref{perty}), $F'(x+l)=\frac{1}{\mathcal{F}'(F(x+l))}$ and the fact that $\mathcal{F}$ is a polynomial of degree not more than $N$, we can get around this barrier. Multiplying (\ref{set1}) by $\mathcal{F}'(F(x+l))$, using $|\mathcal{F}'(F(x+l))|\le C_1$ and the fact that $\mathcal{F}'$ is a polynomial of degree not more than $N-1$, we obtain \begin{equation}\label{set11} \begin{aligned} &\ \ |\sum_{k=1}^N[a_k(x)-a_k(x')](F(x+l))^{\alpha_k}\mathcal{F}'(F(x+l))- \sum_{k=1}^N\alpha_k a_k(x') (x-x')F(x+l)^{\alpha_k-1}|\\ \lesssim&\ 2^{-(1/2-2\epsilon)m}+\frac{C_{\vec{ \alpha},N}}{\tilde{d}^{N-1}}O(|x-x'|^2). 
\end{aligned} \end{equation} Observe that the lowest power on the left side of (\ref{set11}) is $\alpha_1-1$, it follows by Lemma \ref{l31} and (\ref{sparse}) that \begin{equation}\label{e10} \begin{aligned} |a_1(x')| \lesssim&\ \frac{C_{\vec{\alpha},N}}{|x-x'|\tilde{d}^{N^2-1}}( 2^{-(1/2-2\epsilon)m}+\frac{1}{\tilde{d}^{N-1}}O(|x-x'|^2))\\ \lesssim&\ \frac{C_{\vec{\alpha},N}}{\tilde{d}^{N^2+N}} 2^{-(1/4-2\epsilon)m}2^{2^{2N}2\epsilon m}, \end{aligned} \end{equation} which, with (\ref{az1}), implies $ |a_1(x)|+|a_1(x')| \lesssim\ \frac{C_{\vec{\alpha},N}}{\tilde{d}^{N^2+N}} 2^{-(1/4-2\epsilon)m}2^{2^{2N}2\epsilon m}. $ So $$|\sum_{k=2}^Na_k(x)(F(x+l))^{\alpha_k}-\sum_{k=2}^Na_k(x') (F(x'+l))^{\alpha_k}|\lesssim \frac{C_{\vec{\alpha},N}}{\tilde{d}^{N^2+N}} 2^{-(1/4-2\epsilon)m}2^{2^{2N}2\epsilon m}. $$ Repeating the above process $N-1$ times, we get for any $1\le k\le N$, $x\in X_1^N$ with $X_1^N\subset X_1^{N-1}\subset\cdot\cdot\cdot\subset X_1^1\subset X_1$ and $|X_1^N|\ge \frac{|X_1|}{2^{N+1}}$ that $$ \begin{aligned} |a_k(x)|+|a_k(x')| \lesssim&\ \frac{C_{\vec{\alpha},N}}{\tilde{d}^{2N^2k}} 2^{-(\frac{1}{2^{k+1}}-2\epsilon)m}2^{2^{2N}2k\epsilon m}. \end{aligned} $$ Utilizing (\ref{abc1}), we deduce $$|a_k(x)|\lesssim C_{\vec{\alpha},N}(2^{2^{4N}2\epsilon m+\mu m})^{2N^2k} 2^{-(\frac{1}{2^{k+1}}-2\epsilon)m}2^{2^{2N}2k\epsilon m}.$$ Thanks to the conditions on $m$ and $\epsilon$ in (\ref{lower}), the right side is $\lesssim\ C_{\vec{\alpha},N}2^{-\varepsilon_1 m}$ for certain positive constant $\varepsilon_1=\varepsilon_1(N)$. This gives $$1\stackrel{(\ref{lower})}{\lesssim}\inf_{x\in X}|\frac{\partial}{\partial t} B(x,t)|\le \inf_{x\in X_1^N}|\frac{\partial}{\partial t} B(x,t)|\lesssim\ C_{\vec{\alpha},N}2^{-\varepsilon_1 m}$$ which yields a contradiction. Therefore, (\ref{As1}) does not hold, which completes the proof. \end{proof} \vskip.3in \section{Proof of Theorem \ref{t2} for $\sigma=1$: $L^p$ estimate of $\mathcal{H}_{\Delta_1}f$} \label{ll} In this section, we prove Theorem \ref{t2} for $\sigma=1$. Recall $$ \Delta_1=\{(m,n)\in\mathbb{Z}^2:\ m\le 0,\ n\le 0\},\ M_j(\xi,\eta)=\int e(\xi P(2^{-j}t)+\eta u(x)2^{-j}t)\rho(t)dt $$ and $$M_{j,\Delta_1} (\xi,\eta)=\sum_{(m,n)\in \Delta_1}\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})M_j(\xi,\eta).$$ Denote \begin{equation}\label{dff1} \wp:=\{(n_1,n_2)\in \mathbb{Z}^2:\ n_1\ge 0,\ n_2\ge 0,\ n_1+n_2>0\},\ \ \widehat{\Phi_{n_3}}(\xi):=\xi^{n_3}\widehat{\Phi}(\xi),\ n_3\in\mathbb{Z}. 
\end{equation} Since $(m,n)\in \Delta_1$ and $j\in S_l$, taking into account the support of $\widehat{\Phi}(\cdot)$ and using the Taylor expansions $$e(\xi P(2^{-j}t))=\sum_{q\ge 0}\frac{i^q}{q!}(\xi P(2^{-j}t))^q=\sum_{q\ge 0}\frac{i^q}{q!}2^{mq}(\frac{\xi}{2^{m+jl}})^q(2^{jl} P(2^{-j}t))^q$$ and $$e(\eta u(x)2^{-j}t)=\sum_{\upsilon\ge 0}\frac{i^\upsilon}{\upsilon!}(\eta u(x)2^{-j}t)^\upsilon=\sum_{\upsilon\ge 0}\frac{i^\upsilon}{\upsilon!}2^{n\upsilon}(\frac{\eta}{2^k})^\upsilon (\frac{u(x)}{2^{j-k+n}})^\upsilon t^\upsilon,$$ we have $$ \begin{aligned} M_{j,\Delta_1} (\xi,\eta)=&\ \sum_{(m,n)\in \Delta_1}\sum_{k\in\mathbb{Z}}\sum_{(q,\upsilon)\in \wp} \frac{i^{q+\upsilon}}{q!\upsilon!} 2^{mq+n\upsilon} \widehat{\Phi_q}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi_\upsilon}(\frac{u(x)}{2^{j-k+n}}) \widehat{\Phi_\upsilon}(\frac{\eta}{2^{k}}) \gamma_{j,q,\upsilon}, \end{aligned} $$ where $\gamma_{j,q,\upsilon}:=\int_\mathbb{R} (2^{jl}P(2^{-j}t))^q t^\upsilon \rho(t)dt.$ It is not hard to see that the tricky cases are $q=0$ and $\upsilon=0$, which lead us to split the summation over $(q,\upsilon)$ into three parts: $\sum_{q=0,\upsilon\ge 1}+\sum_{\upsilon=0,q\ge 1}+\sum_{\upsilon\ge 1,q\ge 1}.$ In the following, we only give the detailed proof for the first two terms since it is easier to dominate the third term. \vskip.1in Denote $$ M_{j,\Delta_1,1}(\xi,\eta):=\sum_{(m,n)\in \Delta_1}\sum_{k\in\mathbb{Z}}\sum_{\upsilon\ge1} \frac{i^{\upsilon}}{\upsilon!} 2^{n\upsilon} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi_\upsilon}(\frac{u(x)}{2^{j-k+n}}) \widehat{\Phi_\upsilon}(\frac{\eta}{2^{k}}) \gamma_{j,0,\upsilon} $$ and $$M_{j,\Delta_1,2}(\xi,\eta):=\sum_{(m,n)\in \Delta_1}\sum_{k\in\mathbb{Z}}\sum_{q\ge 1} \frac{i^{q}}{q!} 2^{mq} \widehat{\Phi_q}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \gamma_{j,q,0},$$ it suffices to show that for $\sigma=1,2$, \begin{equation}\label{A1} \|\mathcal{H}_{\Delta_1,\sigma}f\|_p\lesssim_N \|f\|_p, \end{equation} where $$ \mathcal{H}_{\Delta_1,\sigma}f(x,y):=\int_{\xi,\eta}\widehat{f}(\xi,\eta) e(\xi x+\eta y) \sum_{j\in S_l} M_{j,\Delta_1,\sigma}(\xi,\eta)d\xi d\eta. $$ \subsection{The estimate of $\mathcal{H}_{\Delta_1,1}f(x,y)$ } We will prove (\ref{A1}) for $\sigma=1$. By dual arguments, it is enough to show that for all $g\in L^{p'}(\mathbb{R}^2)$, \begin{equation}\label{D2} |\langle \mathcal{H}_{\Delta_1,1}f,g \rangle| \lesssim_N\ \|f\|_p\|g\|_{p'}.
\end{equation} Denote \begin{equation}\label{df2} \widehat{\hbar}(\xi):=\sum_{m\le 0}\widehat{\Phi}(\frac{\xi}{2^m}),\ \hbar_s(x):=2^s\hbar(2^sx), \ \Phi_{\upsilon,k}(x):=2^k\Phi_{\upsilon}(2^kx),\ \Psi_k(y):=(\sum_{|k'-k|\le1}\widehat{\Phi}(2^{-k'}\eta) )^{\vee}(-y), \end{equation} where $|\Phi_{\upsilon}(x)|\lesssim\ 2^{2\upsilon}(1+|x|^2)^{-1}$. It follows from the Fourier inversion formula, the bound $|\gamma_{j,0,\upsilon}|\lesssim 2^\upsilon$ and H\"{o}lder's inequality that \begin{equation}\label{123} \begin{aligned} {\rm LHS\ of\ (\ref{D2})}\lesssim&\ \sum_{n\le 0}\sum_{\upsilon\ge 1} \frac{2^{n\upsilon}}{\upsilon!} |\langle\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \widehat{\Phi_\upsilon}(\frac{u(x)}{2^{j-k+n}}) \gamma_{j,0,\upsilon} \hbar_{jl}*_x \Phi_{\upsilon,k}*_y f,\ \Psi_k*_yg \rangle|\\ \lesssim&\ \sum_{n\le 0}\sum_{\upsilon\ge 1} \frac{2^{(n+1)\upsilon}}{\upsilon!} \Big\|\big(\sum_{j\in S_l}\sum_{k\in\mathbb{Z}}|\widehat{\Phi_\upsilon} (\frac{u(x)}{2^{j-k+n}})| |\hbar_{jl}*_x \Phi_{\upsilon,k}*_y f|^2\big)^\frac{1}{2}\Big\|_p\\ &\ \times \Big\|\big(\sum_{j\in S_l}\sum_{k\in\mathbb{Z}}|\widehat{\Phi_\upsilon} (\frac{u(x)}{2^{j-k+n}})| | \Psi_k*_yg|^2\big)^\frac{1}{2}\Big\|_{p'}. \end{aligned} \end{equation} Since $\sum_{j\in S_l}|\widehat{\Phi}_\upsilon (\frac{u(x)}{2^{j-k+n}})|\lesssim 2^\upsilon$, the Littlewood-Paley theorem implies that the norm $\|\cdot\|_{p'}$ on the right side of (\ref{123}) is $\lesssim 2^{\upsilon/2}\|g\|_{p'}$. In what follows, $M^{(1)}$ and $M^{(2)}$ denote the Hardy-Littlewood maximal operators applied in the first variable and the second variable, respectively. Note that $|\hbar_{jl}*_x f|\lesssim M^{(1)}f$. Then we bound the norm $\|\cdot\|_p$ on the right side of (\ref{123}) by $$ \begin{aligned} \Big\|\big(\sum_{j\in S_l}\sum_{k\in\mathbb{Z}}|\widehat{\Phi}_\upsilon (\frac{u(x)}{2^{j-k+n}})| |M^{(1)}( \Phi_{\upsilon,k}*_y f)|^2\big)^\frac{1}{2}\Big\|_p \lesssim&\ 2^{\upsilon/2}\Big\|\big(\sum_{k\in\mathbb{Z}} |M^{(1)}( \Phi_{\upsilon,k}*_y f)|^2\big)^\frac{1}{2}\Big\|_p\\ \lesssim&\ 2^{\upsilon/2}\Big\|\big(\sum_{k\in\mathbb{Z}} | \Phi_{\upsilon,k}*_y f|^2\big)^\frac{1}{2}\Big\|_p\\ \lesssim&\ 2^{5\upsilon/2}\Big\|\big(\sum_{k\in\mathbb{Z}} | M^{(2)} (\tilde{\Psi}_k*_yf)|^2\big)^\frac{1}{2}\Big\|_p\\ \lesssim&\ 2^{5\upsilon/2}\|f\|_p, \end{aligned} $$ where $\tilde{\Psi}_k(y):=\Psi_k(-y)$, and Fefferman-Stein's inequality and the Littlewood-Paley theorem are applied. Inserting these estimates into (\ref{123}) leads to ${\rm LHS\ of\ (\ref{D2})}\lesssim \|f\|_p\|g\|_{p'}\sum_{n\le 0}\sum_{\upsilon\ge 1} \frac{2^{(n+1)\upsilon}8^\upsilon}{\upsilon!}.$ Therefore, the proof of (\ref{D2}) is completed. \subsection{The estimate of $\mathcal{H}_{\Delta_1,2}f(x,y)$ } We will prove (\ref{A1}) for $\sigma=2$. Using $\sum_{n\le 0}\widehat{\Phi}(\frac{u(x)}{2^{j-k+n}}) =\widehat{\hbar}(\frac{u(x)}{2^{j-k}}),$ we have $$M_{j,\Delta_1,2}(\xi,\eta)=\sum_{m\le0} \sum_{k\in\mathbb{Z}}\sum_{q\ge 1} \frac{i^{q}}{q!} 2^{mq} \widehat{\Phi}_q(\frac{\xi}{2^{jl+m}}) \widehat{\hbar}(\frac{u(x)}{2^{j-k}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \gamma_{j,q,0}.$$ We remark that at this point it is different from the previous case, since the summation over $j$ cannot be absorbed by $\widehat{\hbar}(\frac{u(x)}{2^{j-k}})$ anymore. Our strategy is to make full use of $\widehat{\Phi}_q(\frac{\xi}{2^{jl+m}})$. \vskip.1in Analogously, dual arguments give that it suffices to prove that for all $g\in L^{p'}$, \begin{equation}\label{D3} |\langle \mathcal{H}_{\Delta_1,2}f,g \rangle| \lesssim\ \|f\|_p\|g\|_{p'}.
\end{equation} Applying Fourier inverse transform, H\"{o}lder's inequality and Littlewood-Paley theorem, we obtain $$ \begin{aligned} {\rm LHS\ of\ } (\ref{D3})\lesssim&\ \sum_{m\le 0}\sum_{q\ge 1}\frac{2^{mq}}{q!} |\langle\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \widehat{\hbar}(\frac{u(x)}{2^{j-k}}) \gamma_{j,q,0} \Phi_{q,jl+m}*_x \Phi_{k}*_y f,\ \Psi_{k}*_yg \rangle|\\ \lesssim&\ \Big\|\big(\sum_{k\in\mathbb{Z}}|\Psi_{k}*_yg|^2 \big)^\frac{1}{2}\Big\|_{p'}\sum_{m\le 0}\sum_{q\ge 1}\frac{2^{mq}}{q!} \beth_{m,q}\lesssim\ \|g\|_{p'}\sum_{m\le 0}\sum_{q\ge 1}\frac{2^{mq}}{q!} \beth_{m,q}, \end{aligned} $$ where $$\beth_{m,q}:=\Big\|\big(\sum_{k\in\mathbb{Z}}|\sum_{j\in S_l}\widehat{\hbar}(\frac{u(x)}{2^{j-k}}) \gamma_{j,q,0} \Phi_{q,jl+m}*_x \Phi_{k}*_y f|^2\big)^\frac{1}{2}\Big\|_{p}.$$ So it is enough to prove $ \beth_{m,q}\lesssim\ 100^q\|f\|_p. $ Density arguments yields that it suffices to prove for any $M_1>0$, measurable functions $ z(x)\in \mathbb{Z}$ and $Z(x)\in \mathbb{Z}$, \begin{equation}\label{D5} \tilde{\beth}_{m,q}\lesssim\ 100^q\|f\|_p, \end{equation} where $$\tilde{\beth}_{m,q}:=\Big\|\big(\sum_{|k|\le M_1}|\sum_{j\in [z(x),Z(x)]}\chi_{S_l}(j)\widehat{\hbar}(\frac{u(x)}{2^{j-k}}) \gamma_{j,q,0} \Phi_{q,jl+m}*_x \Phi_{k}*_y f|^2\big)^\frac{1}{2}\Big\|_{p}.$$ Note that if $z(x)=Z(x)$, (\ref{D5}) is immediately achieved by applying Fefferman-Stein's inequality and Littlewood-Paley theorem. In what follows, we assume $z(x)\le Z(x)-1$. As the previous statement, here we want to make full use of $\Phi_{q,jl+m}$ to absorb the summation of $j$. \vskip.1in Recall the process of Abel summation, that is, denote $S_n=\sum_{\kappa\le n}J_\kappa$, we have \begin{equation}\label{E1} \begin{aligned} \sum_{\kappa=z(x)}^{Z(x)} H_\kappa J_\kappa =&\ \sum_{\kappa=z(x)}^{Z(x)} H_\kappa (S_\kappa-S_{\kappa-1}) = \sum_{\kappa=z(x)}^{Z(x)} H_\kappa S_\kappa - \sum_{\kappa=z(x)-1}^{Z(x)-1} H_{\kappa+1} S_\kappa\\ =&\ H_{Z(x)}S_{Z(x)}+ \sum_{\kappa=z(x)}^{Z(x)-1} (H_{\kappa}-H_{\kappa+1}) S_\kappa-H_{z(x)}S_{z(x)-1}. \end{aligned} \end{equation} Applying (\ref{E1}) with $H_j:=\widehat{\hbar}(\frac{u(x)}{2^{j-k}}),\ \ J_j:=\Phi_{q,jl+m}\chi_{S_l}(j) \gamma_{j,q,0},$ the first and third term can be bounded by the same way as yielding the desired estimate for the case $z(x)=Z(x)$. So we only pay attention to the second term. That is, to prove (\ref{D5}), it suffices to show \begin{equation}\label{D6} \tilde{\beth}_{m,q}^{(2)}\lesssim\ 100^q\|f\|_p, \end{equation} where $$\tilde{\beth}_{m,q}^{(2)}:= \Big\|\big(\sum_{|k|\le M_1}|\sum_{j\in [z(x),Z(x)-1]}\big(\widehat{\hbar}(\frac{u(x)}{2^{j-k}}) -\widehat{\hbar}(\frac{u(x)}{2^{j+1-k}}) \big) \bar{\hbar}_{q,jl+m}*_x \Phi_{k}*_y f|^2\big)^\frac{1}{2}\Big\|_{p},$$ in which $\bar{\hbar}_{q,jl+m}:=\sum_{j'\le j}\Phi_{q,j'l+m}\chi_{S_l}(j')\gamma_{j',q,0}$. Hence, to prove (\ref{D6}), it suffices to show \begin{equation}\label{D7} \Im(x):=\sup_{k}\sum_{j\in [z(x),Z(x)-1]}|\widehat{\hbar}(\frac{u(x)}{2^{j-k}}) -\widehat{\hbar}(\frac{u(x)}{2^{j+1-k}}) |\lesssim1. 
\end{equation} As a matter of fact, denote $\bar{\hbar}_{q,\infty}:=\sum_{j'\in\mathbb{Z}}\Phi_{q,j'l+m}\chi_{S_l}(j')\gamma_{j',q,0}$, we have via (\ref{D7}), a variant of Cotlar's inequality, Fefferman-Stein's inequality and Littlewood-Paley theorem that $$ \begin{aligned} \tilde{\beth}_m^{(2)}\lesssim&\ 2^{3q} \Big\|\left(\sum_{|k|\le M_1}\left| \sup_{j\in [z(x),Z(x)]}\big| \bar{\hbar}_{q,jl+m}*_x \Phi_{k}*_y f\big|\right|^2\right)^\frac{1}{2}\Big\|_{p}\\ \lesssim&\ 2^{3q}\Big\|\left(\sum_{|k|\le M_1}\left| M^{(1)} (\Phi_{k}*_y f)\right|^2\right)^\frac{1}{2}\Big\|_{p} +2^{3q}\Big\|\left(\sum_{|k|\le M_1}\left| M^{(1)} (\bar{\hbar}_{q,\infty}*_x \Phi_{k}*_y f)\right|^2\right)^\frac{1}{2}\Big\|_{p}\\ \lesssim&\ 2^{3q}\|f\|_p. \end{aligned} $$ Thus it remains to prove (\ref{D7}). By the fundamental theorem of calculus, we have $$ \begin{aligned} |\widehat{\hbar}(\frac{u(x)}{2^{j-k}}) -\widehat{\hbar}(\frac{u(x)}{2^{j+1-k}})| =&\ |\int_0^1\frac{d}{ds} \big(\widehat{\hbar}(\frac{u(x)}{2^{j-k}}s +\frac{u(x) }{2^{j+1-k}}(1-s)) \big)ds|\\\ =&\ |\frac{u(x)}{2^{j+1-k}}|\int_0^1|\widehat{\hbar}' \big(\frac{u(x)}{2^{j+1-k}}(1+s) \big)|ds \lesssim\ \frac{|\frac{u(x)}{2^{j+1-k}}| }{1+|\frac{u(x)}{2^{j+1-k}}|^2}, \end{aligned} $$ Hence, (\ref{D7}) follows since $\sum_{j\in\mathbb{Z}}\frac{|\frac{u(x)}{2^{j+1-k}}| }{1+|\frac{u(x)}{2^{j+1-k}}|^2}\lesssim1$. This completes the proof of (\ref{A1}). \section{Proof of Theorem \ref{t2} for $\sigma=2,3$: $L^p$ estimates of $\mathcal{H}_{\Delta_2}f$ and $\mathcal{H}_{\Delta_3}f$} \label{lh} In this section, Theorem \ref{t2} for $\sigma=2,3$ will be proved. As the previous explanation, we only give the details for $\sigma=2$. \subsection{The estimate of $\mathcal{H}_{\Delta_2}f$} Without loss of generality, we assume that $m>n+100l$ and $n>0$. Remember $$\phi_{j,\xi,\eta,x}(t)=\xi P(2^{-j}t)+\eta u(x)2^{-j}t,\ M_j(\xi,\eta)=\int_\mathbb{R} e(\phi_{j,\xi,\eta,x}(t)) \rho(t) dt$$ and the definition of $M_{j,\Delta_2}(\xi,\eta)$ $$ M_{j,\Delta_2} (\xi,\eta)=\sum_{(m,n)\in \Delta_2}\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})M_j(\xi,\eta). $$ we see that the phase function $\phi_{j,\xi,\eta,x}(t)$ does not have a critical point since $m>n+100l$. In fact, $|\phi_{j,\xi,\eta,x}'(t)|$ owns a large lower bound, that is, $$ \begin{aligned} |\phi_{j,\xi,\eta,x}'(t)|=&\ |\xi 2^{-j}P'(2^{-j}t)+\eta u(x)2^{-j}|\\ \ge&\ 2^m|2^{-jl-m}\xi||2^{j(l-1)}P'(2^{-j}t)|- 2^n|2^{-k}\eta||2^{-(j-k+n)}u(x)|\\ \gtrsim_l&\ 2^m, \end{aligned} $$ which prompts us to employ integration by parts. \vskip.1in Motivated by the above analysis, we have via utilizing integration by parts $$ \begin{aligned} M_j(\xi,\eta)=&\ -i\int \frac{1}{\phi_{j,\xi,\eta,x}'(t)}\frac{d}{dt}(e^{i\phi_{j,\xi,\eta,x}(t)}) \rho(t)dt\\ =&\ i\int \frac{1}{\phi_{j,\xi,\eta,x}'(t)}e^{i\phi_{j,\xi,\eta,x}(t)} \rho'(t)dt+i\int \frac{\phi_{j,\xi,\eta,x}''(t) }{(\phi_{j,\xi,\eta,x}'(t))^2}e^{i\phi_{j,\xi,\eta,x}(t)} \rho(t)dt. \end{aligned} $$ Since the above two terms can be homoplastically treated, we only focus on the first term. Taylor's expansion provides $$ \begin{aligned} \frac{1}{\phi_{j,\xi,\eta,x}'(t)}=& \ \frac{1}{\xi 2^{-j}P'(2^{-j}t)}\frac{1}{1+\frac{u(x)\eta}{\xi P'(2^{-j}t)}} =\ \frac{1}{\xi 2^{-j}P'(2^{-j}t)}\sum_{r=0}^\infty (-\frac{u(x)\eta}{\xi P'(2^{-j}t)})^r\\ =&\ 2^{-m}\sum_{r=0}^\infty (-1)^r 2^{(n-m)r}(\frac{\xi}{2^{jl+m}})^{-r-1} (\frac{u(x)}{2^{j-k+n}})^r (\frac{\eta}{2^k})^r (2^{j(l-1)}P'(2^{-j}t))^{-r-1}. 
\end{aligned} $$ As stated in Section \ref{s1}, Taylor's expansion is used to obtain the rapid decay. Recall the definition of $\widehat{\Phi_r}(\cdot)$ in (\ref{dff1}), and denote $\rho_{j,r}(t):=(2^{j(l-1)}P'(2^{-j}t))^{-r-1}\rho'(t)$, $$ \begin{aligned} M_{j,\Delta_2,m,n}(\xi,\eta):=&\ 2^{-m}\sum_{k\in\mathbb{Z}}\sum_{r=0}^\infty (-1)^r 2^{(n-m)r} \widehat{\Phi_{-r-1}}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi_r}(\frac{\eta}{2^{k}}) \widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})\int_\mathbb{R} e^{i\phi_{j,\xi,\eta,x}(t)} \rho_{j,r}(t)dt, \end{aligned} $$ and $M_{\Delta_2,m,n}(\xi,\eta):=\sum_{j\in S_l} M_{j,\Delta_2,m,n}(\xi,\eta)$. Thus, in order to prove (\ref{aim}) for $\sigma=2$, it suffices to prove that \begin{equation}\label{AA1} \Big\|\int_{\mathbb{R}^2} e(\xi x+\eta y) \widehat{f}(\xi,\eta) M_{\Delta_2,m,n}(\xi,\eta) d\xi d\eta\Big\|_p\lesssim\ m^22^{-m}\|f\|_p. \end{equation} Denote $$M_{r,m,n}(\xi,\eta):=\sum_{k\in\mathbb{Z}}\sum_{j\in S_l} \widehat{\Phi_{-r-1}}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi_r}(\frac{\eta}{2^{k}}) \widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})\int_\mathbb{R} e^{i\phi_{j,\xi,\eta,x}(t)} \rho_{j,r}(t)dt$$ and $$T_{r,m,n}f(x,y):=2^{-m}\int_{\mathbb{R}^2} e(\xi x+\eta y) \widehat{f}(\xi,\eta) M_{r,m,n}(\xi,\eta) d\xi d\eta.$$ To demonstrate (\ref{AA1}), it suffices to show that there exists a positive constant $\mathfrak{H}\le 2^{m-n-1}$ such that for all $g\in L^{p'}(\mathbb{R}^2)$, \begin{equation}\label{A41} |\langle T_{r,m,n}f,g\rangle |\lesssim\ \frac{m^2}{2^m}\mathfrak{H}^r \|f\|_p\|g\|_{p'}. \end{equation} Recalling the definition in (\ref{df2}), the Fourier inversion formula and H\"{o}lder's inequality give \begin{equation*} \begin{aligned} &\ {\rm LHS\ of}\ (\ref{A41}) =\ 2^{-m}|\langle\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})\\ &\ \ \ \ \times \int (f*_x\Phi_{-r-1,jl+m}*_y\Phi_{r,k}) (x-P(2^{-j}t),y-u(x)2^{-j}t) \rho_{j,r}(t)dt,g*_y\Psi_{k} \rangle |\\ \le&\ 2^{-m} \|I\|_{L^{p'}_{x,y}(l^2_{j,k})} \|II\|_{L^{p}_{x,y}(l^2_{j,k})}, \end{aligned} \end{equation*} where $\{W_{j,k}\}_{j,k}\in l^2_{j,k}$ means $(\sum_{j,k}|W_{j,k}|^2)^{1/2}\lesssim1$, $I:=\sum_{|j'-j|\le 1}|\widehat{\Phi}(\frac{u(x)}{2^{j'-k+n}})||g*_y\Psi_{k}|$ and $$II:=\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})\int (f*_x\Phi_{-r-1,jl+m}*_y\Phi_{r,k}) (x-P(2^{-j}t),y-u(x)2^{-j}t) \rho_{j,r}(t)dt. $$ In what follows, we assume $t\in (1/9,9)$. By the change of variable $2^{jl}P(2^{-j}t)\to \tau$ (since $j\in S_l$), we have $$ II=\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})\int (f*_x\Phi_{-r-1,jl+m}*_y\Phi_{r,k}) (x-2^{-jl}\tau,y-u(x)2^{-j}t(\tau)) \rho_{j,r}(t(\tau))t'(\tau)d\tau, $$ where the function $t(\tau)$ satisfying $t'(\tau)\thicksim1$ is the inverse function of $\tau(t)=2^{jl}P(2^{-j}t)$. Thanks to $\sum_{j\in S_l}(\sum_{|j'-j|\le 1}|\widehat{\Phi}(\frac{u(x)}{2^{j'-k+n}})|)^2\lesssim 1$, we have by the Littlewood-Paley theorem $$\|I\|_{L^{p'}_{x,y}(l^2_{j,k})} \lesssim\ \|(\sum_{k\in\mathbb{Z}}|g*_y\Psi_{k}|^2)^\frac{1}{2}\|_{p'} \lesssim\ \|g\|_{p'}.$$ For the estimate of $\|II\|_{L^{p}_{x,y}(l^2_{j,k})}$, since $|\int_\mathbb{R}\sup_{j\in S_l} \rho_{j,r}(t(\tau))t'(\tau)d\tau|\lesssim (2l)^{r+1},$ it suffices to show that for all $\tau_0\in(1/9,9)$, \begin{equation}\label{a55} \Big\|\|\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}}) (f*_x\Phi_{-r-1,jl+m}*_y\Phi_{r,k}) (x-2^{-jl}\tau_0^l,y-u(x)2^{-j}t(\tau_0^l))\|_{l^2_{j,k}}\Big\|_p\lesssim\ 10^{10r}mn\|f\|_p. \end{equation} \begin{lemma}\label{l900} Fix $x$.
For all $\{h_j(y)\}_{j\in \mathbb{Z}}\in L^p_y(l^2_j)$, we have for all $t\in(1/9,9)$, \begin{equation}\label{901} \big\|\|\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}}) (h_j*_y \Phi_{r,k})(x,y-u(x)2^{-j}t)\|_{l^2_{j,k}}\big\|_{L^p_y} \lesssim\ n 10^{8r}\|h_j(y)\|_{L^p_y(l^2_j)}. \end{equation} \end{lemma} We postpone the proof of Lemma \ref{l900} and continue the proof of (\ref{a55}). For any fixed $x$, choosing $h_j(y):=(f*_x\Phi_{-r-1,jl+m})(x-2^{-jl}\tau_0^l,y)$, thanks to (\ref{901}), it suffices to show that LHS of (\ref{a55}) is $\lesssim$ \begin{equation}\label{SME} n 10^{8r} \|\Big(\sum_{j\in S_l}|(f*_x\Phi_{-r-1,jl+m})(x-2^{-jl}\tau_0^l,y)|^2\Big)^\frac{1}{2} \|_p, \end{equation} whose proof is based on \begin{equation}\label{SS1} \begin{aligned} &\ \chi_{S_l}(j)|(f*_x\Phi_{-r-1,jl+m})(x-2^{-jl}\tau_0^l,y)|\\ \lesssim&\ 2^{3r}\Big(M^{[\tau_0^l2^{m+1}]}_1f(x,y) +\sum_{k\ge 0}2^{-2k} M^{[\tau_0^l2^{m-k+1}]}_1f(x,y) \Big), \end{aligned} \end{equation} in which $M^{[\cdot]}_1$ is the shifted maximal operator applied in the first variable, see the Appendix for its definition. We postpone the proof of (\ref{SS1}) to the end of this subsection. Indeed, thanks to (\ref{SS1}), utilizing the vector-valued shifted maximal estimate (\ref{sme1}) and the Littlewood-Paley theorem, we deduce $$ \begin{aligned} {\rm (\ref{SME})}\lesssim\ \ &\ n 10^{9r} \Big\{ \|\Big(\sum_{j\in S_l}|M^{[\tau_0^l2^{m+1}]}_1(\Phi_{jl+m}*_xf)|^2\Big)^\frac{1}{2} \|_p\\ &\ +\sum_{k\ge 0}2^{-2k} \|\Big(\sum_{j\in S_l}|M^{[\tau_0^l2^{m-k+1}]}_1(\Phi_{jl+m}*_xf)|^2\Big)^\frac{1}{2} \|_p\Big\}\\ \lesssim_N&\ m n 10^{9r}\|\Big(\sum_{j\in \mathbb{Z}}|\Phi_{jl+m}*_xf|^2\Big)^\frac{1}{2} \|_p \lesssim_N\ mn 10^{9r}\|f\|_p. \end{aligned} $$ This ends the proof of (\ref{a55}). \vskip.1in Now, it remains to verify Lemma \ref{l900} and (\ref{SS1}). \begin{proof}[Proof of Lemma \ref{l900}] Denote $F(s):=\{h_j(s)\}_{j\in\mathbb{Z}}$, and $$\vec{K}(y,s)(\{a_j\}_{j\in\mathbb{Z}}):=\{a_j \widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}}) \Phi_{r,k}(y-s+u(x)2^{-j}t) \}_{j,k},$$ and $\vec{T}(F)(y):=\int_\mathbb{R} \vec{K}(y,s) F(s) ds.$ In order to prove (\ref{901}), it suffices to show \begin{equation}\label{911} \|\vec{T}(F)\|_{L^p_y(l^2_{j,k})}\lesssim\ n 10^{8r}\|F\|_{L^p_y(l^2_j)}. \end{equation} We first have $$ \begin{aligned} \|\vec{T}(F)(y)\|_{l^2_{j,k}}^2 =&\ \sum_{j,k}\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})^2 |\int_\mathbb{R} h_j(s) \Phi_{r,k}(y-s+u(x)2^{-j}t)ds |^2\\ =&\ \sum_{j,k}\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}})^2 | h_j*_y \Phi_{r,k}(y+u(x)2^{-j}t) |^2\\ \le&\ 4^r\sum_{j,k} | h_j*_y \Phi_{r,k}(y+u(x)2^{-j}t) |^2. \end{aligned} $$ Fubini's theorem and the Littlewood-Paley theorem give \begin{equation}\label{bb1} \begin{aligned} \|\vec{T}(F)(y)\|_{L^2(l^2_{j,k})}^2\le &\ 4^r \sum_{j\in\mathbb{Z}}\int_\mathbb{R}\sum_{k\in\mathbb{Z}}|h_j*_y\Phi_{r,k}|^2dy \lesssim\ 16^r \|F\|_{L^2(l^2(\mathbb{Z}))}^2. \end{aligned} \end{equation} Next, we shall prove \begin{equation}\label{bb2} \begin{aligned} \int_{|y-s|>2|s-z|} \|\vec{K}(y,s)-\vec{K}(y,z)\|_{l^2(\mathbb{Z})\rightarrow l^2(\mathbb{Z}^2)} dy\lesssim 10^{6r} n. \end{aligned} \end{equation} If (\ref{bb2}) holds, then together with (\ref{bb1}) and the symmetry of $\vec{K}(y,s)$, we deduce the desired result from an application of Theorem \ref{a.2}. Thus it remains to show (\ref{bb2}).
\vskip.1in The translation invariance of $\vec{K}(y,s)$ gives that it suffices to show the case $s=0$, i.e., \begin{equation}\label{end12} \int_{|y|>2|z|} \|\vec{K}(y,0)-\vec{K}(y,z)\|_{l^2(\mathbb{Z})\rightarrow l^2(\mathbb{Z}^2)} dy\lesssim\ 10^{6r} n. \end{equation} For all $a_j\in l^2(\mathbb{Z})$ and $\|a_j\|_{l^2_j}\le1$, we have \begin{equation}\label{aam} \begin{aligned} &\ \ \|(\vec{K}(y,0)-\vec{K}(y,z))\{a_j\}\|_{l^2(\mathbb{Z}^2)}\\ =&\ \|\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}}) \big(\Phi_{r,k}(y+u(x)2^{-j}t)- \Phi_{r,k}(y-z+u(x)2^{-j}t)\big)\{a_j\}\|_{l^2(\mathbb{Z}^2)}\\ =&\ \Big(\sum_{j,k}|a_j|^2 |\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}}) |^2 |\Phi_{r,k}(y+u(x)2^{-j}t)- \Phi_{r,k}(y-z+u(x)2^{-j}t)|^2\Big)^{1/2}\\ \le&\ \|a_j\|_{l^2_j} \Big(\sup_j\sum_k|\widehat{\Phi_r}(\frac{u(x)}{2^{j-k+n}}) |^2 |\Phi_{r,k}(y+u(x)2^{-j}t)- \Phi_{r,k}(y-z+u(x)2^{-j}t)|^2\Big)^{1/2}\\ \le&\ 2^{r} \sup_j\sum_k|\widehat{\Phi}(\frac{u(x)}{2^{j-k+n}}) | |\Phi_{r,k}(y+u(x)2^{-j}t)- \Phi_{r,k}(y-z+u(x)2^{-j}t)|. \end{aligned} \end{equation} We will discuss the relation between $|\frac{u(x)}{2^{j-k}}|(\thicksim 2^n)$ and $|z|$. We assume $z\neq0$. Define $k_{1z}\in \mathbb{Z}$ by $\frac{2^{n+10}}{|z|}\in [2^{k_{1z}},2^{k_{1z}+1})$ and $k_{2z}=k_{1z}-n-10$. Rewrite the summation of $k$ in (\ref{aam}) as follows: $$\sum_{k}\cdot=\sum_{k\ge k_{1z}}\cdot+\sum_{k< k_{2z}}\cdot+\sum_{k_{2z}\le k< k_{1z}}\cdot.$$ Thus we have $${\rm LHS \ of\ (\ref{aam})} \le\ 2^{r}\sup_j\big( \sum_{k\ge k_{1z}}\cdot+\sum_{k< k_{2z}}\cdot+\sum_{k_{2z}\le k< k_{1z}}\cdot\big)=: L_{1r}(y,z)+L_{2r}(y,z)+L_{3r}(y,z).$$ So (\ref{end12}) follows from \begin{equation}\label{end124} \tilde{L}_\kappa:=\int_{|y|\ge 2|z|} L_{\kappa r}(y,z) dy\lesssim\ 10^{6r}n\ {\rm\ for\ all}\ \kappa=1,2,3. \end{equation} For $\tilde{L}_1$, because of ${k\ge k_{1z}}$, we have $|2^ku(x)2^{-j}t|\le 2^{n+4}$, which yields $|2^ky|\ge 2^{k-1}|y-z|\ge 2^{k_{1z}}|z|\ge 2|2^ku(x)2^{-j}t|$. Then $$ \tilde{L}_1\lesssim 2^{4r}\sum_{k\ge k_{1z}} \int_{|y|\ge 2|z|} \frac{2^k}{(1+2^k|y|)^3} dy \lesssim 2^{4r}\sum_{k\ge k_{1z}}\frac{1}{(1+2^k|z|)^2} \lesssim\ 2^{4r}(2^{k_{1z}}|z|)^{-2}\lesssim 2^{4r}2^{-2n}.$$ By the fundamental theorem of calculus, we obtain $$ \begin{aligned} \tilde{L}_2\lesssim&\ 2^{3r}\sum_{k\le k_{2z}}2^{2k}|z|\sum_{j\in\mathbb{Z}} \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}}) \int_{|y|\ge 2|z|}\int_0^1 \frac{1}{1+|2^k(y-sz+u(x)2^{-j}t)|^2} dsdy\\ \lesssim&\ 2^{3r}\sum_{k\le k_{2z}}2^{k}|z|\sup_j \int_0^1\int_{|y|\ge 2^{k+1}|z|}\frac{1}{1+|y-2^k(sz-u(x)2^{-j}t)|^2} dy ds\\ \lesssim&\ 2^{3r}\sum_{k\le k_{2z}}2^{k}|z|\lesssim\ 2^{3r} 2^{k_{2z}}|z|\lesssim 2^{3r}. \end{aligned} $$ Finally, we estimate $\tilde{L}_3$. Denote $\diamondsuit:=\{k\in\mathbb{Z}:k_{1z}\ge k\ge k_{2z}\}$, observe that $\sharp \diamondsuit\lesssim n$, we have $$ \begin{aligned} \tilde{L}_3\lesssim&\ 2^{3r}\sum_{k\in\diamondsuit} \sum_j\widehat{\Phi}(\frac{u(x)}{2^{j-k+n}}) \int_{|y|\ge 2|z|} \big(\frac{2^k}{1+|2^k(y+u(x)2^{-j}t)|^2}+ \frac{2^k}{1+|2^k(y-z+u(x)2^{-j}t)|^2} \big)dy\\ \lesssim&\ 2^{4r}\sum_{k\in\diamondsuit} \int_{\mathbb{R}}\frac{1}{1+|y|^2}dy \lesssim\ 2^{4r} n. \end{aligned} $$ Combining the above estimates of $\tilde{L}_1$, $\tilde{L}_2$ and $\tilde{L}_3$, we complete the proof of (\ref{end124}). This concludes the proof of Lemma \ref{l900}. 
\end{proof} \begin{proof}[Proof of (\ref{SS1})] Making use of the properties of $\Phi$, we obtain $$\chi_{S_l}(j)|(f*_x\Phi_{-r-1,jl+m})(x-2^{-jl}\tau _0^l,y)| \lesssim\ 2^{3r}\int|f(x-2^{-jl}\tau _0^l-x',y)|\frac{2^{jl+m}}{1+|2^{jl+m} x'|^3}dx'.$$ The constant $2^{3r}$ comes from the pointwise estimate of $\Phi_{-r-1,jl+m}$. The integral on the right side is majorized by $$\int_{|x'|\le 2^{-jl-m}}\cdot+\sum_{k\ge 0}\int_{|x'|\in [ 2^{-jl-m+k}, 2^{-jl-m+k+1}]}\cdot=:L_1+L_2.$$ For $L_1$, we have $$ \begin{aligned} L_1\lesssim&\ 2^{jl+m}\int_{|x'|\le 2^{-jl-m}}|f(x-2^{-jl}\tau _0^l-x',y)| dx'\\ \lesssim&\ 2^{jl+m}\int_{|x-2^{-jl}\tau _0^l-z|\le 2^{-jl-m}}|f(z,y)| dz\\ \lesssim&\ M^{[\tau_0^l2^{m+1}]}_1f(x,y). \end{aligned} $$ As for $L_2$, in a similar fashion, we obtain $$ \begin{aligned} L_2\lesssim&\ \sum_{k\ge 0}2^{-3k}2^{jl+m} \int_{|x'|\in [ 2^{-jl-m+k}, 2^{-jl-m+k+1}]}|f(x-2^{-jl}\tau _0^l-x',y)| dx'\\ \lesssim&\ \sum_{k\ge 0}2^{-2k} M^{[\tau_0^l2^{m-k+1}]}_1f(x,y). \end{aligned} $$ As a result, (\ref{SS1}) has been proved by combining the estimates of $L_1$ and $L_2$. \end{proof} \subsection{The estimate of $\mathcal{H}_{\Delta_3}f$} Without loss of generality, we assume $m>0$, $n\le 0$ and $m-n>100l$. We can achieve the goal by combining the approach leading to the estimate of $\mathcal{H}_{\Delta_2}f$ and certain ideas from the proof of the estimate of $\mathcal{H}_{\Delta_1}f$. More precisely, the essential difference from the estimate of $\mathcal{H}_{\Delta_2}f$ is that we shall first apply Taylor's expansion $e(\eta u(x)2^{-j})=\sum_{q\ge 0}\frac{i^q}{q!}(\eta u(x)2^{-j})^q$ to $M_j(\xi,\eta)$. The role of Taylor's expansion is similar to the one in the estimate of $\mathcal{H}_{\Delta_1}f$. \vskip.1in Using the above procedure, to get the desired estimate of $\mathcal{H}_{\Delta_3}f$, it suffices to show \begin{equation}\label{ppp} \sum_{q\ge 0}\frac{i^q}{q!}\| \int_{\mathbb{R}^2} e(\xi x+\eta y) \widehat{f}(\xi,\eta)\sum_{j\in S_l} M_{*,j,m,q}(\xi,\eta) d\xi d\eta\ \|_p\lesssim_N m^22^{-m} \|f\|_p, \end{equation} where $$M_{*,j,m,q}(\xi,\eta):=\sum_{n\le \min\{0,m-100l\}}\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})(\eta u(x)2^{-j})^q \int e(\xi P(2^{-j}t))t^q\rho(t)dt.$$ For $q=0$, we can absorb the summation over $n$ by $\widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})$. To prove (\ref{ppp}), it suffices to show that for $q=0$, \begin{equation}\label{eeq} \| \int_{\mathbb{R}^2} e(\xi x+\eta y) \widehat{f}(\xi,\eta)\sum_{j\in S_l} \tilde{M}_{*,j,m,0}(\xi,\eta) d\xi d\eta\ \|_p\lesssim_N m^22^{-m} \|f\|_p, \end{equation} where $$\tilde{M}_{*,j,m,0}(\xi,\eta):=\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \int e(\xi P(2^{-j}t))\rho(t)dt,$$ and for $q\ge 1$, \begin{equation}\label{ppp1} \| \int_{\mathbb{R}^2} e(\xi x+\eta y) \widehat{f}(\xi,\eta)\sum_{j\in S_l} \tilde{M}_{*,j,n,m,q}(\xi,\eta) d\xi d\eta\ \|_p\lesssim_N C^qm^22^{nq-m} \|f\|_p, \end{equation} where the uniform constant $C$ is independent of $m,n,q$ and $$\tilde{M}_{*,j,n,m,q}(\xi,\eta):=\sum_{k\in\mathbb{Z}}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+n}})(\eta u(x)2^{-j})^q \int e(\xi P(2^{-j}t))t^q\rho(t)dt.$$ Next, we sketch the proofs of (\ref{eeq}) and (\ref{ppp1}).
\vskip.1in {\bf Estimate (\ref{eeq})}\ \ Applying integration by parts, we have $$ \begin{aligned} \int e(\xi P(2^{-j}t))\rho(t)dt&\ =i \int \frac{\rho'(t)}{\xi 2^{-j}P'(2^{-j}t)}e(\xi P(2^{-j}t)) dt\\ &\ =i2^{-m}(\frac{\xi}{2^{jl+m}})^{-1}\int (\frac{P'(2^{-j}t)}{2^{-j(l-1)}})^{-1} \rho'(t) e(\xi P(2^{-j}t)) dt. \end{aligned} $$ Arguing similarly to the proof of (\ref{A41}), we can get the desired estimate for $q=0$. \vskip.1in {\bf Estimate (\ref{ppp1})}\ \ Using $(\eta u(x)2^{-j})^q=2^{nq}(\eta u(x)2^{-j-n})^q$, and performing an analogous process to the one yielding (\ref{A41}), we can also achieve the desired estimate for $q\ge 1$. \vskip.1in \section{Proof of Theorem \ref{t2} for $\sigma=4$:\ The estimate of $\mathcal{H}_{\Delta_4}f$} \label{hh} In this section, we prove Theorem \ref{t2} for $\sigma=4$, which is the core part of this paper. \subsection{The reduction of this estimate}\label{4.1} Without loss of generality, we assume $m\ge n$ in $\Delta_4$. Then we reduce the conditions on $m,n$ to $m>0$ and $m-100l\le n\le m$, which means that the summation over $n$ is finite once $m$ is fixed. Thus we can assume $m=n$ and set $\Delta_4=\{(m,n)\in\mathbb{Z}^2:\ m=n>0\}$ in the following. Taking advantage of $\rho(t)=\rho_+(t)+\rho_-(t),$ we just need to treat the operator with $\rho(t)$ replaced by $\rho_+(t)$, since the case of $\rho_-(t)$ can be handled in a similar way. For convenience, we still use $\rho(t)$ to stand for $\rho_+(t)$. \vskip.1in Recall $\phi_{j,\xi,\eta,x}(t):=\xi P(2^{-j}t)+\eta u(x)2^{-j}t,$ then $M_{j,\Delta_4}(\xi,\eta)=\sum_{m>0}M_{j,m,\Delta_4}(\xi,\eta),$ where $$ M_{j,m,\Delta_4}(\xi,\eta):= \sum_{k\in\mathbb{Z}} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})\int e^{i\phi_{j,\xi,\eta,x}(t)}\rho(t)dt.$$ Direct computations lead to $$\phi_{j,\xi,\eta,x}'(t)=\xi 2^{-j}P'(2^{-j}t)+u(x)2^{-j}\eta,\ \ \phi_{j,\xi,\eta,x}''(t)=\xi 2^{-2j}P''(2^{-j}t).$$ Denote $ G(t):=lt^{l-1}+\sum_{i=2,i\neq l}^Nia_i2^{j(l-i)}t^{i-1}, $ and $\bar{G}(s):=\int_0^s G(\tau)d\tau$, then $$\phi_{j,\xi,\eta,x}'(t):=\xi 2^{-jl}G(t)+\eta u(x)2^{-j}, \ \phi_{j,\xi,\eta,x}''(t):= \xi 2^{-jl}G'(t),\ \phi_{j,\xi,\eta,x}(t)=\xi 2^{-jl}\bar{G}(t)+\eta u(x)2^{-j}t.$$ Notice that $G(t)$ is monotone on the support of $\rho(t)$ since $j\in S_l$, and then we denote the inverse function of $G(t)$ by $f(t)$. Here the domains of definition of $G(t)$ and $f(t)$ can be extended to a larger interval such as $[\tilde{K}^{-1},\tilde{K}]$ for sufficiently large $\tilde{K}=\tilde{K}(N)$ because we can take $\digamma_N$ large enough in the definition of $S_l$ given in (\ref{sing}). \vskip.1in When $u(x)2^{-j}\eta/\xi 2^{-jl}<0$, we have $|\phi_{j,\xi,\eta,x}''(t)|\thicksim 2^m$, which implies that $\phi_{j,\xi,\eta,x}(t)$ has a unique critical point $t_{cx}$ satisfying \begin{equation}\label{cpo} G(t_{cx})=-\frac{u(x)2^{-j}\eta}{\xi 2^{-jl}},\ \ {\rm and}\ t_{cx}=t_{cx}(\xi,\eta):=f\big(-\frac{u(x)2^{-j}\eta}{\xi 2^{-jl}} \big).
\end{equation} The support of $\widehat{\Phi}$ gives that there exist two cases: $(1)\ \frac{u(x)2^{-j}\eta}{\xi 2^{-jl}}\thicksim -1 \ {\rm or}\ (2)\ \frac{u(x)2^{-j}\eta}{\xi 2^{-jl}}\thicksim 1.$ To give a precise analysis, we use $\widehat{\Phi}(z)=\widehat{\Phi}_+(z)+\widehat{\Phi}_-(z)$ to obtain \begin{equation*} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})= \sum_{(\omega_1,\omega_2,\omega_3)\in \{+,-\}^3} \widehat{\Phi}_{\omega_1}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}_{\omega_2}(\frac{\eta}{2^{k}}) \widehat{\Phi}_{\omega_3}(\frac{u(x)}{2^{j-k+m}}). \end{equation*} If $(\omega_1,\omega_2,\omega_3)\in \{ (-,-,+), (-,+,-), (+,+,+), (+,-,-)\}, $ integration by parts gives the $L^2$ estimate with bound $2^{-m}$ since $|\phi_{j,\xi,\eta,x}'(t)|\gtrsim 2^m$. It remains to bound other four cases given by $(\omega_1,\omega_2,\omega_3)\in \{ (-,+,+), (-,-,-), (+,-,+), (+,+,-)\}.$ Indeed, it suffices to show the case $(\omega_1,\omega_2,\omega_3)=(-,+,+)$ since other three cases can be treated similarly. That is, we assume \begin{equation}\label{jx} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})= \widehat{\Phi}_-(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}_+(\frac{\eta}{2^{k}}) \widehat{\Phi}_+(\frac{u(x)}{2^{j-k+m}}). \end{equation} In what follows, for convenience we omit the subscript $+$ and $-$ on the right side of (\ref{jx}), and one can keep $\frac{\xi}{2^{jl+m}}\sim -1,\ \ \frac{\eta}{2^{k}}\sim 1,\ \ \frac{u(x)}{2^{j-k+m}}\sim 1 $ in mind. Note $\phi_{j,\xi,\eta,x}''(t)\thicksim_l -2^m<0$ at this point. \vskip.1in Applying $\sum_{s\in\mathbb{Z}} \theta_+\big(\frac{G(t_{cx})}{2^s}\big)=1$, we have \begin{equation}\label{op} \int e( \phi_{j,\xi,\eta,x}(t))\rho(t)dt =\sum_{s\in\mathbb{Z}} \theta_+\big(\frac{G(t_{cx})}{2^s}\big)\int e( \phi_{j,\xi,\eta,x}(t))\rho(t)dt. \end{equation} Thanks to the above analysis, there exists a sufficiently large constant $C(N)$ such that RHS of (\ref{op}) with $\sum_{s\in\mathbb{Z}} $ replaced by $\sum_{|s|\ge C(N)}$ can be estimated by integration by parts, which yields the desired $L^2$ estimate with bound $2^{-m}$. Here we have used that $|s|\ge C(N)$ leads to $G(t_{cx})\gg1$ or $0<G(t_{cx})\ll1$ which yields $|\phi_{j,\xi,\eta,x}'(t)|\gtrsim 2^m$. So we call this part ``good part". 
Via the method of stationary phase, denote $\lambda:=2^m$ and $\phi_{j,\xi,\eta,x}^m(t):=2^{-m}\phi_{j,\xi,\eta,x}(t)$, we achieve \begin{equation}\label{Analy} \begin{aligned} M_{j,\Delta_4}(\xi,\eta) =&\ \sum_{m>0,k\in\mathbb{Z}} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})\int e(\lambda \phi_{j,\xi,\eta,x}^m(t))\rho(t)dt\\ =&\ \sum_{m>0,k\in\mathbb{Z}} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})\Big(\ {\rm good\ part}\\ &\ +\sum_{|s|\le C(N)} \theta_+\big(\frac{G(t_{cx})}{2^s}\big)\int e(\lambda \phi_{j,\xi,\eta,x}^m(t))\rho(t)dt\Big)\\ =&\ \sum_{m>0,k\in\mathbb{Z}} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}}) \Big\{\ {\rm good\ part}\ +{\rm \ main\ part\ } + {\rm\ error}\Big\}, \end{aligned} \end{equation} where $${\rm \ main\ part\ }:=\mathcal{C}\lambda^{-\frac{1}{2}} \frac{e(\lambda \phi_{j,\xi,\eta,x}^m(t_{cx}))} {|(\phi_{j,\xi,\eta,x}^m)''(t_{cx})|^{1/2}} \sum_{|s|\le C(N)} \theta_+\big(\frac{G(t_{cx})}{2^s}\big)\rho(t_{cx}),$$ $\mathcal{C}$ is a universal constant. The ``error" admits a better decay rate $\lambda^{-3/2}$ so that we can get its desired estimate by the Cauchy-Schwarz inequality. Precisely, we can get the associated $L^2$ estimate with bound $\lambda^{-1}=2^{-m}$. \vskip.1in In this section, with the assumption (\ref{jx}), we will prove the following two lemmas. For $m>0$, define $$\mathcal{H}_{\Delta_4,m}f(x,y)=\int_{\xi,\eta}\widehat{f}(\xi,\eta) e(\xi x+\eta y) \sum_{j\in S_l} M_{j,m,\Delta_4}(\xi,\eta)d\xi d\eta.$$ \begin{lemma}[$L^2$ estimate]\label{ll2} There exists a positive constant $\epsilon_0$ such that \begin{equation}\label{l2} \|\mathcal{H}_{\Delta_4,m}f\|_2\lesssim\ 2^{-\epsilon_0 m}\|f\|_2. \end{equation} \end{lemma} and \begin{lemma}[$L^p$ estimate]\label{llp} For all $p\in (1,\infty)$, we have \begin{equation}\label{lp} \|\mathcal{H}_{\Delta_4,m}f\|_p\lesssim\ m^2\|f\|_p. \end{equation} \end{lemma} Interpolating (\ref{l2}) with (\ref{lp}) yields the desired $L^p$ estimate of $\mathcal{H}_{\Delta_4,m}f$ and then the $L^p$ estimate of $\mathcal{H}_{\Delta_4}f$. Therefore, we only require to verify Lemma \ref{ll2} and Lemma \ref{llp}. Notice that we can assume $m\ge C_1(N)$ for sufficiently large $C_1(N)$. \subsection{Proof of Lemma \ref{ll2} ($L^2$ estimate)} Denote $\underline{\rho}(t_{cx}):= \theta_+\big(\frac{G(t_{cx})}{2^s}\big)\rho(t_{cx})$, define $\mathfrak{M}_{j,k,m,x}(\xi,\eta)$ by $$\mathfrak{M}_{j,k,m,x}(\xi,\eta):=2^{-\frac{m}{2}} \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}}) \frac{e( \phi_{j,\xi,\eta,x}(t_{cx}))} {|(\phi_{j,\xi,\eta,x}^m)''(t_{cx})|^{1/2}}\underline{\rho}(t_{cx}) $$ and denote $$T_{j,k,m}f(x,y):=\chi_{S_l}(j)\int_{\mathbb{R}^2}e(\xi x+\eta y)\widehat{f}(\xi,\eta) \mathfrak{M}_{j,k,m,x}(\xi,\eta)d\xi d\eta,$$ due to the analysis in (\ref{Analy}), applying the dual arguments and Littlewood-Paley theorem, we only need to show the following lemma. \begin{lemma}\label{l100} There exists a positive constant $\epsilon_1$ such that \begin{equation}\label{aim1} \|T_{j,k,m}f\|_2\lesssim\ 2^{-\epsilon_1 m}\|f\|_2. \end{equation} \end{lemma} The proof of Lemma \ref{l100} is postponed below. Assume (\ref{aim1}) holds. 
As a matter of fact, (\ref{l2}) equals that for all $g(x,y)\in L^2$ satisfying $\|g\|_2\le1$, $ |\big\langle \sum_{j\in S_l}\sum_{k\in\mathbb{Z}}T_{j,k,m}f,g\big\rangle| \lesssim\ 2^{-\epsilon_0 m}\|f\|_2. $ Using $$\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}})=\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}})\sum_{|j'-jl|\le 1}\sum_{|k'-k|\le1} \widehat{\Phi}(\frac{\xi}{2^{j'+m}}) \widehat{\Phi}(\frac{\eta}{2^{k'}}),$$ denote \begin{equation}\label{dfn1} \widehat{f_{jk}}(\xi,\eta)=\sum_{|j'-jl|\le 1}\sum_{|k'-k|\le1}\widehat{\Phi}(\frac{\xi}{2^{j'+m}}) \widehat{\Phi}(\frac{\eta}{2^{k'}})\widehat{f}(\xi,\eta), \end{equation} we have by H\"{o}lder's inequality \begin{equation}\label{FT} \begin{aligned} |\big\langle \sum_{j,k}T_{j,k,m}f,g\big\rangle| =&\ |\big\langle \sum_{j,k}T_{j,k,m}f,\ \sum_{|k'-k|\le 1}\widehat{\Phi}(\frac{u(x)}{2^{j-k'+m}}) (\tilde{\Phi}_k\ast_yg)\big\rangle|\\ =&\ |\big\langle \sum_{j,k}T_{j,k,m}(f_{jk}),\ \sum_{|k'-k|\le 1}\widehat{\Phi}(\frac{u(x)}{2^{j-k'+m}}) (\tilde{\Phi}_k\ast_yg)\big\rangle|\\ \lesssim&\ \big\|\big( \sum_{j,k}|T_{j,k,m}(f_{jk})|^2 \big)^{1/2}\big\|_2\ \big\|\Big[ \sum_{j,k} \sum_{|k-k'|\le 1}\widehat{\Phi}(\frac{u(x)}{2^{j-k'+m}})^2 |(\tilde{\Phi}_k\ast_yg)|^2\Big]^{1/2}\big\|_2. \end{aligned} \end{equation} Utilizing $$\sup_{k\in\mathbb{Z}}\sum_{j\in\mathbb{Z}}\sum_{|k-k'|\le 1}\widehat{\Phi}(\frac{u(x)}{2^{j-k'+m}})^2 \lesssim\ \sup_{k\in\mathbb{Z}} \sum_{j\in\mathbb{Z}}\widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})\lesssim 1,$$ by Littlewood-Paley theorem, we obtain $$\big\|\Big[ \sum_{j,k} \sum_{|k-k'|\le 1}\widehat{\Phi}(\frac{u(x)}{2^{j-k'+m}})^2 |\tilde{\Phi}_k\ast_yg|^2\Big]^{1/2}\big\|_2 \lesssim\ \|\big(\sum_{k\in\mathbb{Z}}|\tilde{\Phi}_k\ast_yg|^2\big)^{1/2}\big\|_2\lesssim\ \|g\|_2.$$ As for the first term on the RHS of (\ref{FT}), Fubini's theorem and (\ref{aim1}) give $$\big\|\big( \sum_{j,k}|T_{j,k,m}(f_{jk})|^2 \big)^{1/2}\big\|_2 \lesssim\ \big( \sum_{j,k}\|T_{j,k,m}(f_{jk})\big\|_2^2 \big)^{1/2} \lesssim\ 2^{-\epsilon_0 m}\big(\sum_{j,k}\|f_{jk}\|_2^2 \big)^{1/2}\lesssim\ 2^{-\epsilon_0 m}\|f\|_2.$$ We complete the proof of Lemma \ref{ll2} once Lemma \ref{l100} is proved. Next, our goal is to prove Lemma \ref{l100}. \begin{proof}[Proof of Lemma \ref{l100}] We first discretize the part associated to $u(x)$ in the phase function $\phi_{j,\xi,\eta,x}(t)$. Let $\varrho(x)$ be a non-negative smooth function which is supported in $[-3/4,3/4]$, and satisfies $\sum_{p_1\in\mathbb{Z}}\varrho(z-p_1)=1$ for all $z\in\mathbb{R}$. Denote \begin{equation}\label{denote} u_{j,k,m}(x):=\frac{u(x)}{2^{j-k+m}},\ \xi_{jlm}:=\frac{\xi}{2^{jl+m}},\ \eta_k:=\frac{\eta}{2^k}. \end{equation} Decompose $\widehat{\Phi}(u_{j,k,m}(x))$ as follows: \begin{equation}\label{de1} \begin{aligned} &\ \widehat{\Phi}(u_{j,k,m}(x)) =\sum_{p_1\thicksim 2^m}\widehat{\Phi}(u_{j,k,m}(x)) \varrho(u_{j,k,0}(x)-p_1),\ \ {\rm where}\\ &\ \{p_1\in \mathbb{Z}:\ p_1\thicksim 2^m\}=\{p_1\in\mathbb{Z}:\ 2^{m-1}\le p_1\le 2^{m+1}\}. \end{aligned} \end{equation} Denote $\widehat{\Phi_s}(\xi)=|\xi|^s\widehat{\Phi}(\xi)$ for $s\in\mathbb{R}$, write $$ \begin{aligned} &\ T_{j,k,m}f(x,y)=\ \chi_{S_l}(j) 2^{-\frac{m}{2}}u_{j,k,m}(x)^{-1/2}\\ &\ \times\sum_{p_1\thicksim 2^m} \int_{\mathbb{R}^2}e(\xi x+\eta y)\widehat{f}(\xi,\eta) \widehat{\Phi_{-\frac{1}{2}}}(\xi_{jlm}) \widehat{\Phi}(\eta_k) \varrho(u_{j,k,0}(x)-p_1) \frac{e(\phi_{j,\xi,\eta,x}(t_{cx})) }{|G'(t_{cx})|^{1/2}}\underline{\rho}(t_{cx}) d\xi d\eta. 
\end{aligned} $$ Because of the support of $\varrho$, we will use $p_1$ to approximate $u_{j,k,0}(x)$. In fact, recall $\bar{G}(s)=\int_0^s G(\tau)d\tau$ and $f=G^{-1}$, denote $$t_{cp_1}=t_{cp_1}(\xi,\eta):=f(-\frac{p_1\eta 2^{-k}}{\xi 2^{-jl}}),\ Q(s):=sf(-s)+\bar{G}\big(f(-s)\big),\ A(s):=\frac{\xi}{2^{jl}} Q(\frac{s\eta 2^{-k}}{\xi 2^{-jl}}),$$ we have $$ \begin{aligned} e\Big(\phi_{j,\xi,\eta,x}(t_{cx})\Big)=&\ e\Big(\xi 2^{-jl} Q\big(\frac{u(x)2^{-j}\eta}{\xi 2^{-jl}}\big)\Big) =\ e(A(u_{j,k,0}(x))-A(p_1))e(A(p_1)) \end{aligned} $$ Using the fundamental theorem of calculus, we have $$A(u_{j,k,0}(x))-A(p_1) =(u_{j,k,0}(x)-p_1)\int_0^1 A'(p_1+s(u_{j,k,0}(x)-p_1)) ds,$$ which yields for $\kappa\ge 0$ $$\big(A(u_{j,k,0}(x))-A(p_1)\big)^\kappa =\ \big(u_{j,k,0}(x)-p_1\big)^\kappa \Big(\int_0^1 A'(p_1+s(u_{j,k,0}(x)-p_1)) ds\Big)^\kappa.$$ Since $|u_{j,k,0}(x)-p_1|\lesssim1$ and $\|A'(z)\|_{L^\infty(z\thicksim 2^m)}\lesssim1$, we deduce by Taylor's expansion that $$e(A(u_{j,k,0}(x))-A(p_1)) =\sum_{\kappa=0}^\infty \frac{i^\kappa}{\kappa!} A_\kappa(u_{j,k,0}(x)-p_1, \frac{\xi}{2^{jl+m}},\frac{\eta}{2^k}), $$ for smooth functions $\{A_\kappa(\cdot,\cdot,\cdot)\}_\kappa$ satisfying there exists an absolute constant $\tilde{C_0}$ such that $$\|A_\kappa(\cdot,\cdot,\cdot)\|_{\mathcal{C}^2([-\frac{3}{4},\frac{3}{4}] \times (\frac{1}{2},2)^2)}\le \tilde{C_0}^\kappa,$$ where $\mathcal{C}^2$ means the standard H\"{o}lder space $\mathcal{C}^\iota$ with $\iota=2$. To get the desired result, it suffices to show the operator $$ \begin{aligned} T_{j,k,m,\kappa}f(x,y):=&\chi_{S_l}(j) 2^{-\frac{m}{2}}\sum_{p_1\thicksim 2^m}\varrho(u_{j,k,0}(x)-p_1) \int_{\mathbb{R}^2}e(\xi x+\eta y)\widehat{f}(\xi,\eta) \widehat{\Phi}(\xi_{jlm}) \widehat{\Phi}(\eta_k) e\big(A(p_1)\big)\\ &\ \times\Upsilon(\xi_{jlm}, \eta_k,u_{j,k,m}(x))A_\kappa(u_{j,k,0}(x)-p_1, \xi_{jlm}, \eta_k) d\xi d\eta, \end{aligned} $$ where $\Upsilon(\cdot,\cdot,\cdot)$ is a smooth function satisfying $\|\Upsilon(\cdot,\cdot,\cdot)\|_{\mathcal{C}^2} \lesssim1$, satisfies there exists an absolute $\tilde{C_1}>0$ such that $\|T_{j,k,m,\kappa}f(x,y)\|_2\lesssim\ \tilde{C_1}^\kappa2^{-\epsilon_0 m}\|f\|_2.$ Due to the properties of $A_\kappa$ and $\Upsilon$, we only focus on $\kappa=0$ since the estimates of the cases $\kappa\ge 1$ have similar bounds, which do not affect the convergence of the summation of $\kappa$. Now, we pay attention to the operator $$ \begin{aligned} &\ T_{j,k,m,0}f(x,y):= \mathcal{T}_{j,k,m}f(x,y)=\chi_{S_l}(j) 2^{-\frac{m}{2}}\sum_{p_1\thicksim 2^m}\varrho(u_{j,k,0}(x)-p_1)\\ &\ \times\int_{\mathbb{R}^2}e(\xi x+\eta y)\widehat{f_{j,k}}(\xi,\eta) \widehat{\Phi}(\xi_{jlm}) \widehat{\Phi}(\eta_k) e\big(A(p_1)\big) \Upsilon_1(u_{j,k,0}(x)-p_1, \xi_{jlm}, \eta_k) d\xi d\eta, \end{aligned} $$ for some smooth function $\Upsilon_1$ satisfying a similar $\mathcal{C}^2$ estimate as $\Upsilon$ and $A_0$. Here we have applied (\ref{dfn1}). Thus it is enough to prove \begin{equation}\label{aim2} \|\mathcal{T}_{j,k,m}f\|_2\lesssim\ 2^{-\epsilon_0 m}\|f\|_2. \end{equation} Taking the Fourier transform in $y$ variable, we have $$ \begin{aligned} &\ \mathcal{F}_y\{\mathcal{T}_{j,k,m}f\}(x,\eta)= \chi_{S_l}(j) 2^{-\frac{m}{2}}\sum_{p_1\thicksim 2^m}\varrho(u_{j,k,0}(x)-p_1)\\ &\ \int_{\mathbb{R}}e(\xi x)\widehat{f_{j,k}}(\xi,\eta) \widehat{\Phi}(\xi_{jlm}) \widehat{\Phi}(\eta_k) e\big(A(p_1)\big) \Upsilon_1(u_{j,k,0}(x)-p_1, \xi_{jlm}, \eta_k) d\xi. 
\end{aligned} $$ Plancherel's identity applied in the second variable gives that (\ref{aim2}) equals \begin{equation}\label{aim3} \|\mathcal{F}_y\{\mathcal{T}_{j,k,m}f\}\|_2\lesssim\ 2^{-\epsilon_0 m}\|f\|_2. \end{equation} To obtain (\ref{aim3}), we need further to discrete the $\xi_{jlm}$ and $\eta_k$. Using a variant of the previous process, that is, $$\widehat{\Phi}(\xi_{jlm}) =\sum_{w\thicksim -2^\frac{m}{2}}\widehat{\Phi} (\xi_{jlm}) \varrho(\xi_{jl\frac{m}{2}}-w),\ \widehat{\Phi}(\eta_k) =\sum_{v\thicksim 2^\frac{m}{2}}\widehat{\Phi} (\eta_k) \varrho(\eta_{k-\frac{m}{2}}-v), $$ where $v\thicksim 2^\frac{m}{2}$ and $w\thicksim -2^\frac{m}{2}$ is due to the properties of $\varrho(\cdot)$ and the assumption (\ref{jx}), we can see $\widehat{f_{jk}}(\xi,\eta)$ is supported in $$[(w-\frac{3}{4})2^{jl+\frac{m}{2}},(w+\frac{3}{4})2^{jl+ \frac{m}{2}}] \times\ [(v-\frac{3}{4})2^{k-\frac{m}{2}},(v+\frac{3}{4})2^{k- \frac{m}{2}}],$$ and can be extended to be a periodic function whose periods are $\frac{3}{2}2^{jl+\frac{m}{2}}$ for the first variable and $\frac{3}{2}2^{k-\frac{m}{2}}$ for the second variable. For convenience, in what follows we assume its periods are $2^{jl+\frac{m}{2}}$ for the first variable and $2^{k-\frac{m}{2}}$ for the second variable. Recall the notations (\ref{denote}), we have $$ \begin{aligned} &\ \ \widehat{f_{jk}}(\xi,\eta) \varrho(\xi_{jl\frac{m}{2}}-w) \varrho(\eta_{k-\frac{m}{2}}-v) =\ \sum_{(s,l_1)\in\mathbb{Z}^2} a_{s,l_1}^{w,v} e^{il_1\xi_{jl\frac{m}{2}}} e^{is\eta_{k-\frac{m}{2}}}. \end{aligned} $$ Denote $$\phi_{l_1,s}^{w,v}(\xi,\eta):= \varrho(\xi_{jl\frac{m}{2}}-w) \varrho(\eta_{k-\frac{m}{2}}-v) e^{il_1\xi_{jl\frac{m}{2}}} e^{is\eta_{k-\frac{m}{2}}}, $$ then \begin{equation}\label{w1} a_{s,l_1}^{w,v}:=\langle \widehat{f_{jk}},\phi_{l_1,s}^{w,v}\rangle_{{\L}^2}, \end{equation} where $\langle h,g\rangle_{{\L}^2}$ is defined by \begin{equation}\label{df} \langle h,g\rangle_{{\L}^2}:=2^{-(jl+k)}\langle h,g\rangle=2^{-(jl+k)}\int_{[0,2^{jl+\frac{m}{2}}]\times [0,2^{k-\frac{m}{2}}]} h(\xi,\eta) \bar{g}(\xi,\eta) d\xi d\eta. \end{equation} Thus we have $$ \begin{aligned} \widehat{f_{jk}}(\xi,\eta)=&\ \sum_{w\thicksim -2^\frac{m}{2}, v\thicksim 2^\frac{m}{2}} \sum_{s,l_1} a_{s,l_1}^{w,v} e^{il_1\xi_{jl\frac{m}{2}}} e^{is\eta_{k-\frac{m}{2}}}, \end{aligned} $$ which, with the support of $\varrho(\cdot)$ and the definition of $a_{s,l_1}^{w,v}$ (\ref{w1}), yields $$\|\widehat{f_{jk}}\|_2^2 =2^{jl+k}\sum_{w,v,s,l_1}|a_{s,l_1}^{w,v}|^2.$$ So we rewrite $\mathcal{F}_y\{\mathcal{T}_{j,k,m}f\}$ as $$ \begin{aligned} \mathcal{F}_y\{\mathcal{T}_{j,k,m}f\}(x,\eta):= &\chi_{S_l}(j) 2^{-\frac{m}{2}}\sum_{p_1\thicksim 2^m} \sum_{w,v,l_1,s} \langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle_{{\L}^2} A_{l_1,s}^{w,v,p_1}(x,\eta), \end{aligned} $$ where $$ \begin{aligned} A_{l_1,s}^{w,v,p_1}(x,\eta)&:= \int_{\mathbb{R}}e(\xi x) e^{il_1\xi_{jl\frac{m}{2}}} e^{is\eta_{k-\frac{m}{2}}}e\big(A(p_1)\big)\varrho(\xi_{jl\frac{m}{2}}-w) \varrho(\eta_{k-\frac{m}{2}}-v) \Upsilon_1(u_{j,k,0}(x)-p_1, \xi_{jlm}, \eta_k) d\xi. 
\end{aligned} $$ To obtain the $L^2$ estimate of $\mathcal{F}_y\{\mathcal{T}_{j,k,m}f\}(x,\eta)$, we first consider the estimate of $$ \begin{aligned} &\ \ \|\sum_{w,v,l_1,s} \langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle_{{\L}^2} A_{l_1,s}^{w,v,p_1}(x,\eta)\|_{L^2_{x,\eta}}^2\\ =&\ \sum_{w,v,l_1,s}\sum_{w',v',l_1',s'}\int_x\int_{\eta} \langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle_{{\L}^2} A_{l_1,s}^{w,v,p_1}(x,\eta)\overline{\langle \widehat{f_{j,k}},\phi_{l_1',s'}^{w',v'} \rangle_{{\L}^2} A_{l_1',s'}^{w',v',p_1}(x,\eta)} d\eta dx\\ =&\ \sum_{w,v,l_1,s} \sum_{w',l_1',s'} \langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle_{{\L}^2} \overline{\langle \widehat{f_{j,k}},\phi_{l_1',s'}^{w',v} \rangle_{{\L}^2}}\int_x \Omega (x) dx, \end{aligned} $$ where $$\Omega(x)=\int_\eta A_{l_1,s}^{w,v,p_1}(x,\eta) \overline{A_{l_1',s'}^{w',v,p_1}(x,\eta)}\ d\eta $$ and we have used the orthogonality of the support of $\Upsilon_1$ to absorb $\sum_{v'\thicksim 2^\frac{m}{2}}$. \vskip.1in Denote $\xi'_{jlm}=\frac{\xi'}{2^{jl+m}}$, $$ B^{w,v}(x,p_1,\xi,\eta)= \Upsilon_1(u_{j,k,0}(x)-p_1,\frac{\xi}{2^\frac{m}{2}}, \frac{\eta}{2^\frac{m}{2}})\varrho(\xi_{jl\frac{m}{2}}-w) \varrho(\eta_{k-\frac{m}{2}}-v), $$ then rewrite $\Omega(x)$ as $$ \begin{aligned} \Omega(x)=&\int_\eta\int_{\xi,\xi'} e((\xi-\xi')x) e^{il_1\xi_{jl\frac{m}{2}}-il_1'\xi'_{jl\frac{m}{2}}}e^{is\eta_{k-\frac{m}{2}}-is' \eta_{k-\frac{m}{2}}} e^{i\Big(\xi_{jl0} Q(\frac{p_12^{-m}\eta_k}{\xi_{jlm} })-\xi'_{jl0} Q(\frac{p_12^{-m}\eta_k}{\xi'_{jlm}} )\Big)}\\ &\ \ B^{w,v}(x,p_1,\xi_{jl\frac{m}{2}}, \eta_{k-\frac{m}{2}}) \overline{B^{w',v}(x,p_1,\xi'_{jl\frac{m}{2}}, \eta_{k-\frac{m}{2}})} \ d\xi d\xi'\ d\eta \end{aligned} $$ Changing the variable $\xi\rightarrow 2^{jl+\frac{m}{2}}\xi$, $\eta\rightarrow 2^{k-\frac{m}{2}}\eta$, $\xi'\rightarrow 2^{jl+\frac{m}{2}}\xi'$, rewrite $\Omega(x)$ as $$ \begin{aligned} \Omega(x)=&\ 2^{2jl+\frac{m}{2}+k}\int_\eta\int_{\xi,\xi'} e^{i2^{jl+\frac{m}{2}}(\xi-\xi')x} e^{il_1\xi +is\eta} e^{-il_1'\xi'-is'\eta} e^{i\Big(\xi2^\frac{m}{2} Q(\frac{p_1\eta_m}{\xi})-\xi'2^\frac{m}{2} Q(\frac{p_1\eta_m}{\xi'})\Big)}\\ &\ \ B^{w,v}(x,p_1,\xi, \eta) \overline{B^{w,v}(x,p_1,\xi', \eta)} d\xi d\xi' d\eta. \end{aligned} $$ Next, we give the estimate of $|\Omega(x)|$. Denote $$\Theta_{v,w,l_1}(x):=2^\frac{m}{2} (\bar{G}\circ f)(-\frac{p_1v }{{2^m}w})+2^{jl+\frac{m}{2}}x+l_1,$$ $$\Theta_{v,w,w',s,s'}(x):= \frac{p_1}{2^\frac{m}{2}}\big(f(-\frac{p_1v }{{2^m}w}) -f(-\frac{p_1v }{{2^m}w'})\big)+s-s',$$ we have the following lemma providing useful pointwise estimate of $\Omega(x)$. \begin{lemma}\label{l6.1} \begin{equation}\label{eo} \begin{aligned} |\Omega(x)| \lesssim&\ 2^{2jl+\frac{m}{2}+k} \varrho(\frac{u(x)}{2^{j-k}}-p_1) \frac{\min\{1,(\Theta_{v,w,l_1}(x))^{-2}\} \min\{1,(\Theta_{v,w',l_1'}(x))^{-2}\}} {1+(\Theta_{v,w,w',s,s'}(x))^2}. \end{aligned} \end{equation} \end{lemma} The proof is postponed in the section \ref{slemma}. We continue the proof of (\ref{aim1}). 
By using the support of $\Upsilon_1(u_{j,k,0}(x)-p_1,\xi_{jlm}, \eta_k)$ and the mean value theorem, we have $$ \begin{aligned} &\ \ \|\mathcal{F}_y\{ \mathcal{T}_{j,k,m}f\}(x,\eta)\|_{2}^2\\ \lesssim&\ \chi_{S_l}(j) 2^{-m}\sum_{p_1\thicksim 2^m} \sum_{w,v,l_1,s} \sum_{w',l_1',s'} \langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle_{{\L}^2} \overline{\langle \widehat{f_{j,k}},\phi_{l_1',s'}^{w',v} \rangle_{{\L}^2}}|\int_\mathbb{R} \Omega (x) dx|\\ \lesssim&\ 2^{2jl+\frac{m}{2}+k}\chi_{S_l}(j) 2^{-m} \sum_{w,v,l_1,s} \sum_{w',l_1',s'} \langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle_{{\L}^2} \overline{\langle \widehat{f_{j,k}},\phi_{l_1',s'}^{w',v} \rangle_{{\L}^2}}\\ &\ |\int_\mathbb{R} \varrho(\frac{u(x)}{2^{j-k+m}}) \frac{1}{1+(\tilde{\Theta}_{v,w,l_1}(x))^2} \frac{1}{1+(\tilde{\Theta}_{v,w',l_1'}(x))^2} \frac{1}{1+(\tilde{\Theta}_{v,w,w',s,s'}(x))^2} dx|, \end{aligned} $$ where $\tilde{\Theta}_{v,w,l_1}(x)$ and $\tilde{\Theta}_{v,w,w',s,s'}(x)$ are the previous $\Theta_{v,w,l_1}(x)$ and $\Theta_{v,w,w',s,s'}(x)$ with $p_1$ replaced by $\frac{u(x)}{2^{j-k}}$, respectively. This process is valid since the scale $2^m$ of $p_1$ is larger than the scale $2^{m/2}$ of $w$ or $v$. Recall (\ref{df}), denote $(\sum_{s}\langle \widehat{f},\phi_{l_1,s}^{w,v}\rangle^2)^\frac{1}{2} =C_{l_1}^{w,v},\ (\sum_{s'}\langle \widehat{f},\phi_{l_1',s'}^{w',v}\rangle^2)^\frac{1}{2} =C_{l_1'}^{w',v},$ we further obtain $$ \begin{aligned} \|\mathcal{F}_y\{ \mathcal{T}_{j,k,m}f\}(x,\eta)\|_{2}^2 \lesssim&\ 2^{-\frac{m}{2}}\chi_{S_l}(j) \sum_{w,v,l_1} \sum_{w',l_1'}2^{-k}\int_x \varrho(\frac{u(x)}{2^{j-k+m}}) \frac{1}{1+(\tilde{\Theta}_{v,w,l_1}(x))^2} \frac{1}{1+(\tilde{\Theta}_{v,w',l_1'}(x))^2}\\ &\ \Big\{\sum_{s,s'} \frac{1}{1+(\tilde{\Theta}_{v,w,w',s,s'}(x))^2} |\langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle| |\langle \widehat{f_{j,k}},\phi_{l_1',s'}^{w',v'} \rangle|\Big\}dx. \end{aligned} $$ The summation in brace is bounded by $$ \begin{aligned} &\ (\sum_{s}|\langle \widehat{f_{j,k}},\phi_{l_1,s}^{w,v} \rangle|^2)^\frac{1}{2} \Big(\sum_s\big(\sum_{s_1} \frac{1}{1+(\tilde{\Theta}_{v,w,w',s,s'}(x))^2} |\langle \widehat{f_{j,k}},\phi_{l_1',s'}^{w',v'} \rangle|\big)^2\Big)^\frac{1}{2} \lesssim\ C_{l_1}^{w,v} C_{l_1'}^{w',v}, \end{aligned} $$ where we have used Young's inequality for series. Consequently, $$ \begin{aligned} &\ \|\mathcal{F}_y\{ \mathcal{T}_{j,k,m,0}f\}(x,\eta)\|_{2}^2\\ \lesssim&\ 2^{-m/2-k}\chi_{S_l}(j)\sum_{w,v,l_1}\sum_{w',l'_1} \int_x \varrho(\frac{u(x)}{2^{j-k+m}}) \frac{1}{1+(\tilde{\Theta}_{v,w,l_1}(x))^2} \frac{1}{1+(\tilde{\Theta}_{v,w',l_1'}(x))^2}C_{l_1}^{w,v} C_{l_1'}^{w',v} dx\\ \lesssim&\ 2^{-m/2-k}\chi_{S_l}(j)\sum_{v\thicksim 2^\frac{m}{2}}\int_x \varrho(\frac{u(x)}{2^{j-k+m}}) (\sum_{w,l_1}\frac{1}{1+(\tilde{\Theta}_{v,w,l_1}(x))^2} C_{l_1}^{w,v} )^2 dx. \end{aligned} $$ Denote $\tilde{C}_{l_1}^{w,v}=2^{-\frac{jl+k}{2}}C_{l_1}^{w,v} $ then $ \|\mathcal{F}_y\{ \mathcal{T}_{j,k,m}f\}(x,\eta)\|_{2}^2 \lesssim\ \sum_{v\thicksim 2^\frac{m}{2}}\Re_{1v}^2, $ where \begin{equation}\label{key1} \Re_{1v}:=\Big(2^{jl-\frac{m}{2}} \int_x \chi_{S_l}(j) \varrho(\frac{u(x)}{2^{j-k+m}}) (\sum_{w,l_1}\frac{\tilde{C}_{l_1}^{w,v}}{1+(\tilde{\Theta}_{v,w,l_1}(x))^2} )^2 dx\Big)^\frac{1}{2}. \end{equation} To get the desired result, we need a lemma whose proof is based on Lemma \ref{p1} and postponed at the end of this subsection. 
\begin{lemma}\label{cha} There exists a constant $\epsilon_2>0$ such that \begin{equation}\label{chacha} \Re_{1v}^2\lesssim\ 2^{-\epsilon_2 m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2. \end{equation} \end{lemma} Thanks to this lemma, we complete the proof of Lemma \ref{l100} by Plancherel's identity and setting $\epsilon_1=\epsilon_2$. \end{proof} \begin{proof}[Proof of Lemma \ref{cha}] Recall $\tilde{\Theta}_{v,w,l_1}(x):=2^\frac{m}{2}(\bar{G}\circ f)(-\frac{u_{j,k,m}(x) v}{w})+2^{jl+\frac{m}{2}}x+l_1.$ Due to the definitions of $\bar{G}$ and $f$, it is not difficult to see that there exist two constants $c_3>0$ and $C_3>0$ such that \begin{equation}\label{decay1} c_3\le (\bar{G}\circ f)(-\frac{u_{j,k,m}(x) v}{w})\le C_3. \end{equation} Denote $I_{j\mathfrak{k}}=[\mathfrak{k}-\frac{1}{2},\mathfrak{k}+\frac{1}{2}]2^{-jl}$, then $\mathbb{R}=\sum_{\mathfrak{k}\in\mathbb{Z}}I_{j\mathfrak{k}}$. Then we split $\sum_{l_1\in\mathbb{Z}}$ in (\ref{key1}) into three parts, and further bound $\Re_{1v}^2$ by an absolute constant times the summation of $$\Re_{1v1}^2=2^{jl-\frac{m}{2}} \sum_{\mathfrak{k}\in\mathbb{Z}}\int_{I_{j\mathfrak{k}}} \chi_{S_l}(j) \varrho(\frac{u(x)}{2^{j-k+m}}) (\sum_{w\thicksim -2^\frac{m}{2}}\sum_{l_1<(-x2^{jl}-C_3) 2^\frac{m}{2}} \frac{\tilde{C}_{l_1}^{w,v}}{ 1+(\tilde{\Theta}_{v,w,l_1}(x))^2} )^2 dx,$$ $$\Re_{1v2}^2=2^{jl-\frac{m}{2}} \sum_{\mathfrak{k}\in\mathbb{Z}}\int_{ I_{j\mathfrak{k}}} \chi_{S_l}(j) \varrho(\frac{u(x)}{2^{j-k+m}}) (\sum_{w\thicksim -2^\frac{m}{2}}\sum_{l_1>(-x2^{jl} -c_3) 2^\frac{m}{2}} \frac{\tilde{C}_{l_1}^{w,v}}{ 1+(\tilde{\Theta}_{v,w,l_1}(x))^2} )^2 dx,$$ and $$\Re_{1v3}^2=2^{jl-\frac{m}{2}} \sum_{\mathfrak{k}\in\mathbb{Z}}\int_{ I_{j\mathfrak{k}}} \chi_{S_l}(j) \varrho(\frac{u(x)}{2^{j-k+m}}) (\sum_{w\thicksim -2^\frac{m}{2}}\sum_{-C_32^\frac{m}{2}\le l_1+2^{jl+\frac{m}{2}}x\le -c_3 2^\frac{m}{2}} \frac{\tilde{C}_{l_1}^{w,v}}{ 1+(\tilde{\Theta}_{v,w,l_1}(x))^2} )^2 dx.$$ For $\Re_{1v1}^2$, changing the variable $x\rightarrow 2^{-jl}(\mathfrak{k}+x)=:y_x$ gives $$ \begin{aligned} \Re_{1v1}^2=&\ 2^{-\frac{m}{2}} \int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j)\varrho(\frac{u(y_x)}{2^{j-k+m}}) \underbrace{\sum_{\mathfrak{k}\in\mathbb{Z}}(\sum_{w\thicksim -2^\frac{m}{2}}\sum_{l_1<(-\mathfrak{k}-x-C_3) 2^\frac{m}{2}} \frac{\tilde{C}_{l_1}^{w,v}}{ 1+(\tilde{\Theta}_{v,w,l_1}(y_x))^2} )^2}_{=:\Xi_0} dx. \end{aligned} $$ Let us denote $B_{x,k'}:=[-k'-1-x-C_3,-k'-x-C_3] 2^\frac{m}{2},$ so $\sharp\{l\in\mathbb{Z}:\ l\in B_{x,k'}\}\lesssim\ 2^\frac{m}{2}$, which implies $$ \begin{aligned} \Xi_0=&\ \chi_{S_l}(j)\sum_{\mathfrak{k}\in\mathbb{Z}}(\sum_{w\thicksim -2^\frac{m}{2}}\sum_{k'\ge \mathfrak{k}}\sum_{l_1\in B_{x,k'}} \frac{\tilde{C}_{l_1}^{w,v}}{ 1+(\tilde{\Theta}_{v,w,l_1}(y_x))^2} )^2 \lesssim\ \sum_{\mathfrak{k}\in\mathbb{Z}}(\sum_{k'\ge \mathfrak{k}} \frac{\sum_{w\thicksim -2^\frac{m}{2}}\sum_{l_1\in B_{x,k'}}\tilde{C}_{l_1}^{w,v}}{ 1+2^m|k'-\mathfrak{k}|^2} )^2\\ \lesssim&\ \sum_{\mathfrak{k}\in\mathbb{Z}}(\sum_{w\thicksim -2^\frac{m}{2}}\sum_{l_1\in B_{x,\mathfrak{k}}}\tilde{C}_{l_1}^{w,v})^2 \big(\sum_{\mathfrak{k}\ge 0}\frac{1}{1+2^m|\mathfrak{k}|^2}\big)^2 \lesssim\ \sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2, \end{aligned} $$ where we have used Young's inequality for series for the second inequality. Inserting this estimate into the above integral of $\Re_{1v1}^2$ yields $\Re_{1v1}^2\lesssim\ 2^{-m/2}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2. $ Similarly, we have $\Re_{1v2}^2\lesssim\ 2^{-m/2}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2. 
$ For the last term $\Re_{1v3}^2$, by the mean value theorem and change the variable $x\rightarrow 2^{-jl}(\mathfrak{k}+x):=y_x$ again, we obtain $$\begin{aligned} \Re_{1v3}^2 \lesssim&\ 2^{-\frac{m}{2}} \sum_{\mathfrak{k}\in\mathbb{Z}}\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x))\\ &\ \times(\sum_{w\thicksim -2^\frac{m}{2}}\sum_{ 2^{-\frac{m}{2}}l_1+x+\mathfrak{k}\thickapprox-1 } \frac{\tilde{C}_{l_1}^{w,v}}{ 1+\Big(2^\frac{m}{2}\big((G\circ \bar{G}^{-1})(-2^{-\frac{m}{2}}l_1-x-\mathfrak{k})+\frac{vu_{j,k,m}(x) }{w}\big)\Big)^2} )^2 dx. \end{aligned} $$ Here $2^{-\frac{m}{2}}l_1+x+\mathfrak{k}\thickapprox-1 $ means $-C_3\le2^{-\frac{m}{2}}l_1+x+\mathfrak{k}\le -c_3$ ($c_3$ and $C_3$ are defined as in (\ref{decay1})). Define $$\mathfrak{R}_m^{(1)}=[-C_3-1/2,C_3+1/2]\cap 2^{-\frac{m}{2}}\mathbb{Z},\ \ \mathfrak{R}_m^{(2)}=[-1,-1/2]\cap 2^{-\frac{m}{2}}\mathbb{Z},$$ we now only need to show that there exists $\delta_0>0$ such that for any measurable function $A_1(x)$ satisfying $|A_1(x)|\thicksim1$, \begin{equation}\label{dis} \tilde{R}\lesssim\ 2^{(1/2-\delta_0)m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2, \end{equation} where $$\tilde{R}:=\sum_{\mathfrak{k}\in\mathbb{Z}}\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x)) (\sum_{(l_1,w)\in \mathfrak{R}_m^{(1)}\times \mathfrak{R}_m^{(2)}} \frac{\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k})}^{2^{\frac{m}{2}}w,v} \chi_{\thickapprox-1}(l_1+x)}{ 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2} )^2 dx.$$ Let $\delta$ be a small enough positive constant depending only on $N$. Denote $$ \mathfrak{L}_{G,\epsilon}:= \{(l_1,w)\in \mathfrak{R}_m^{(1)}\times \mathfrak{R}_m^{(2)}:\ |\mathfrak{A}_{G,\epsilon}(l_1,w)|\le 2^{-2\delta m}\},\ \ \mathfrak{H}_{G,\epsilon}:=(\mathfrak{R}_m^{(1)}\times \mathfrak{R}_m^{(2)}) \setminus \mathfrak{L}_{G,\epsilon},$$ where $$\mathfrak{A}_{G,\epsilon}(l_1,w)=\{x\in [-\frac{1}{2},\frac{1}{2}]:\ |A_1(x)(G\circ \bar{G}^{-1})(x+l_1)+w^{-1}| \le 2^{-(1/2-2\delta)m}\}.$$ Thus in order to prove (\ref{dis}), it is enough to show there exist two positive constants $\delta_0'$ and $\delta_0''$ such that \begin{equation}\label{dis1} \tilde{R}^L\lesssim\ 2^{(1/2-\delta_0')m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2 \end{equation} and \begin{equation}\label{dis2} \tilde{R}^H\lesssim\ 2^{(1/2-\delta_0'')m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2, \end{equation} where $$\tilde{R}^L:=\sum_{\mathfrak{k}\in\mathbb{Z}}\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x)) (\sum_{(l_1,w)\in \mathfrak{L}_{G,\epsilon}} \frac{\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k})}^{2^{\frac{m}{2}}w,v} \chi_{\thickapprox-1}(l_1+x)}{ 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2} )^2 dx,$$ $$\tilde{R}^H:=\sum_{\mathfrak{k}\in\mathbb{Z}}\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x)) (\sum_{(l_1,w)\in \mathfrak{H}_{G,\epsilon}} \frac{\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k})}^{2^{\frac{m}{2}}w,v} \chi_{\thickapprox-1}(l_1+x)}{ 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2} )^2 dx.$$ {\bf $\bullet$ The estimate of (\ref{dis1})}\ \hskip.2in For the estimate of $\tilde{R}^L$, H\"{o}lder's inequality gives $$ \begin{aligned} \tilde{R}^L \lesssim&\ \sum_{\mathfrak{k}\in\mathbb{Z}}\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x)) \sum_{(l_1,w)\in \mathfrak{L}_{G,\epsilon}} \frac{|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k})}^{2^{\frac{m}{2}}w,v}|^2 \chi_{\thickapprox-1}(l_1+x)} { 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2}\\ 
&\ \sum_{(l_1',w')\in \mathfrak{L}_{G,\epsilon}} \frac{ \chi_{\thickapprox-1}(l_1'+x)} { 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1'+x)+(w')^{-1}\Big)^2} dx, \end{aligned} $$ which, with application of Fubini's theorem, implies the right side equals a constant times $$ \begin{aligned} &\ \sum_{\mathfrak{k}\in\mathbb{Z}}\sum_{(l_1,w)\in \mathfrak{L}_{G,\epsilon}}|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k}) }^{2^{\frac{m}{2}}w,v}|^2\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x)) \frac{ \chi_{\thickapprox-1}(l_1+x)} { 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2}\\ &\ \times \sum_{(l_1',w')\in \mathfrak{L}_{G,\epsilon}} \frac{ \chi_{\thickapprox-1}(l_1'+x)} { 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1'+x)+(w')^{-1}\Big)^2} dx\\ =&\ \sum_{\mathfrak{k}\in\mathbb{Z}}\sum_{(l_1,w)\in \mathfrak{L}_{G,\epsilon}}|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k}) }^{2^{\frac{m}{2}}w,v}|^2(\int_{\mathfrak{A}_{G,\epsilon}(l_1,w)} \cdot+\int_{[-\frac{1}{2},\frac{1}{2}]\setminus \mathfrak{A}_{G,\epsilon}(l_1,w)}\cdot) \end{aligned} $$ Thanks to the definition of $\mathfrak{L}_{G,\epsilon}$, the first term is majorized by $$ \begin{aligned} \lesssim&\ 2^\frac{m}{2} \sum_{\mathfrak{k}\in\mathbb{Z}}\sum_{(l_1,w)\in \mathfrak{L}_{G,\epsilon}}|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k}) }^{2^{\frac{m}{2}}w,v}|^2|\mathfrak{A}_{G,\epsilon}(l_1,w)| \lesssim\ 2^{(1/2-2\delta)m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2, \end{aligned} $$ where we have used the definition of $\mathfrak{L}_{G,\epsilon}$ and \begin{equation}\label{222} \sum_{w'\in \mathfrak{R}_m^{(2)}}\frac{\chi_{S_l}(j)}{ 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+(w')^{-1}\Big)^2} \lesssim 1 \end{equation} for the second inequality and the first inequality, respectively. For the second term, $x\in [-\frac{1}{2},\frac{1}{2}]\setminus \mathfrak{A}_{G,\epsilon}(l_1,w)$ yields $$1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2\ge 2^{2m\delta}.$$ As a result, $$\sum_{\mathfrak{k}\in\mathbb{Z}}\sum_{(l_1,w)\in \mathfrak{L}_{G,\epsilon}}|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k}) }^{2^{\frac{m}{2}}w,v}|^2\int_{[-\frac{1}{2},\frac{1}{2}]\setminus \mathfrak{A}_{G,\epsilon}(l_1,w)}\cdot \lesssim\ 2^{(1/2-2\delta)m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2. $$ This completes the proof of (\ref{dis1}) by setting $\delta_0'=2\delta$.\\ {\bf $\bullet$ The estimate of (\ref{dis2})}\ \hskip.2in For the estimate of $\tilde{R}^H$, we need Lemma \ref{p1}. Denote $$E_1:=\{l_1\in \mathfrak{R}_m^{(1)}:\ \exists\ w\in \mathfrak{R}_m^{(2)},\ s.t.\ (l_1,w)\in \mathfrak{H}_{G,\epsilon}\},$$ applying Lemma \ref{p1} with $B(x,t)=A_1(x)(G\circ \bar{G}^{-1})(t)$, we obtain \begin{equation}\label{key0} \sharp E_1\lesssim\ 2^{(\frac{1}{2}-\nu_0)m} \end{equation} for a small positive $\nu_0=\nu_0(N)$. 
By H\"{o}lder's inequality, we have $$ \begin{aligned} \tilde{R}^H\lesssim&\ \sum_{\mathfrak{k}\in\mathbb{Z}}\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x))\\ &\ \times (\sum_{l_1\in E_1}\sum_{w\in \mathfrak{R}_m^{(2)}} \frac{\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k})}^{2^{\frac{m}{2}}w,v} \chi_{\thickapprox-1}(l_1+x)}{ 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2} )^2 dx\\ \lesssim&\ \sum_{\mathfrak{k}\in\mathbb{Z}}\sum_{l_1\in E_1}\sum_{w\in \mathfrak{R}_m^{(2)}}|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k}) }^{2^{\frac{m}{2}}w,v}|^2\int_{-\frac{1}{2}}^\frac{1}{2} \chi_{S_l}(j) \varrho(u_{j,k,m}(y_x))\\ &\ \times \Big(\sum_{l_1\in E_1}\sum_{w\in \mathfrak{R}_m^{(2)}} \frac{ \chi_{\thickapprox-1}(l_1+x)}{ 1+2^m\Big(A_1(x)(G\circ \bar{G}^{-1})(l_1+x)+w^{-1}\Big)^2} \Big)dx, \end{aligned} $$ which is majorized by a constant multiplied by $$ \begin{aligned} &\ \sharp E_1\sum_{\mathfrak{k}\in\mathbb{Z}}\sum_{l_1\in E_1}\sum_{w\in \mathfrak{R}_m^{(2)}}|\tilde{C}_{2^\frac{m}{2}(l_1-\mathfrak{k}) }^{2^{\frac{m}{2}}w,v}|^2 \lesssim\ 2^{(1/2-\nu_0)m}\sum_{w, l_1}|\tilde{C}_{l_1}^{w,v}|^2 \end{aligned} $$ via (\ref{222}) and (\ref{key0}). This completes the proof of (\ref{dis2}) by setting $\delta_0''=\nu_0$. \end{proof} \subsection{Proof of Lemma \ref{llp} ($L^p$ estimate)} In this subsection, we prove Lemma \ref{llp}. Denote $$M_{j,k,m,\Delta_4}(\xi,\eta):= \widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{\eta}{2^{k}}) \widehat{\Phi}(\frac{u(x)}{2^{j-k+m}})\int e^{i\phi_{j,\xi,\eta,x}(t)}\rho(t)dt,$$ and $$\bar{T}_{j,k,m}f(x,y):=\int_{\xi,\eta}\widehat{f}(\xi,\eta) e(\xi x+\eta y) M_{j,k,m,\Delta_4}(\xi,\eta) d\xi d\eta,$$ then $$\mathcal{H}_{\Delta_4,m}f(x,y)=\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \bar{T}_{j,k,m}f(x,y).$$ By duality, it suffices to show that for all $g\in L^{p'}(\mathbb{R}^2)$, \begin{equation}\label{dd2} |\langle\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \bar{T}_{j,k,m}f,g \rangle|\lesssim\ m^2 \|f\|_p\|g\|_{p'}. \end{equation} Rewrite $\bar{T}_{j,k,m}f(x,y)$ as $$ \begin{aligned} \bar{T}_{j,k,m}f(x,y)=&\ \int_\mathbb{R} (f*_x \Phi_{jl+m}*_y\Phi_k) (x-P(2^{-j}t),y-u(x)2^{-j}t)\rho(t)dt. \end{aligned} $$ Recalling the definition of $\Psi_k(\cdot)$ in (\ref{dfn1}), the LHS of (\ref{dd2}) equals $$ \begin{aligned} &\ |\langle\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \bar{T}_{j,k,m}f, \sum_{|j'-j|\le1}\widehat{\Phi}(\frac{u(x)}{2^{j'-k+m}})\ (g*_y {\Psi}_k) \rangle|\\ \lesssim&\ \|\Big(\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} |\bar{T}_{j,k,m}f|^2\Big)^\frac{1}{2}\|_p \|\Big(\sum_{j\in S_l}\sum_{k\in\mathbb{Z}} \sum_{|j'-j|\le1}\widehat{\Phi}(\frac{u(x)}{2^{j'-k+m}})^2 |g*_y {\Psi}_k|^2\Big)^\frac{1}{2}\|_{p'}. \end{aligned} $$ Since $\sum_{j\in S_l}\sum_{|j'-j|\le1}\widehat{\Phi}(\frac{u(x)}{2^{j'-k+m}} )^2\lesssim1,$ as in the previous process, we bound the second term by a constant times $\|\Big(\sum_{k\in\mathbb{Z}} |g*_y \tilde{\Phi}_k|^2\Big)^\frac{1}{2}\|_{p'}$, which is $\lesssim \|g\|_{p'}.$ Using the previous approach yielding (\ref{a55}) with $r=0$ and $m=n$, we deduce that the first term is $\lesssim m^2\|f\|_p$. This completes the proof of Lemma \ref{llp}.
\vskip.1in \section{Proof of Lemma \ref{l6.1}} \label{slemma} \begin{proof}[Proof of Lemma \ref{l6.1}] Denote $W_{\eta,x}(\xi):=\xi2^\frac{m}{2} Q(\frac{p_1\eta_m}{\xi })+2^{jl+\frac{m}{2}}x\xi+l_1\xi,$ $$I_x(\eta):=\int_\xi e^{iW_{\eta,x}(\xi)} B^{w,v}(x,p_1,\xi, \eta) d\xi,\ \ \mathcal{I}_x(\eta):=\int_{\xi'} e^{iW_{\eta,x}(\xi')} B^{w,v}(x,p_1,\xi', \eta) d\xi', $$ then $ \begin{aligned} \Omega(x)=&\ 2^{2jl+\frac{m}{2}+k} \int_\eta I_x(\eta) \overline{\mathcal{I}_x(\eta)} e^{i(s-s')\eta} d\eta. \end{aligned} $ Direct computations lead to $$W_{\eta,x}'(\xi)=2^\frac{m}{2} (\bar{G}\circ f)(-\frac{p_1\eta_m}{\xi})+2^{jl+\frac{m}{2}}x+l_1. $$ Due to the support of $ B^{w,v}(x,p_1,\xi, \eta)$, we have $|I_x(\eta)|\lesssim1$ and there exists an absolute constant $\bar{C}>0$ such that \begin{equation}\label{gap} |W_{\eta,x}'(\xi)-W_{v,x}'(w)|\le \bar{C}. \end{equation} If $|W_{v,x}'(w)|\ge\ 2\bar{C}$, we have $|W_{\eta,x}'(\xi)|\ge\ \bar{C}.$ Integrating by parts twice, we obtain $$ \begin{aligned} I_x(\eta)=&\ \int_\xi \frac{1}{iW_{\eta,x}'(\xi)}\frac{d}{ d\xi}(e^{iW_{\eta,x}(\xi)}) B^{w,v}(x,p_1,\xi, \eta) d\xi\\ =&\ -\int_\mathbb{R} e^{iW_{\eta,x}(\xi)} \frac{d}{d\xi}\big(\frac{B^{w,v}(x,p_1,\xi, \eta)}{iW_{\eta,x}'(\xi)}\big)d\xi\\ =&\ \int_\mathbb{R} e^{iW_{\eta,x}(\xi)} \frac{\Xi^{w,v}(x,p_1,\xi,\eta)}{(W_{\eta,x}'(\xi))^2}d\xi. \end{aligned} $$ Here $\|\Xi^{w,v}(x,p_1,\xi,\cdot)\|_{\mathcal{C}^2}\lesssim1.$ Via Taylor's expansion $\frac{1}{(1-z)^2}=\sum_{k=1}^\infty kz^{k-1}$ for $|z|<1$, it follows that \begin{equation}\label{kp} \frac{1}{\big(W_{\eta,x}'(\xi)\big)^2} =\frac{1}{\big(W_{v,x}'(w)\big)^2} \frac{1}{\big(1-\frac{W_{v,x}'(w)-W_{\eta,x}'(\xi) }{W_{v,x}'(w)}\big)^2} =\frac{1}{\big(W_{v,x}'(w)\big)^2}\sum_{k=1}^\infty k \big(\frac{W_{v,x}'(w)-W_{\eta,x}'(\xi)}{W_{v,x}'(w)} \big)^{k-1}. \end{equation} Plugging (\ref{kp}) into $I_x(\eta)$ gives $$I_x(\eta)=\frac{1}{\big(W_{v,x}'(w)\big)^2}\int_\mathbb{R} e^{iW_{\eta,x}(\xi)}\ \sum_{k=1}^\infty k\ \Xi^{w,v}_k(x,p_1,\xi,\eta)d\xi,$$ where $$\Xi^{w,v}_k(x,p_1,\xi,\eta)=\Xi^{w,v}(x,p_1,\xi,\eta) \big(\frac{W_{v,x}'(w)-W_{\eta,x}'(\xi)}{W_{v,x}'(w)}\big)^{k-1}.$$ It suffices to estimate the contribution of $\Xi^{w,v}_k(x,p_1,\xi,\eta)$ for each $k\ge 1$ individually, since the sufficiently large lower bound of $|W_{v,x}'(w)|$ together with (\ref{gap}) implies that $\frac{W_{v,x}'(w)-W_{\eta,x}'(\xi)}{W_{v,x}'(w)}$ is small enough to absorb the summation over $k$. We only show the case $k=1$ since the other terms can be treated analogously. \vskip.1in Rewrite $I_x(\eta)$ as $$I_x(\eta)=\frac{1}{\big(W_{v,x}'(w)\big)^2}\int_\mathbb{R} e^{iW_{\eta,x}(\xi)} \Xi^{w,v}_1(x,p_1,\xi,\eta)d\xi.$$ Following the above arguments line by line yields $$\mathcal{I}_x(\eta)=\frac{1}{ \big(W_{v,x}'(w')\big)^2}\int_\mathbb{R} e^{iW_{\eta,x}(\xi')} \Xi^{w',v}_1(x,p_1,\xi',\eta)d\xi'$$ when $|W_{v,x}'(w')|\ge\ \bar{C}$. Therefore, when $|W_{v,x}'(w)|\ge\ \bar{C}$ and $|W_{v,x}'(w')|\ge\ \bar{C}$ hold, we arrive at $$ \begin{aligned} \Omega(x)=&\ 2^{2jl+\frac{m}{2}+k}\frac{1}{\big(W_v'(w')\big)^2}\frac{1}{\big(W_v'(w)\big)^2} \int_\eta \int_{\xi,\xi'} e^{iW_\eta(\xi)}e^{iW_\eta(\xi')} \\ &\ \Xi^{w,v}_1(x,p_1,\xi,\eta) \Xi^{w',v}_1(x,p_1,\xi',\eta)d\xi d\xi' e^{i(s-s')\eta}d\eta.
\end{aligned} $$ Denote $$\Xi^{w,w',v}_2(x,p_1,\xi,\xi',\eta):= \Xi^{w,v}_1(x,p_1,\xi,\eta) \Xi^{w',v}_1(x,p_1,\xi',\eta)$$ and $$W_{\xi,\xi',s-s_1}(\eta):= \xi2^\frac{m}{2} Q(\frac{p_1\eta_m}{\xi })-\xi'2^\frac{m}{2} Q(\frac{p_1\eta_m}{\xi' }),$$ we have \begin{equation}\label{FE1} \begin{aligned} |\Omega(x)|\lesssim&\ 2^{2jl+\frac{m}{2}+k}\frac{1}{\big(W_v'(w')\big)^2}\frac{1}{\big(W_v'(w)\big)^2} \int_{\xi,\xi'}|\underbrace{\int_\eta e^{iW_{\xi,\xi',s-s_1}(\eta)} \Xi^{w,w',v}_2(x,p_1,\xi,\xi',\eta)d\eta}_{Os_1}|d\xi d\xi', \end{aligned} \end{equation} where we have absorbed the linear terms in $\xi$ and $\xi'$ in the phase function by taking the absolute value in (\ref{FE1}). We immediately have $$\frac{d}{d\eta}(W_{\xi,\xi',s-s_1}(\eta)) =\frac{p_1}{2^\frac{m}{2}}f(-\frac{p_1\eta_m}{\xi}) -\frac{p_1}{2^\frac{m}{2}}f(-\frac{p_1\eta_m}{\xi'})+s-s' =\Theta_{\eta,\xi,\xi',s,s'}(x). $$ Along the same lines as in the derivation of (\ref{FE1}), with the trivial estimate $|Os_1|\lesssim \varrho(\xi-w) \varrho(\xi'-w')\varrho(\frac{u(x)}{2^{j-k}}-p_1)$ and an application of the process used to deduce (\ref{kp}), we have $$|Os_1|\lesssim \frac{1}{1+|\Theta_{v,\xi,\xi',s,s'}(x)|^2}\varrho(\xi-w) \varrho(\xi'-w')\varrho(\frac{u(x)}{2^{j-k}}-p_1).$$ Thus, when $|W_{v,x}'(w)|\ge\ \bar{C}$ and $|W_{v,x}'(w')|\ge\ \bar{C}$, we obtain $$ \begin{aligned} |\Omega(x)|\lesssim&\ 2^{2jl+\frac{m}{2}+k} \frac{1}{\big(W_v'(w')\big)^2}\frac{1}{\big(W_v'(w)\big)^2} \frac{1}{1+|\Theta_{v,w,w',s,s'}(x)|^2} \varrho(\frac{u(x)}{2^{j-k}}-p_1). \end{aligned} $$ As for the cases $|W_{v,x}'(w)|\le\ \bar{C}$ or $|W_{v,x}'(w')|\le\ \bar{C}$, we can get the corresponding bound with $\frac{1}{\big(W_v'(w)\big)^2}$ or $\frac{1}{\big(W_v'(w')\big)^2}$ replaced by $1$, respectively. This ends the proof of Lemma \ref{l6.1}. \end{proof} \vskip.2in \section{The maximal operator $M^\Gamma$ and the related Carleson type operator $\mathcal{C}^\Gamma$} \label{me} In this section, we give a remark on the uniform $L^p$ estimates of $M^\Gamma$ and a related Carleson type operator $\mathcal{C}^\Gamma$ given by $$\mathcal{C}^\Gamma f(x)=p.v.\int_\mathbb{R} f(x-P(t))e^{iu(x) t}\frac{dt}{t},$$ whose proofs are similar to those for $\mathcal{H}^\Gamma$. \subsection{The uniform $L^p$ estimate of $M^\Gamma$} Recall the definition of $M^\Gamma$ given by $$M^\Gamma f(x,y)=\sup_{\epsilon>0}\frac{1}{2\epsilon} \int_{-\epsilon}^\epsilon |f(x-P(t),y-u(x)t)|dt.$$ Utilizing $$\frac{1}{2\epsilon}\int_{-\epsilon}^\epsilon |f(x-P(t),y-u(x)t)|dt =\frac{1}{2}\frac{1}{\epsilon}\int_{-\frac{\epsilon}{2} }^{\frac{\epsilon}{2}} |f(x-P(t),y-u(x)t)|dt+ \frac{1}{2\epsilon}\int_{\frac{\epsilon}{2}\le|t|\le \epsilon} |f(x-P(t),y-u(x)t)|dt,$$ we obtain $M^\Gamma f(x,y)\le \frac{1}{2}M^\Gamma f(x,y)+C{\bf M}^\Gamma f(x,y),$ where $C$ is a uniform constant and ${\bf M}^\Gamma f(x,y)$ is defined by $$ {\bf M}^\Gamma f(x,y)=\sup_{j\in\mathbb{Z}} \int |f(x-P(t),y-u(x)t)| \theta_j(t) dt.$$ Here $\theta_j(t)=2^j\theta(2^jt)$ in which $\theta$ is defined as in Section \ref{s2}. As a result, it suffices to show that $\|{\bf M}^\Gamma f\|_p\lesssim_N\|f\|_p,$ where the input function $f$ may be assumed to be non-negative.
Now, we rewrite ${\bf M}^\Gamma f(x,y)$ using the Fourier inversion formula, that is, ${\bf M}^\Gamma f(x,y)=\sup_{j\in\mathbb{Z}}\mathcal{H}_jf(x,y), $ where $ \mathcal{H}_jf(x,y)=\int_{\xi,\eta}\widehat{f}(\xi,\eta) e(\xi x+\eta y) M_j(\xi,\eta)d\xi d\eta$ and $$M_j(\xi,\eta):=\int e(\phi_{j,\xi,\eta,x}(t))\theta(t)dt,\ \phi_{j,\xi,\eta,x}(t):=\xi P(2^{-j}t)+\eta u(x)2^{-j}t.$$ Following the estimate of $\mathcal{H}^\Gamma f(x,y)$ line by line leads to the desired result (in fact, the estimate of ${\bf M}^\Gamma f(x,y)$ is easier since it does not involve a summation over $j$). \subsection{The uniform $L^p$ estimate of $\mathcal{C}^\Gamma$} By linearization, $\mathcal{C}^\Gamma f$ can also be defined by $$\mathcal{C}^\Gamma f(x)=\sup_{u\in\mathbb{R}}|p.v.\int_\mathbb{R} f(x-P(t))e^{iu t}\frac{dt}{t}|,$$ which can be seen as a variant of the classical Carleson maximal operator; we refer to \cite{Car66,L19,L20,SW01}. It follows via (\ref{de1}) that $\mathcal{C}^\Gamma f(x)=\sum_{j\in\mathbb{Z}} \mathcal{C}_jf(x),$ where $\mathcal{C}_jf(x)$ is defined by $\mathcal{C}_jf(x)=\int f(x-P(t))e^{iu(x) t}\rho_j(t) dt.$ We now rewrite $\mathcal{C}_j f(x)$ as $ \mathcal{C}_jf(x)=\int_{\xi}\widehat{f}(\xi) e(\xi x) M_j(\xi)d\xi$ where $$M_j(\xi):=\int e(\phi_{j,\xi,x}(t))\rho(t)dt,\ \phi_{j,\xi,x}(t):=\xi P(2^{-j}t)+ u(x)2^{-j}t.$$ Using a new decomposition $ \sum_{(m,n)\in \mathbb{Z}^2}\widehat{\Phi}(\frac{\xi}{2^{jl+m}}) \widehat{\Phi}(\frac{u(x)}{2^{j+n}})=1, $ and then carrying out a similar process to the one yielding the uniform estimate of $\mathcal{H}^\Gamma$, we can obtain the uniform estimate of $\mathcal{C}^\Gamma$ as well. Actually, without the summation over $k$ in the decomposition, the proof is easier. \section{Appendix} \label{app} \begin{lemma}[\cite{L19}]\label{la1} Let $n,\tilde{N},K\in \mathbb{N}$ with $n,K\le \tilde{N}$. Assume we are given sets $\{I_l\}_{l=1}^{\tilde{N}}$ such that $I_l\subset [-1/2,1/2]$ and $|I_l|\ge K^{-1}$ hold whenever $1\le l\le \tilde{N}$. Then, if $\tilde{N}\ge 2M^nn^n$, there exists a subset $\mathrm{S}\subset \{1,\cdots,\tilde{N}\}$ such that $\sharp \mathrm{S}=n$ and the measure of $\cap_{l\in \mathrm{S}}I_l$ is greater than $2^{-1}K^{-n}$. \end{lemma} Here the interval $[-1/2,1/2]$ can be replaced by $[-\tilde{\mathfrak{C}},\tilde{\mathfrak{C}}]$ for an arbitrary uniform constant $\tilde{\mathfrak{C}}\gtrsim 1$. \vskip.1in The shifted maximal operator $M^{[\sigma]}$ is defined by $$M^{[\sigma]}f (z):=\sup_{z\in I\subset \mathbb{R}}\frac{1}{|I|} \int_{I^{(\sigma)}}|f(\xi)|d\xi,$$ where $I^{(\sigma)}$ is given by $[a+\sigma |I|,b+\sigma|I|]$ if $I=[a,b]$. \vskip.1in Next, we introduce the estimate of $M^{[n]}f$ in $L^p$ and the vector-valued estimate of $\{M^{[n]}f_k\}_k$ in $L^p(l^q)$. \begin{lemma}[\cite{M14,S93}]\label{ssme} Let $1<p<\infty$. Then we have \begin{equation*}\label{ssme1} \|M^{[n]}f\|_p \lesssim\ \log (2+|n|) \|f\|_p. \end{equation*} Here the constant hidden in $``\lesssim"$ is independent of $|n|$ and $f$. \end{lemma} \begin{lemma}[\cite{GHLJ}]\label{sme} Let $1<p<\infty$ and $1<q\le \infty$. Then we have \begin{equation}\label{sme1} \Big\|\big(\sum_{k\in\mathbb{Z}}|M^{[n]}f_k|^q\big)^\frac{1}{q}\Big\|_p \lesssim\ \log^2(2+|n|) \Big\|\big(\sum_{k\in\mathbb{Z}}|f_k|^q\big)^\frac{1}{q}\Big\|_p. \end{equation} Here the constant hidden in $``\lesssim"$ is independent of $|n|$ and $f$. \end{lemma} \vskip.1in We need a special case of Theorem 1.1 in \cite{GLY}.
Let $\vec{T}$ be given as \begin{equation}\label{111} \vec{T}(F)(y):=\int_\mathbb{R}\vec{K}(y,s)(F(s))ds, \end{equation} where the kernel $\vec{K}$ satisfies H\"{o}rmander's conditions, i.e., there exists a positive $C_H$ such that for all $s,z\in \mathbb{R}$, \begin{equation}\label{112} \int_{|y-s|>2|s-z|}\|\vec{K}(y,s)-\vec{K}(y,z)\|_{l^2(\mathbb{Z})\rightarrow l^2(\mathbb{Z}^2)} dy\le C_H, \end{equation} and for all $x,w\in\mathbb{R}$, \begin{equation}\label{113} \int_{|x-y|>2|x-w|} \|\vec{K}(x,y)-\vec{K}(w,y)\|_{l^2(\mathbb{Z})\rightarrow l^2(\mathbb{Z}^2)} dy\le C_H. \end{equation} \begin{thm}[\cite{GLY}]\label{a.2} Assume that $\vec{T}$ defined by (\ref{111}) is a bounded linear operator from $L^r(\mathbb{R},l^2(\mathbb{Z}))$ to $L^r(\mathbb{R},l^2(\mathbb{Z}^2))$ for some $r\in (1,\infty)$ with norm $A_r>0$. Assume that $\vec{K}$ satisfies (\ref{112}) and (\ref{113}) for some $C_H>0$. Then $\vec{T}$ has well-defined extensions on $L^p(\mathbb{R},l^2(\mathbb{Z}))$ for all $p\in (1,\infty)$. Moreover, whenever $1<p<\infty$, for all $F\in L^p(l^2(\mathbb{Z}))$, $$\|\vec{T}(F)\|_{L^p(l^2(\mathbb{Z}^2))} \le \max\{p,(p-1)^{-1}\}(C_H+A_r)\|F\|_{L^p(l^2(\mathbb{Z}))}.$$ \end{thm} \vskip.1in \end{document}
For some positive integer $n,$ $0 < n < 180,$ \[\csc (2^3)^\circ + \csc (2^4)^\circ + \csc (2^5)^\circ + \dots + \csc (2^{2019})^\circ = \sec n^\circ.\]Find $n.$ Note that \begin{align*} \cot x - \cot 2x &= \frac{\cos x}{\sin x} - \frac{\cos 2x}{\sin 2x} \\ &= \frac{2 \cos^2 x}{2 \sin x \cos x} - \frac{2 \cos^2 x - 1}{2 \sin x \cos x} \\ &= \frac{1}{2 \sin x \cos x} \\ &= \frac{1}{\sin 2x} \\ &= \csc 2x. \end{align*}Hence, summing over $x = (2^2)^\circ,$ $(2^3)^\circ,$ $(2^4)^\circ,$ $\dots,$ $(2^{2018})^\circ,$ we get \begin{align*} &\csc (2^3)^\circ + \csc (2^4)^\circ + \csc (2^5)^\circ + \dots + \csc (2^{2019})^\circ \\ &= (\cot (2^2)^\circ - \cot (2^3)^\circ) +(\cot (2^3)^\circ - \cot (2^4)^\circ) + (\cot (2^4)^\circ - \cot (2^5)^\circ) + \dots + (\cot (2^{2018})^\circ - \cot (2^{2019})^\circ) \\ &= \cot 4^\circ - \cot (2^{2019})^\circ. \end{align*}Note that $2^{14} \equiv 2^2 \pmod{180},$ so \[2^{2019} \equiv 2^{2007} \equiv 2^{1995} \equiv \dots \equiv 2^{15} \equiv 32768 \equiv 8 \pmod{180},\]so $\cot (2^{2019})^\circ = \cot 8^\circ.$ Then \[\cot 4^\circ - \cot 8^\circ = \csc 8^\circ = \sec 82^\circ,\]so $n = \boxed{82}.$
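As a quick numerical sanity check (not part of the original solution), the sum can be evaluated directly in floating-point arithmetic and compared with the claimed closed form $\sec 82^\circ$; the short Python sketch below assumes nothing beyond the problem statement.

```python
import math

# Sum csc(2^k degrees) for k = 3, ..., 2019, reducing each exponent mod 360
# so that the angle handed to sin() stays small.
total = sum(1.0 / math.sin(math.radians(pow(2, k, 360))) for k in range(3, 2020))

# Claimed closed form: sec(82 degrees).
target = 1.0 / math.cos(math.radians(82))

print(total, target)                # both are approximately 7.1853
print(abs(total - target) < 1e-9)   # True
```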
Math Dataset
Asia-Pacific Journal of Atmospheric Sciences, pp 1–20
Evaluation of Supercell Storm Triggering Factors Based on a Cloud Resolving Model Simulation
Vlado Spiridonov, Mladjen Ćurić
First Online: 06 February 2019
An attempt has been made in this study to examine the conditional instability parameters in the selected area and to determine the main ingredients responsible for the initiation and evolution of the supercell storm over Skopje, Macedonia on 6 August 2016. WRF model forecasts provide the basic meteorological parameters for cloud model initialization and detailed information about the atmospheric instability potential as a triggering factor for severe convection. The cloud model simulation has been performed with very fine spatial and temporal resolution, capable of resolving the detailed aspects of convection. The results obtained with this novel method suggest that upper-level lifting, moisture advection, large CAPE, near-surface convergence and increased potential vorticity in the selected area play a substantial role in the early assessment of the atmospheric state, convective instability and storm potential. In addition, the directional wind shear (veering) in the near-surface layer, a high storm helicity index and the differential heating induced by the strong local forcing environment serve as triggering factors for the initiation of a supercell storm with rotational updrafts and a mesocyclone. The cloud model simulation with fine resolution allows a more detailed insight into the storm dynamics and into the mechanism of generation of rotational updrafts and the mesocyclone, the hook echo signature and the presence of a bounded weak echo region as ingredients of supercellular structure and evolution. The overshooting top of 15 km, peak updraft speed of 40 m/s, wind gusts of 35 m/s and reflectivity exceeding 70 dBZ indicate the occurrence of a very severe storm. The longer life cycle of the storm and the intense water production, with an extreme rainfall rate of 38 mm/5 min, contribute to the formation of excessive torrential rainfall and local catastrophic flooding.
Keywords: Severe convection, Cloud model, Supercell storm, Flash-flooding, Ingredients
Responsible Editor: Ben Jong-Dao Jou.
Supercell storms represent the most complex and violent type of all storms. They are usually associated with locally excessive precipitation and flash floods, gusty winds, large hail and even tornadoes (e.g. Browning 1962, 1964). The main triggering factors for their occurrence are the presence of substantial wind shear with low-level veering, and large CAPE and LI (Doswell III et al. 1996). In addition to these, the storm helicity in the layer between 0 and 3000 m, differential heating and the local forcing environment are also important factors for the initiation of severe convection and supercells. Among the different supercell storm types, the HP (heavy-precipitation) supercells are responsible for heavy rainfall; they are characterized by the formation of quasi-steady, deep rotating updrafts (a mesocyclone), a longer life cycle and greater severity. An important aspect is the evolution of storms and processes such as splitting or merging, which lead to the intensification of convection and the occurrence of heavy rainfall and hailfall (e.g. Ćurić et al. 2009; Spiridonov et al. 2010; Ćurić and Janc 2012). Thunderstorm forecasting is one of the most problematic issues in numerical weather prediction as a result of the small spatial and temporal scales involved (Warner 2011).
The increase in computer resources has made it possible to use a finer spatiotemporal resolution of less than 5 km, thus allowing the explicit treatment of convection and ignoring its parameterization. Many studies have focused on using high-resolution convection-permitting simulations to study the heavy precipitation processes related to severe storms (e.g. Klemp 2006; Litta and Mohanty 2008; Litta et al. 2011; Spiridonov and Ćurić 2015). Some useful results and conclusions regarding the sensitivity of idealized supercell simulations can be found in the study by Potvin and Flora (2015). The evaluation of idealized cloud model settings and sensitivity tests reveals that decreasing the horizontal grid spacing from 4 to 1 km gives a more reliable simulation of storm characteristics. However, the runs with the finest grid spacing raise the question of the importance of sophisticated data assimilation of high-resolution spatiotemporal observations, as suggested by Sun et al. (2010), of convective-scale data assimilation with the non-hydrostatic convection-permitting COSMO model (Baldauf et al. 2011), or of employing an ensemble filter (e.g. Poterjoy et al. 2017; Jacques et al. 2017), which is required for improved forecasts on smaller domains. In some very severe convective cases of a specific nature, models will not be sufficient for an accurate forecast of the storm's structural and evolutionary properties and of heavy rainfall, even with a suitable configuration and fine horizontal grid resolution. In such cases local-scale forcing plays a crucial role in storm initiation and evolution.
2 The Main Motivation
The main motivation of the present research is to evaluate the main triggering factors responsible for the initiation of the very severe mid-latitude convective cloud of 6 August 2016 in Skopje. In the evening hours of 6 August 2016 the wider urban and rural area of Skopje was under the influence of extremely severe weather, with catastrophic consequences associated with torrential rainfall, gusty winds, strong thunderstorms, lightning and flash flooding. During this disastrous heavy rainfall event 23 people lost their lives, with huge material and economic damage. The first initiation of convective clouds started around the noon hours in the western mountain regions of Macedonia, as a result of an approaching upper-level low and the presence of cyclonic circulation. The model includes orography, and its influence is taken into account in two ways. The first influence is due to the differential warming of the sunny slopes and the shaded slopes (i.e., the northern slopes) of the mountain. This contributes to the occurrence of additional rotation in the vortex pair of the Cb cloud. This effect is parameterized by the method developed by Ćurić and Janc (2012). The second effect is the consequence of the channeled movement of cold air that emanates from the base of the Cb cloud and moves along the trough. This causes forced lifting of the warm air along the leading edge of the cold air. The forced vertical velocity is calculated by the method of Lompar et al. (2017). In the south-western flow aloft there is transport of moist air from the southern parts of the Adriatic and Ionian Seas into the western and north-western parts, which contributes to the development of unstable weather conditions across Macedonia.
The specific thermal features of the atmosphere during the current day, due to intensive heating, formation of warm air trough in the Skopje valley and the topography of the terrain generated additional favorable conditions for developing of severe thunderstorms. The most intensive processes occur in the afternoon hours, when several convective cells are initiated in the convective instability line extended over north-western parts of Macedonia, which is successively transferred in organized Mesoscale Convective System (MCS) or cluster. Finally, during the evening hours the convective processes reached the most intensive phase of evolution turning into very severe supercell storm. Considering the negative consequences of this catastrophic weather phenomenon we found very important to study in more detail and to answer on some important questions about the reasons for occurrence of such disastrous case. Thus, our main motivation is to evaluate the main storm ingredients as triggering factors for initiation and evolution of supercell storm and examination of the physical processes responsible for production of heavy rainfall and flooding. 2.1 Cloud Model Overview The cloud model is a 3-D non-hydrostatic, compressible time-dependent, model with dynamic scheme from Klemp and Wilhelmson (1978), thermodynamics proposed by Orville and Kopp (1977), and bulk microphysical parameterization scheme (see Appendix 1) according to Lin et al. (1983). It uses a single-moment scheme for the six water categories: water vapour, cloud water, cloud ice, rain, snow and graupel or hail. Cloud water and cloud ice are assumed to be monodisperse, with zero terminal velocities. Rain, hail and snow have the Marshal-Palmer type size distributions with fixed intercept parameters. The source reference for the scheme to allow for the coexistence of cloud water and cloud ice in the temperature region of −40°C to 0°C is Hsie et al. (1980). The microphysical production terms are given in Appendix 1. The present version of the model contains ten prognostic equations: three momentum equations, the pressure and thermodynamic equations, four continuity equations for the water substances, and a subgrid-scale kinetic energy equation. More information regarding a cloud model could be found in Telenta and Aleksic (1988), and Spiridonov and Curic (2005), Barth et al. (2007). Boundary conditions are specified along all sides of the integration domain, since the computations take place within a finite domain. Along the top and bottom of the model domain the vertical velocity ω is set to zero. Lateral boundaries are open and time-dependent, so the disturbances can pass through with minimal reflection. Model equations are solved on a staggered Arakawa C grid (Arakawa and Lamb 1977). The horizontal and vertical advection terms are calculated by the centered fourth and second-order finite differences, respectively. Since the model equations are compressible, a time-splitting procedure is applied in order to achieve numerical stability. With this procedure the sound wave terms are solved separately using a smaller time step, while all other processes are treated with a larger time step Δt. The scalar prognostic equations, except for pressure, are stepped from t − Δt to t + Δt by a single leapfrog step. The terms that are not responsible for sound wave generation in the equations of motion and pressure are calculated at the central time level t. 
Finally, the wind and pressure prognostic variables are stepped forward from t − Δt to t + Δt, with forward time differencing on the small time step. A high-resolution Weather Research and Forecasting Non-hydrostatic Mesoscale Model, WRF-NMM (see Janjic 2003), developed at the National Centers for Environmental Prediction (NCEP), has been employed to forecast this severe convective case. The model is configured to resolve deep convection at this scale, so the sub-grid cumulus parameterization is omitted. The entire grid system contains 38 layers with a terrain-following hybrid sigma coordinate, and the model top is located at 50 hPa. The model uses the WSM6 microphysics developed by Hong and Lim (2006), the Noah Land Surface Model scheme operationally used at NCEP, and the Yonsei University (YSU) first-order scheme (Hong et al. 2006) with an explicit entrainment layer and a parabolic K-profile in the unstable mixed layer. The high-resolution run has been initialized using initial and 3-hourly lateral boundary conditions taken from the NCEP GFS global longitude-latitude grid with 0.25-degree resolution, interpolated to WRF-NMM model domain 1 shown in Fig. 1a. The cloud model is initialized from the WRF conditions. The initial vertical profiles of the meteorological parameters (potential temperature, specific moisture, and the u and v horizontal velocity components) within domain 2 are obtained from the high-resolution WRF model forecasts. Fig. 1b shows the upper-air sounding profile for the Skopje case valid on 6 August 2016 at 1200 UTC. The forecasted upper-air sounding shown in Fig. 1b exhibits a low-level moisture deficit due to a temperature inversion, increased moisture content aloft and strong directional wind shear in the near-surface layer. The cloud model run depends on the evaluated instability criteria and the defined threshold values. A warm ellipsoidal thermal bubble with a minimal temperature perturbation at its centre was used to initiate convection. Numerous cloud model simulations showed that an average temperature perturbation of 0.2 °C is suitable, in a highly unstable atmosphere, to trigger a severe convective storm. Three-dimensional (3-D) runs were performed within a small domain of size 61 × 61 × 20 km3 that covers the urban area of the city of Skopje. The horizontal and vertical grid resolutions are both 250 m. The time step of the model is 1 s, and a smaller step of 0.5 s is used for solving the sound waves. The forecast period is 7200 s.
Fig. 1 a) The WRF model domain 1 and the cloud model domain 2. b) The upper-air sounding profile obtained from the WRF model output at the lat/lon of Skopje, positioned in the centre of the cloud model domain
2.2 Method
The general principle of the method is to utilize the capabilities of both the Weather Research and Forecasting (WRF) model and the cloud resolving model in determining the main physical processes and triggering factors responsible for the development of severe convection. More specifically, the proposed approach is focused on the definition of ingredients for the initiation of supercell storms. This is achieved by performing high-resolution WRF forecasts with explicit treatment of convection, avoiding cumulus parameterization. The hourly model output data are then used both for the determination of the convective instability parameters and for cloud model initialization. This allows a better initial meteorological input for the initiation of convection with a small temperature perturbation, which is less influenced by the modeler.
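As an illustration of the thermal-bubble initialization described above, the sketch below constructs an ellipsoidal warm bubble on a grid matching the quoted cloud-model domain (61 × 61 × 20 km at 250 m spacing) with a 0.2 °C peak perturbation. The cosine-squared radial profile and the bubble radii are assumptions made for illustration only; the paper does not specify the exact functional form used in the model.

```python
import numpy as np

# Grid matching the cloud-model domain quoted in the text: 61 x 61 x 20 km at 250 m spacing
dx = 0.25                                    # grid spacing in km
x = np.arange(0.0, 61.0 + dx, dx)
y = np.arange(0.0, 61.0 + dx, dx)
z = np.arange(0.0, 20.0 + dx, dx)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij", sparse=True)

def warm_bubble(X, Y, Z, x0, y0, z0, rx, ry, rz, dtheta_max=0.2):
    """Ellipsoidal potential-temperature perturbation (K) with a cos^2 radial profile."""
    r = np.sqrt(((X - x0) / rx) ** 2 + ((Y - y0) / ry) ** 2 + ((Z - z0) / rz) ** 2)
    return np.where(r < 1.0, dtheta_max * np.cos(0.5 * np.pi * r) ** 2, 0.0)

# Illustrative bubble: centred in the domain, with assumed 10 x 10 x 1.5 km radii
theta_pert = warm_bubble(X, Y, Z, 30.5, 30.5, 1.5, 10.0, 10.0, 1.5)
print(theta_pert.shape, float(theta_pert.max()))   # full 3-D field, peak 0.2 K
```

In a run of the kind described here, such a perturbation field would simply be added to the WRF-derived potential-temperature profile at the initial time step.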
A three-dimensional numerical run has been performed with very fine spatiotemporal grid resolution. This suitable model configuration allows the convective processes to be resolve in more detail as key factor for understanding the structure and intensity of convective storm that could be initiated and developed under such thermodynamics and environmental conditions representative of a small volume of atmosphere. The basic supercell storm ingredients (e.g. upper low and transport of moist air, a strong low-level convergence zone, differential heating and the local strong forcing environment) for a specific domain are determined from high resolution WRF forecasts. Cloud resolving model provides an additional information about the potential for supercell storm occurrence. The rotational updrafts and downdrafts and development of meso-cyclone serve as primarily indicator for supercell initiation. The examination of microphysical parameters together with radar reflectivity fields gives more information about the life cycle of storm, its internal structure and severity, water production through various processes and assessment of the relative intensity of heavy convective rainfall. 3.1 WRF Forecast of the Basic Ingredients for Severe Convection Our study starts with a brief mesoscale analysis of the atmosphere and the processes which trigger mesoscale instability in Macedonia on 6 August 2016. As first in Fig. 2 we show the geopotential, temperature and wind at 500 and 850 hPa, relative humidity at 700 hPa and surface wind vectors at 10 m height. One sees approaching of upper low pressure system from west with corresponding favorable dynamical aspect of baroclinity, moist air inflow at 700 hPa and low-level convergence. A WRF-NMM 2.5 km grid forecast provides a basic physical ingredients involved in initiation of severe convection (e.g. moisture, instability, wind shear, and lifting). The advection of warm and moist air during the day may has favored the development of thunderstorms, especially along the north-western area due to orographic lifting. The severe storms which propagated towards Skopje valley in the period between 1600 and 1800 UTC are initiated over the southwestern mountain at about the same time when the advection of moist air with high θe-values takes place. The fact, that storms are initiated over the complex terrain indicates that the combination of warm/moist air and orographic induced uplift played an important role. The forecast correctly captured the area with a most intense dynamics and the atmospheric disturbance. Absolute and potential vorticity displayed on Fig.3 provide some general insights of the spin of air parcel. While there is no specific signs of distribution of absolute vorticity, potential vorticity (PV) shows some indications in respect to dynamics of convective scale weather and potential for initiation of severe storms especially in north-western portion of the model domain where positive and negative spins of the air parcels are noted. Enhanced low-level convergence zone as result of low level moisture advection is well adjusted with the area of positive potential vorticity. As the air converges it maintains the potential vorticity, thus increases vertical updrafts leading to formation of stretched ring vortex. In addition we have also shown the distribution of atmospheric column Brunt-Vaisala frequency, which indicates that the atmosphere is statically very unstable. Other parameter important to severe thunderstorm development is wind shear. 
Based on the modeled results, a significantly vertically-sheared environment is identified over the northern part, which suggests that severe storms could be expected. The storm-relative helicity (SRH = 300 m2/s2), as a measure of the amount of rotation found in a storm's updraft and of the potential for cyclonic updraft rotation, indicates very favorable conditions for supercell development over the north-western part and the urban and rural areas of the Skopje valley. Another important conditional instability parameter for the formation of a thunderstorm is the surface Convective Available Potential Energy (CAPE). CAPE is a measure of the amount of energy available for convection. While much of the northern part of Macedonia is under moderate CAPE values, the south-western, mainly mountainous part shows high CAPE, ranging from 2000 to 2500 J/kg, and a Lifted Index (LI) of 6 to 8 °C, indicating that the storms will be very severe. In addition, we considered it important to examine the modeled cloud-top brightness temperature as an important ingredient of severe convection. Fig. 4 shows the distribution of the forecasted top brightness temperature on 6 August from 1500 to 1800 UTC. One sees three major convective cores with temperatures lower than −50 °C. These patterns indicate the areas where deep convection develops. The modeled distributions of brightness temperature coincide well with the satellite image with respect to location, timing and temperature values. This is a good indicator of the initiation of very severe convective clouds with overshooting tops. This quasi-stationary convective system with many convective cells reached maturity and produced extreme rainfall over the Skopje valley, causing flash flooding. One of the key parameters for determining the storm structure, evolution and intensity is the radar reflectivity. The maximum radar reflectivity fields from 15:00–18:00 UTC are displayed in Fig. 5. The convective system, with a group of organized cells, extends from the south-western region, moving towards the east-northeast and passing over the city of Skopje. Around 16:00 UTC a supercell storm evolves from this system on its frontal flank, with reflectivity of about 65 dBZ. The strongest reflectivity signature agrees well with the forecast heavy precipitation area positioned over the Skopje valley. As can be seen from Fig. 6, the period of most intense rainfall occurs from 1600 to 1700 UTC and corresponds well with the maximum radar reflectivity patterns. During that period high rainfall rates are evident, with relative intensities ranging from 25 to 30 mm/h. All these ingredients indicate the appearance of a heavy-precipitation supercell storm with extreme rainfall over the urban area of Skopje. In summary, the WRF-NMM forecast provides good initial information about the mesoscale atmospheric processes and instabilities acting as triggering factors for the initiation of severe convection. An upper-level low pressure system, moisture advection and high energy were favorable factors for atmospheric destabilization and the occurrence of severe convection. The distributions of moisture, instability, lift and the wind field serve as primary factors and have a profound influence on the convective storm type and supercell development. The existence of many other ingredients (e.g. low-level convergence, vorticity) and the localized strong forcing environment induced by the topography of the terrain and the differential heating represent key ingredients for the temperature perturbation and the initiation of deep moist convection.
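To make the CAPE and Lifted Index diagnostics quoted above concrete, here is a deliberately simplified sketch of how both can be computed once the lifted-parcel temperature profile is known. The idealized parcel and environment profiles, the trapezoidal integration and the choice of the 500 hPa level index are assumptions for illustration; operational calculations use full moist thermodynamics and virtual-temperature corrections.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def cape(z, t_env, t_parcel):
    """Surface-based CAPE (J/kg): vertical integral of positive parcel buoyancy.

    z        : heights (m), increasing
    t_env    : environmental temperature profile (K)
    t_parcel : temperature of the lifted parcel at the same heights (K)
    """
    buoy = np.maximum(G * (t_parcel - t_env) / t_env, 0.0)   # keep the positive area only
    return float(np.sum(0.5 * (buoy[1:] + buoy[:-1]) * np.diff(z)))  # trapezoid rule

def lifted_index(t_env_500, t_parcel_500):
    """Lifted Index (K): environment minus lifted parcel near the 500 hPa level."""
    return t_env_500 - t_parcel_500

# Toy profiles (assumed numbers, only to exercise the functions)
z = np.linspace(0.0, 12000.0, 121)
t_env = 300.0 - 6.5e-3 * z            # 6.5 K/km environmental lapse rate
t_parcel = 302.0 - 6.0e-3 * z         # slightly warmer, more slowly cooling parcel
k500 = 55                             # ~5.5 km, roughly the 500 hPa level in this toy profile
print(round(cape(z, t_env, t_parcel)), "J/kg,", round(lifted_index(t_env[k500], t_parcel[k500]), 1), "K")
```

With these toy numbers the CAPE comes out in the low thousands of J/kg, comparable in magnitude to the 2000–2500 J/kg quoted above for the south-western part of the domain.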
a) WRF-NMM 2.5 km grid forecast for 6 August 2016 12:00 UTC: a) 500 hPa geopotential height (gpdm), temperature (°C) and wind (m/s); b) RH at 700 hPa level; c) same as (a) but for 850 hPa; d) MSL pressure (hPa), temperature (°C) and wind (m/s); e) equivalent potential temperature θe (K); and f) specific moisture at the surface (g/kg) WRF-NMM 2.5 km grid forecast for 6 August 2016 15:00 UTC: a) absolute vorticity potential vorticity (s-1); b) atmospheric column potential vorticity (mass-weight) (1/s/m); c) atmospheric column Brunt-Vaisala frequency (1/s2); d) atmospheric column vertical wind shear (1/s); e) Storm relative helicity (m2/s2) 0-1000 m (contour) and (0-3000 m) shaded; f) Surface Convective Available Potential Energy CAPE (J/kg) and Lifted Index LI (°C) WRF-NMM 2.5 km grid forecast of top of atmospheric brightness temperature (°C) for 6 August 2016 from 15:00 to 17:00 UTC WRF-NMM 2.5 km grid forecast of the maximum radar reflectivity (dBZ) for 6 August 2016 from 15:00 to 18:00 UTC WRF-NMM 2.5 km grid forecast of hourly accumulated precipitation (kg/m2) for 6 August 2016 from 15:00 to 18:00 UTC 3.2 A Three-Dimensional Numerical Simulation A supercell storm developed over Skopje was a long-lived (greater than 1 h) associated with a strong winds, heavy rainfall and hailfall, weak-to-strong tornado and flash flooding causing large material damages even more the human losses. All above forecasted derived parameters provide a good initial information about the thermodynamic status of atmosphere as basic triggering factors for development of severe convection. A more detail insight in the physical processes responsible for initiation of supercell storm is provided by employment of a cloud model that is initialized from WRF-NMM vertical profile data inserted in the cloud model domain. Fig. 7 shows some initial results from a three-dimensional numerical simulation, associate with a complex storm dynamics and microphysics. We present cloud sequences at the most intense phase of evolution of the storm life cycle when rotating updrafts developed with mesocyclone and existed for a few hours which was critical aspect for the flash flooding occurrence. The strong updrafts regions are usually compensated with downdrafts. For better illustration the conceptual model of mesocyclone is also shown. In respect to microphysics, the upper portion of simulated storm contains ice crystals and snow particles, while in the middle part supercooled water, graupel or hail and rainwater coexist. The sub-cloud area has a two zones with heavy rainfall. A more detail evaluation of the storm structural and evolutionary processes is given in the next topic. A three-dimensional evolution of the storm depicted on Fig. 8 during simulation time provides a more detail insight in storm structure and severity. The light grey belts delineate the areas with cloud ice and the supercooled water content, dark grey color shows the regions with formation of graupel or hail stones, yellowish color which is viewed at the upper portion of storm in the characteristic anvil area, at the negative temperatures, marks the snow crystals, and the green areas display the heavy rainfall zones. Undoubtedly is the fact that the three-dimensional depiction of the convective cloud system over Skopje gives realistic structural picture and intensity, as well as the type of this supercell storm. a) Vertical wind vectors in convective storm on 6 August 2016; b) The microphysical structure of severe storm at 20 min of the simulation time. 
c) The conceptual model of mesocyclone. Source: The Pennsylvania State University-Department of Meteorology. d) Vertical profile of updrafts and downdrafts regions within simulated storm A three-dimensional view of supercell storm over Skopje on 6 August 2016 viewed from SW-NE during most intense stage of evolution 3.3 Supercell Storm Based Ingredients Our research continues with examination of the main ingredients responsible for initiation of supercell convection and the physical processes participate in production of heavy rainfall and flooding. That is achieved by using a 3-d cloud model with fine space and temporal grid resolution which is sufficient to resolve the convective processes in more detail. The initial analyses imply on occurrence supercell storm with rotating updrafts and mesocyclone. The mesocyclone which developed over the Skopje valley represents the vorticity system, with strong rotating updrafts of air, formed into powerful supercell storm. To verify and document the significance of occurrence of mesocyclone we have calculated the vertical component of vorticity equation and all terms included in the vorticity equation in order to determine the important contributors for initiation and formation of supercell storm. Actually, we have examine the relative magnitudes of the each term in the vertical component of vorticity equation in (x, y, z, and t) given by the following relation: $$ \frac{\partial \zeta }{\partial t}=-u\frac{\partial \zeta }{\partial x}-v\frac{\partial \zeta }{\partial y}-w\frac{\partial \zeta }{\partial z}-v\frac{\partial f}{\partial y}-\left(\zeta +f\right)\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right)-\left(\frac{\partial w}{\partial x}\frac{\partial v}{\partial z}-\frac{\partial w}{\partial y}\frac{\partial u}{\partial z}\right)+\left(\frac{\partial {F}_{ry}}{\partial x}+\frac{\partial {F}_{rx}}{\partial y}\right)+\frac{1}{\rho^2}\left(\frac{\partial \rho }{\partial x}\frac{\partial p}{\partial y}-\frac{\partial \rho }{\partial y}\frac{\partial p}{\partial x}\right) $$ Here \( \frac{\partial \zeta }{\partial t} \) denotes the rate of change of the relative vorticity at a grid point. The other terms on the r.h.s. of the Eq. (1) represent: the zonal advection, meridional, vertical transfer of relative vorticity, advection of planetary vorticity, divergence term, tilting or twisting, friction and solenoidal term, respectively. Fig. 9 shows relative vorticity distribution at a different heights. Note that blue curves delineate increase rate of change of relative vorticity and red curves indicate decrease rate of change of relative vorticity. At the surface there is a narrow region with positive vorticity as the result of relatively high convergence at the near surface layer. There are also a few isolated divergence zones which contribute to decrease vorticity. A dramatic increase in vorticity at all levels is evidenced, mainly as the result of the PVA (zonal advection) at 1.5 km height and convergence, meridional advection and vertical transfer of relative vorticity at 3.0 km. (see Fig. 10). There is also contribution coming from the rate of change of the vertical velocity in the horizontal direction which impairs a tilting and spinning effect, although the rates are much lower compared to the other terms. The distribution of relative vorticity at 3.0 km height during simulation time is depicted on Fig. 11. 
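The relative magnitudes of the terms in the vorticity budget, Eq. (1), can be estimated directly from gridded wind fields by finite differences. The sketch below is an illustrative implementation only (it assumes a constant Coriolis parameter and omits the friction and solenoidal terms), not the diagnostic code used for the figures; the synthetic wind fields at the bottom are likewise assumptions.

```python
import numpy as np

def vorticity_budget(u, v, w, dx, dy, dz, f=1.0e-4):
    """Finite-difference estimate of the leading terms of the vertical-vorticity
    tendency equation (Eq. 1) for 3-D gridded u, v, w with index order (z, y, x).
    Friction and solenoidal terms are omitted and f is held constant."""
    ddx = lambda a: np.gradient(a, dx, axis=2)
    ddy = lambda a: np.gradient(a, dy, axis=1)
    ddz = lambda a: np.gradient(a, dz, axis=0)

    zeta = ddx(v) - ddy(u)                             # relative vertical vorticity
    return {
        "zonal_advection":      -u * ddx(zeta),
        "meridional_advection": -v * ddy(zeta),
        "vertical_transfer":    -w * ddz(zeta),
        "divergence_term":      -(zeta + f) * (ddx(u) + ddy(v)),
        "tilting_term":         -(ddx(w) * ddz(v) - ddy(w) * ddz(u)),
    }

# Tiny synthetic example: a sheared flow with an isolated updraft (assumed fields)
nz, ny, nx = 8, 16, 16
zi, yi, xi = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
u = 10.0 + 0.5 * zi + 0.1 * yi                  # vertically and laterally sheared zonal wind
v = 0.05 * xi
w = 2.0 * np.exp(-((xi - 8.0) ** 2 + (yi - 8.0) ** 2) / 10.0)
terms = vorticity_budget(u, v, w, 250.0, 250.0, 250.0)
for name, field in terms.items():
    print(f"{name:>22s}: max |value| = {np.abs(field).max():.2e} s^-2")
```

Applied to model output rather than these toy fields, the same bookkeeping yields the term-by-term comparison discussed in the text (advection, divergence, vertical transfer and tilting contributions to the vorticity tendency).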
It is obvious that individual pair of vortices with positive and negative signs is seen at the initial phase of storm evolution from 5 to 10 min of the simulation time. A significant increase in relative vorticity is simulated once storm enters its intense phase of evolution, especially in 30 min, when two cores with higher vorticity gradient are determined. Horizontal cross-sections of the simulated vertical vorticity at 0.25, 1.5, 3 and 4.5 km height in 25 min of the simulation time during the most intense phase of storm evolution on 6 Aug 2016, 1600 UTC The research continues with evaluation of the relative importance of the vorticity terms during simulation time (see Fig. 12). One sees that in the initial phase relative vorticity is gradually increased due to horizontal advection which turns horizontal vorticity into the vertical component through differential vertical motion. How storm is moving from its incipient to mature phase divergence term and the vertical transfer play a significant role in the magnitude of relative vorticity. Around 17:25 (25 min simulation time) a dramatic increase is evidenced when it reaches a peak value of −2.942 s−2. The second maximum is calculated 20 min later but this time divergence, zonal and meridional advection and the vertical transfer term, have a similar contribution rates to high relative vorticity. Horizontal distributions of various terms of relative vorticity at 0.25, 1.5 and 3.0 km height at the most intense phase of storm evolution Time evolution of divergence, meridional, zonal, vertical transfer, solenoidal and tilting (twisting) term which contribute in increase of rate of change of relative vorticity (s−2) at grid point during simulation time Horizontal cross sections of the relative vorticity (s−2) at 3 km height from 5 to 40 min of the simulation time at each 5 min intervals Based on radar observation and the modeling studies of fluid motion it is found that in general the intense vertical vortices in the cloud originate from a vortex with a horizontal axis. This usually occurs in the storm development phase when the horizontal component of the vorticity tilts, intensifies and transforms into a vertical vorticity. The development of midlevel rotation updrafts and mesocyclone can also be demonstrated by using the vertical vorticity component written as $$ \frac{d}{\mathrm{dt}}\left(\zeta +f\right)=\left(f+\zeta \right){w}_z+\left({\xi w}_y+\eta {w}_x\right) $$ Ignoring the Coriolis parameter, and linearize this equation under condition. \( \overset{\rightharpoonup }{v}=\left(\overline{u},0,0\right) \), we obtain the equation of vorticity perturbation $$ {\zeta_t}^{\prime }+\overline{u}{\zeta_x}^{\prime }={\overline{u}}_z{w}_y $$ If one sees the situation that the storm is moving at a speed \( \overline{u} \), then in a system that moves with the cloud, it is obtained from (3) that $$ {\zeta_t}^{\prime}\approx {\overline{u}}_z{w}_y $$ It can be seen that perturbed vorticity ζ′ happens as a result of a twisting term of vortex tube. The updrafts with positive and negative vorticity perturbation ζ′, respectively are generated on l.h.s. and r.h.s of cloud as it is shown in Fig. 7. The direction of rotation of vortex pair serves as triggering factor of the entrainment processes within cloud. When the cloud with a pair of vortices develops further, then non-linear effects became more evident. They would be seen in a linearized equation (4), in which a nonlinear term ζ′wz would also be retained. 
Then the equation for disturbed vertical vorticity component is given, also in the following form: $$ {\zeta_t}^{\prime }+\overline{u}{\zeta_x}^{\prime}\approx {\overline{u}}_z{w}_y+{\zeta}^{\prime }{w}_z $$ Areas with high values of (wz) are located at the top and bottom of the updrafts region. The "stretching" term (ζ′wz) is also important as a twisting term. After storm splitting, a two new generated clouds move along y-axis away from each other. The storm movement could be considered relative to the ground with velocity $$ {\overset{\rightharpoonup }{\boldsymbol{v}}}_c={u}_c\overset{\rightharpoonup }{\boldsymbol{i}}+{v}_c\overset{\rightharpoonup }{\boldsymbol{j}} $$ At the non-divergent level of cloud \( \overline{u}\approx {u}_c \). Then the vorticity perturbation equation is $$ {\zeta_t}^{\prime }-{v}_c{\zeta_y}^{\prime}\approx {\overline{u}}_z{w}_y $$ Hence, in stationary case we get $$ {\zeta}^{\prime}\approx -\frac{{\overline{u}}_z}{{\boldsymbol{v}}_c}w $$ It is evident that the maximum vorticity disturbance ζ′ occurs in the area of maximum vertical velocity (w). The splitting part of the storm with right movement has a positive vorticity \( {\overline{u}}_z>0. \) On this way a rotating updraft could be developed at the middle levels of an intense Cb clouds. Such rotating updraft referred as mesocyclone represents the largest and most important part of storm where tornado genesis is a common case. The mesocyclone basically represents one the most severe atmospheric weather phenomena of all types of storms. Our study follows with examination of the microphysical processes responsible for storm intensity and production of large amount of rain Table 1 list the microphysical production terms for water vapor, cloud water, cloud ice, rainwater, snow and hail averaged over simulation time. The main contribution to increase cloud water content comes from the accretion process of cloud water by graupel, (PGACW) with production rate of about 1.0 × 10−4 (kg kg−1 s−1). All other production terms have lower contribution rates. In respect to cloud ice the key microphysical process based on the model calculations is the accretion of cloud ice by graupel (PGACIP) with 10 times lower rate relative to (PGACW). Regarding the snow production the dominant one is the production rate for accretion of snow by graupel (PGACSP). It is evident that production rate for accretion of rain by cloud ice, snow and hail (PIACR, PSACR and PGACR) appear to be the most important processes regarding the large production of rain in the simulated storm with averaged rates (3.24 × 10−4, 1.10 × 10−4 and 1.57 × 10−4) respectively. In addition to all these microphysical transfer rates the responsible process to formation of rain is graupel melting to form rain at T ≥ T0 (PGMLT).The simulation of radar reflectivity field evolution whose maximum value exceeded 70 dBZ, tells us much more about the storm structure and intensity and detection of region with limited radar echo aloft, which occurs when strong updrafts of storm suspend and suspect the rainfall to be formed and fallout. The location of this radar echo coincides with the center of mesocyclone (rotating updraft) as it is illustrated in the Fig. 7c, d. 
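The reflectivity values quoted here can be related to the model's rain content through the exponential (Marshall-Palmer) size distribution mentioned in Sect. 2.1. The conversion below is a common textbook sketch, not necessarily the exact reflectivity diagnostic used in the cloud model; the fixed intercept parameter, densities and example mixing ratios are assumed values.

```python
import math

def rain_reflectivity_dbz(q_rain, rho_air=1.0, n0=8.0e6, rho_w=1000.0):
    """Equivalent reflectivity factor (dBZ) for rain of mixing ratio q_rain (kg/kg),
    assuming a Marshall-Palmer distribution N(D) = n0 * exp(-lambda*D), n0 in m^-4."""
    rwc = rho_air * q_rain                         # rain water content, kg m^-3
    if rwc <= 0.0:
        return float("-inf")
    lam = (math.pi * rho_w * n0 / rwc) ** 0.25     # slope parameter, m^-1
    z_lin = n0 * math.gamma(7.0) / lam ** 7        # 6th moment of N(D), m^6 m^-3
    return 10.0 * math.log10(z_lin * 1.0e18)       # convert to mm^6 m^-3, then to dBZ

for q in (1.0e-3, 3.0e-3, 5.0e-3):                 # 1, 3 and 5 g/kg of rain (illustrative)
    print(f"q_rain = {q * 1e3:.0f} g/kg  ->  {rain_reflectivity_dbz(q):.1f} dBZ")
```

Under these assumptions a few g/kg of rain already yields roughly 43-55 dBZ, so simulated cores exceeding 70 dBZ, as reported here, typically also reflect a substantial contribution from large graupel and hail.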
Table 1 The mean transfer rates (kg kg−1 s−1) of the microphysical processes averaged over the 2 h simulation period, for water vapor (QRR), cloud water (QLCW), cloud ice (QLCI), rainwater (RA1), snow (SN1) and hail (HA1). The listed processes are PSDEP, PRAUT, PSAUT, PREVP, PSMLT, PGMLT, PRACW, PSACI, PIACR, PGAUT, PGSUB, PSACW, PRACI, PSACR, PGACW, PSFI, PGFR, PSSUB, PSFW, PGACI, PGACR, PGACS, PGACIP, PGACRP and PGACSP; among the tabulated rates are PSDEP 7.76 × 10−7, PREVP −1.64 × 10−5, PRACW 3.77 × 10−13 and PIACR 3.24 × 10−4. Terms with bold letters indicate the most dominant cloud physics processes during the model simulation time.
The cloud model has demonstrated good potential in simulating the supercell convective structure during the most intense stage of evolution, from 16:00 to 17:00 UTC. The horizontal cross-sections of the radar reflectivity fields show a structure similar to the observed reflectivity (Fig. 13) for the same period of time. In particular, the radar echo zone with reflectivity higher than 60 dBZ is well captured by the model. One of the key findings of this research is the radar reflectivity structure and some characteristic signatures which specify the type and character of the storm. As shown in Fig. 14, the high-resolution cloud model simulation captured a hook echo, which clearly indicates the occurrence of a supercell storm with a mesocyclone. It is worth mentioning that some national weather services may consider the presence of a hook echo sufficient to justify issuing a heavy rainfall and tornado warning. For comparison we have also shown a conceptual model of the hook echo signature (see Fig. 14). It is evident that the simulated horizontal cross section of the reflectivity at 3.5 km height coincides well with the typical signature illustrated by the conceptual model. In addition to the similarities in shape and characteristic structure of the radar reflectivity, the vertical cross section of radar reflectivity (see Fig. 15) indicates the presence of a bounded weak echo region (BWER), which again points to the characteristic features and type of a supercell storm. Regarding the precipitation, the numerical simulation indicates that within 60 min of simulation time, corresponding to 17:30–18:30 local time, the total accumulated amount of rainfall reaches 131.5 mm (see Fig. 16), which slightly overestimates the total rainfall of 107 mm registered at AMS "Gazi Baba" (source: HMS) for the period 17:00–22:30. Based on the simulated values it is obvious that the extreme rainfall intensity occurred over a small, limited area of approximately 10–15 km2. The larger domain also affected by the heavy precipitation core covered 50 km2. What is most characteristic is the extreme rainfall intensity at three successive times, in a limited area and over a short duration. The average rainfall intensity (10 mm/5 min) during the 1-hour simulation time is slightly larger than the maximum observed rainfall intensity at the Automatic Meteorological Station AMS "Gazi Baba" at 17:50 local time (source: HMS). Hence, there is a realistic estimate that during the peak of the storm over the most affected area the heavy rainfall intensity was truly extreme. That was also indicated by the results of the simulation. It is undeniable that such extreme rainfall intensity is a rare feature, characteristic of areas with favorable conditions for tropical convection in the presence of extreme heat and moisture in the near-surface layer, as is the case for tropical cyclones and storms.
Table 2 lists the basic parameters of the simulated three-dimensional supercell storm of 6 August 2016, which clearly indicate the severity of this type of weather phenomenon and could serve as supporting criteria in the operational practice of forecasters for the early warning of convective weather risks.
a) Composite radar reflectivity CAPPI (dBZ) at 2 km height on 6 August 2016 from 17:45–18:30 local time, obtained by RHMS R. Serbia; b) Horizontal transects of the radar reflectivity along the SW-NE direction, at the most intense phase of storm evolution
Vertical (x-z) cross-section of radar reflectivity along SW-NE (upper panel), NW-SE (middle panel), and a horizontal cross-section at 2 km height in its most intense stage of evolution
The conceptual model of a supercell storm (Source: The Pennsylvania State University, Department of Meteorology). a) The idealized radar reflectivity of a supercell with the hook echo in concert with the cyclonic circulation associated with the mesocyclone; b) A cross section through a supercell; the updraft appears as a minimum in radar reflectivity called a Bounded Weak Echo Region (BWER); c) Modeled hook echo with the mesocyclone identified at 3.5 km height at 30 min of the simulation time of the supercell storm on 6 August 2016; d) Modeled bounded weak echo region identified on a vertical cross section of radar reflectivity
a) Isohyet chart of the total accumulated rainfall (mm/24 h) in the north-western part of Macedonia observed on 6 August 2016; b) Model-simulated total accumulated rainfall and hailfall (kg/m2) during the simulation time; c) The relative intensity of rainfall (mm/10 min) from 1700 to 1900 local time; d) Cumulative observed vs modeled rainfall (mm) from 1700 to 2200
Table 2 List of the basic storm ingredients based on the cloud model simulation
Cloud base
Cloud overshooting top
Maximum updraft
Maximum downdraft: 15.6 m/s
Maximum horizontal wind gust
Rate of change of the vertical component of relative vorticity at a grid point: −2.942 s−2
Vertical wind shear: 3.8 × 10−3 s−1
Total accumulated rainfall during 1 h simulation time: 131.5 mm (kg/m2)
Maximum radar reflectivity (reflecting the storm intensity): 74.5 dBZ
A hook echo signature
A bounded weak echo region (BWER)
Model-averaged rainfall intensity: 38 mm/5 min (mean rate of three successive peaks)
Estimated area with extreme rainfall
Wider area affected by heavy rainfall
Mean storm movement: 35 km/h
4 Discussion with Conclusions
Overall, the scientific examination confirms that a severe supercell storm with a mesocyclone developed, with two separate zones of heavy rainfall which fell within 20 min. The total accumulated precipitation for the 60 min simulation is about 123 mm, while the average 5-min rainfall intensity is 10 mm/5 min. An accurate quantitative forecast of the relative intensity of convective rainfall associated with such severe storms is a very complex task and still a big challenge for the atmospheric modeling community. One of the main reasons for the inability to provide a reliable advance estimate and tracking of this category of destructive storms is that their initiation in the atmosphere occurs as a result of certain thermodynamic effects and perturbations of temperature and humidity in the planetary boundary layer (PBL) in a particular geographical area.
For more detailed representation of atmospheric processes in the meso and local scales, non-hydrostatic models have been used with very fine spatial and temporal resolution in a single area or using the technique of nesting. These models have an advantage over those with sparse-resolution, exactly in accurate forecast of such intense convective processes, the structure and evolution of storms systems and distribution of precipitation, but in certain situations they overestimate or underestimate the relative intensity of heavy convective rainfall. These atmospheric processes related to convective storms, which are initiated, evolved and inflamed in a very short period of time on a small-scale are still the main concern of the atmospheric science worldwide. There is no doubt that in the most developed countries, services within their daily operational practice, despite the advanced technology, supercomputing centers, modern technical resources and existence of advanced meteorological alarm systems sometimes face gaps in assessment of the category and the strength of convective storms which initiate and develop at unstable atmospheric conditions with particular extreme weather events with large destructive influences. The nonlinear and wave nature of atmospheric processes has a spatial and temporal variability of processes. Also, the configuration of operating systems for numerical forecast exceeds current limit of computer possibilities, especially for countries that cover larger geographic areas. In operating practices, in addition to data from numerical models, the NMHS use other products derived from remote measurements (remote sensing control) such as: modern Doppler radar with dual polarization, lidars, sodars, satellites or automatic weather stations, which provide continuous monitoring of the state of the atmosphere, the initiation and development of clouds, their movements, dynamic, microphysical processes and precipitation. Hence, there is a need to develop a comprehensive integrated complex now-casting system for very short-range weather forecast and early warning of natural disasters, as it is envisaged by the World Meteorological Organisation. Basically, scientific analysis points out to the fact, that on August 6 Skopje area was hit by strong supercell storm with mesocyclone. Based on our detail evaluation, the main triggering factors for initiation of very severe convection were: low and moisture inflow, a positive potential vorticity anomaly, low-level directional wind shear, near surface convergence, high helicity index, CAPE and LI. This is one of the most disastrous type of storms with rising rotating air updrafts, intensive initiation and evolution of supercells echoes, production of intense rainfall core, strong and frequent thunder and lightning and longer life cycle of the storm. The maximum top of the cloud penetrates the troposphere and extends up to 15 km. The maximum vertical speed of 100 km/h and the maximum horizontal speed of about 115 km/h. One of the essential factors according to scientific findings, which were key to the flash flooding, is extreme intensity of rainfall (10 mm/5 min) on average during the simulation of 1 h, fallout in a localized area of 10–15 km2. The catastrophic storm that hit Skopje city was unusual for continental areas and more like a tropical storm. Three-dimensional simulation gives a very realistic picture of the nature and severity of this rare destructive weather phenomenon. 
To summarize, we believe that this scientific overview will at least help to inform about the initiation and evolution of such extreme atmospheric phenomena. We are also confident that this novel scientific method showed good potential for a more realistic simulation of the devastating intensity of this supercell storm with a mesocyclone, which hit the city of Skopje and the surrounding area and caused loss of life and significant material damage.
Open access funding provided by University of Vienna. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
List of symbols and description of the terms for the cloud physics processes simulated in the model (notation and description):
PIACR production rate for accretion of rain by cloud ice.
PRAUT production rate for autoconversion of cloud water to form rain.
PRACW production rate for accretion of cloud water by rain.
PRACS production rate for accretion of snow by rain.
PRACI production rate for accretion of cloud ice by rain.
PREVP production rate for rain evaporation.
PSACW production rate for accretion of cloud water by snow.
PSACR production rate for accretion of rain by snow.
PSAUT production rate for autoconversion of cloud ice to form snow.
PSACI production rate for accretion of cloud ice by snow.
PSFW production rate for Bergeron process transfer of cloud water to form snow.
PSFI production rate for Bergeron process embryos (cloud ice) used to calculate the transfer rate of cloud ice to snow.
PSMLT production rate for snow melting to form rain.
PSDEP production rate for depositional growth of snow.
PSSUB production rate for sublimation of snow.
PGDRY dry growth of graupel; involves PGACS, PGACI, PGACW and PGACR.
PGWET wet growth of graupel; may involve PGACS and PGACI and must include PGACW or PGACR, or both (the amount of PGACW which is not able to freeze is shed to rain).
PGFR probabilistic freezing of rain to form graupel.
PGACW production rate for accretion of cloud water by graupel.
PGMLT production rate for graupel melting to form rain, T ≥ T0.
PGACS production rate for accretion of snow by graupel.
PGAUT production rate for autoconversion of snow to form graupel.
PGACR production rate for accretion of rain by graupel.
PGACI production rate for accretion of cloud ice by graupel.
PGSUB production rate for graupel sublimation.
Arakawa, A., Lamb, V.R.: Computational design of the basic dynamical processes of the UCLA general circulation model. In: Methods of Computational Physics, vol. 17, pp. 173–265. Academic Press, New York (1977)
Baldauf, M., Seifert, A., Forstner, J., Majewski, D., Raschendorfer, M.: Operational convective-scale numerical weather prediction with the COSMO model: description and sensitivities. Mon. Weather Rev. 139, 3887–3905 (2011)
Barth, M.C., Kim, S.-W., Wang, C., Pickering, K.E., Ott, L.E., Stenchikov, G., Leriche, M., Cautenet, S., Pinty, J.-P., Barthe, C., Mari, C., Helsdon, J.H., Farley, R.D., Fridlind, A.M., Ackerman, A.S., Spiridonov, V., Telenta, B.: Cloud-scale model intercomparison of chemical constituent transport in deep convection. Atmos. Chem. Phys. 7, 4709–4731 (2007)
Browning, K.A.: The cellular structure of convective storms. Met. Magazine 91, 341–350 (1962)
Browning, K.A.: Airflow and precipitation trajectories within severe local storms which travel to the right of the mean wind. J. Atmos. Sci. 21, 634–639 (1964)
Ćurić, M., Janc, D.: Differential heating influence on hailstorm vortex pair evolution. Q. J. R. Meteorol. Soc. 138, 72–80 (2012)
Ćurić, M., Janc, D., Vučković, V.: The influence of merging and individual storm splitting on mesoscale convective system formation. Atmos. Res. 93, 21–29 (2009)
Doswell, C.A., III, Brooks, H.E., Maddox, R.A.: Flash flood forecasting: an ingredients-based methodology. Wea. Forecasting 11, 560–581 (1996). https://doi.org/10.1175/1520-0434(1996)011<0560:FFFAIB>2.0.CO;2
Hong, S.-Y., Lim, J.-O.J.: The WRF single-moment 6-class microphysics scheme (WSM6). J. Korean Meteor. Soc. 42, 129–151 (2006)
Hong, S., Noh, Y., Dudhia, J.: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev. 134, 2318–2341 (2006). https://doi.org/10.1175/MWR3199.1
Hsie, E.-Y., Farley, R.D., Orville, R.D.: Numerical simulation of ice-phase convective cloud seeding. J. Appl. Meteorol. 19, 950–977 (1980)
Jacques, D., Chang, W., Baek, S.-J., Fillion, L.: Developing a convective-scale EnKF data assimilation system for the Canadian MEOPAR project. Mon. Wea. Rev. 145, 1473–1494 (2017). https://doi.org/10.1175/MWR-D-16-0135.1
Janjic, Z.I.: A nonhydrostatic model based on a new approach. Meteorog. Atmos. Phys. 82, 271–285 (2003)
Klemp, J.B.: Advances in the WRF model for convection-resolving forecasting. Adv. Geosci. 7, 25–29 (2006)
Klemp, J.B., Wilhelmson, R.B.: The simulation of three-dimensional convective storm dynamics. J. Atmos. Sci. 35, 1070–1096 (1978)
Lin, Y.L., Farley, R.D., Orville, H.D.: Bulk water parameterization in a cloud model. J. Climate Appl. Meteorol. 22, 1065–1092 (1983)
Litta, A.J., Mohanty, U.C.: Simulation of severe thunderstorm event during the field experiment of STORM programme 2006, using WRF-NMM model. Curr. Sci. 95(2), 204–215 (2008)
Litta, A.J., Ididcula, S.M., Mohanty, U.C., Prasad, S.K., 2011: Comparison of thunderstorm simulations from WRF-NMM and WRF-ARW models over east Indian region. The Scientific World Journal, volume 2012, article ID 951870, 20 pages. https://doi.org/10.1100/2012/951870
Lompar, M., Ćurić, M., Romanic, D.: Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols. Atmos. Res. 194, 164–177 (2017)
Orville, H.D., Kopp, F.J.: Numerical simulation of the history of a hailstorm. J. Atmos. Sci. 34, 1596–1618 (1977)
Poterjoy, J., Anderson, J., Sobash, R.: Convective-scale data assimilation for the weather research and forecasting model using the local particle filter. Mon. Weather Rev. 145, 1897–1918 (2017). https://doi.org/10.1175/MWR-D-16-0298.1
Potvin, C.K., Flora, M.L.: Sensitivity of idealized supercell simulation to horizontal grid spacing: implications for warn-on-forecast. Mon. Weather Rev. 143, 2998–3024 (2015). https://doi.org/10.1175/MWR-D-14-00416.1
Spiridonov, V., Ćurić, M.: The relative importance of scavenging, oxidation, and ice-phase processes in the production and wet deposition of sulfate. J. Atmos. Sci. 62, 2118–2135 (2005)
Spiridonov, V., Curic, M.: A storm modeling system as an advanced tool in prediction of well organized slowly moving convective cloud system and early warning of severe weather risk. Asia-Pac. J. Atmos. Sci. 51(1), 1–15 (2015). https://doi.org/10.1007/s13143-000-0000-0
Spiridonov, V., Dimitrovski, Z., Ćurić, M.: A three-dimensional simulation of supercell convective storm. Adv. Meteorol. 2010, 15 (2010)
Sun, J., Chen, M., Wang, Y.: A frequent-updating analysis system based on radar, surface, and mesoscale model data for the Beijing 2008 forecast demonstration project. Weather Forecast. 25, 1715–1735 (2010)
Telenta, B., Aleksic, N.: A three-dimensional simulation of the 17 June 1978 HIPLEX case with observed ice multiplication. 2nd International Cloud Modeling Workshop, Toulouse, 8-12 August 1988. WMO/TD No. 268, 277–285 (1988)
Warner, T.T.: Numerical Weather and Climate Prediction, p. 526. Cambridge University Press, Cambridge (2011)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Meteorology and Geophysics, Faculty of Earth Science, Geography and Astronomy, University of Vienna, Vienna, Austria
2. Institute of Meteorology, University of Belgrade, Belgrade, Serbia
Spiridonov, V. & Ćurić, M. Asia-Pacific J Atmos Sci (2019). https://doi.org/10.1007/s13143-018-0070-7
Revised 20 September 2018; Accepted 02 October 2018; First Online 06 February 2019
Publisher Name: Korean Meteorological Society
CommonCrawl
Mathematica Bohemica
Xuan, Wei-Feng: A note on star Lindelöf, first countable and normal spaces. (English). Mathematica Bohemica, vol. 142 (2017), issue 4, pp. 445-448
MSC: 54D20, 54E35 | MR 3739027 | Zbl 06819595 | DOI: 10.21136/MB.2017.0012-17
Keywords: star Lindelöf space; first countable space; normal space; countable extent
A topological space $X$ is said to be star Lindelöf if for any open cover $\mathcal U$ of $X$ there is a Lindelöf subspace $A \subset X$ such that $\operatorname {St}(A, \mathcal U)=X$. The "extent" $e(X)$ of $X$ is the supremum of the cardinalities of closed discrete subsets of $X$. We prove that under $V=L$ every star Lindelöf, first countable and normal space must have countable extent. We also obtain an example under $\rm MA +\nobreak \neg CH$, which shows that a star Lindelöf, first countable and normal space may not have countable extent.
References:
[1] Bing, R. H.: Metrization of topological spaces. Can. J. Math. 3 (1951), 175-186. DOI 10.4153/CJM-1951-022-3 | MR 0043449 | Zbl 0042.41301
[2] Engelking, R.: General Topology. Sigma Series in Pure Mathematics 6. Heldermann, Berlin (1989). MR 1039321 | Zbl 0684.54001
[3] Fleissner, W. G.: Normal Moore spaces in the constructible universe. Proc. Am. Math. Soc. 46 (1974), 294-298. DOI 10.2307/2039914 | MR 0362240 | Zbl 0314.54028
[4] Ginsburg, J., Woods, R. G.: A cardinal inequality for topological spaces involving closed discrete sets. Proc. Am. Math. Soc. 64 (1977), 357-360. DOI 10.2307/2041457 | MR 0461407 | Zbl 0398.54002
[5] Hodel, R.: Cardinal functions I. Handbook of Set-Theoretic Topology (K. Kunen et al., eds.). North-Holland, Amsterdam (1984), 1-61. MR 0776620 | Zbl 0559.54003
[6] Miller, A. W.: Special subsets of the real line. Handbook of Set-Theoretic Topology (K. Kunen et al., eds.). North-Holland, Amsterdam (1984), 201-233. MR 0776624 | Zbl 0588.54035
[7] Tall, F. D.: Normality versus collectionwise normality. Handbook of Set-Theoretic Topology (K. Kunen et al., eds.). North-Holland, Amsterdam (1984), 685-732. MR 0776634 | Zbl 0552.54011
[8] Douwen, E. K. van, Reed, G. M., Roscoe, A. W., Tree, I. J.: Star covering properties. Topology Appl. 39 (1991), 71-103. DOI 10.1016/0166-8641(91)90077-Y | MR 1103993 | Zbl 0743.54007
[9] Xuan, W. F., Shi, W. X.: Notes on star Lindelöf space. Topology Appl. 204 (2016), 63-69. DOI 10.1016/j.topol.2016.02.009 | MR 3482703 | Zbl 1342.54015
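The abstract uses the star operator $\operatorname{St}(A,\mathcal U)$ without restating it; as a reminder (standard notation, not part of the record itself), for an open cover $\mathcal U$ of $X$ and a subset $A\subset X$,
$$\operatorname{St}(A,\mathcal U)=\bigcup\{\,U\in\mathcal U : U\cap A\neq\emptyset\,\},$$
so the star Lindelöf property asks for a Lindelöf subspace $A$ whose star with respect to every open cover is all of $X$.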
CommonCrawl
\begin{document} \parindent = 0pt \baselineskip = 22pt \parskip = \the\baselineskip \newcommand{\la}{\lambda} \newcommand{\Aa}{\mathbb{A}} \newcommand{\RR}{\mathbb{R}} \newcommand{\FF}{\mathbb{F}} \newcommand{\CC}{\mathbb{C}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\HH}{\mathbb{H}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\cB}{{\cal B}} \newcommand{\cP}{{\cal P}} \newcommand{\cV}{{\cal V}} \newcommand{\cC}{{\cal C}} \newcommand{\cS}{{\cal S}} \newcommand{\cW}{{\cal W}} \newcommand{\cF}{{\cal F}} \newcommand{\Real}{\mathop{\rm Re}} \newcommand{\mathop{\rm l{.}i{.}m{.}}}{\mathop{\rm l{.}i{.}m{.}}} \newtheorem{thm}{Theorem}[section] \newtheorem{thmdef}[thm]{Theorem and Definition} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{defi}[thm]{Definition} \newtheorem{pre-note}[thm]{Note} \newenvironment{note}{\begin{pre-note}\rm}{\end{pre-note}} \newenvironment{proof}{\bf Proof\ \rm}{$\;\bullet$} \begin{titlepage} \rightline{math.NT/0103058} \rightline{(submitted)} \indent \begin{center} {\bf\large A lower bound in an approximation problem involving the zeros of the Riemann zeta function}\\ \vskip 1cm Jean-Fran\c{c}ois Burnol\\ March 2001\\ \end{center} \vskip 1cm {\bf Abstract:} We slightly improve the lower bound of B\'aez-Duarte, Balazard, Landreau and Saias in the Nyman-Beurling formulation of the Riemann Hypothesis as an approximation problem. We construct Hilbert space vectors which could prove useful in the context of the so-called ``Hilbert-P\'olya idea''. {\parskip = 0pt\parindent = 100bp\baselineskip=14bp Author's affiliation:\par Jean-Fran\c{c}ois Burnol\par Universit\'e de Nice \--\ Sophia Antipolis\par Laboratoire J.-A. Dieudonn\'e\par Parc Valrose\par F-06108 Nice Cedex 02\par France\par electronic mail: [email protected]\par} \end{titlepage} \setcounter{page}{2} \tableofcontents \section{Introduction} In \cite{Co1} and the subsequent paper \cite{Co2}, Connes gave a rather intrinsic construction of a Hilbert space intimately associated with the zeros of the Riemann zeta function on the critical line. But the zeros having multiplicities higher than a certain level (which is a parameter in Connes's construction), have (if they at all exist) their contributions limited to that level, and not to the extent given by their natural multiplicities. Thus subsists the problem of a natural definition of a so-called ``Hilbert-P\'olya space'', with orthonormal basis indexed by the zeros $\rho$ of $\zeta$ and integers $k$ varying from $0$ to $m_\rho - 1$ where $m_\rho$ is the multiplicity of $\rho$. We do not solve that problem here but we do propose a rather natural construction of Hilbert space vectors $X^\la_{\rho, k}$, $\zeta(\rho)=0$, $k<m_\rho$, which in the limit when the parameter $\la$ goes to $0$ become perpendicular (when they correspond to distinct zeros. The vectors corresponding to a multiple root $\rho$ are independent but need to be orthogonalized.) As in Connes's constructions these vectors live in a quotient space. Controlling the limit $\la\to0$ to obtain a so-called Hilbert-P\'olya space probably involves considerations from mathematical scattering theory (we have previously studied in \cite{Bu3}, \cite{Bu4} some connections with the problems of $L-$functions.) The context in which our construction takes place is that of the Nyman-Beurling formulation of the Riemann Hypothesis as an approximation problem \cite{Nym}, \cite{Beu}. 
Let $K = L^2(]0,\infty[,dt)$ (over the complex numbers), let $\chi$ be the indicator function of the interval $]0,1]$, and let $\rho$ be the function ``fractional part'' (the letter $\rho$ is also used to refer to a zero of the Riemann zeta function, hopefully no confusion will arise.) Let $0<\lambda<1$ and let $\cB_\lambda$ be the sub-vector space of $K$ consisting of the finite linear combinations of the functions $t\mapsto \rho({\theta\over t})$, for $\lambda\leq\theta\leq1$. \begin{thm}[Nyman \cite{Nym}, Beurling \cite{Beu}] The Riemann Hypothesis holds if and only if $$\chi\in \overline{\bigcup_{0<\lambda<1} \cB_\lambda}$$ \end{thm} Actually we are following \cite{Bal} here in using a slight variant of the original Nyman-Beurling formulation. It is a disappointing fact that this theorem can be proven without leading to any new information whatsoever on the zeros lying on the critical line (basically what is at work is the factorization of functions belonging to the Hardy space of a half-plane \cite{Hof}.) The following is thus rather remarkable: \begin{thm}[B\'aez-Duarte, Balazard, Landreau and Saias \cite{Bal}] Let us write $D(\la)$ for the Hilbert-space distance $\inf_{f\in\cB_\lambda} \| \chi -f \|$. We have $$\liminf_{\la\to0} D(\la)\sqrt{\log({1\over\la})} \geq \sqrt{\sum_\rho {1\over |\rho|^2}}$$ \end{thm} If the Riemann Hypothesis fails, this result is true but trivial as the left-hand side then takes the value $+\infty$. So we will assume that the Riemann Hypothesis holds. The sum on the right-hand side is over all non-trivial zeros $\rho$ of the zeta function, counted \emph{only once} independently of their multiplicities $m_\rho$. We prove the following: \begin{thm} We have: $$\liminf_{\la\to0} D(\la)\sqrt{\log({1\over\la})} \geq \sqrt{\sum_\rho {m_\rho^2\over |\rho|^2}}$$ \end{thm} So the zeros are counted according to the \emph{square} of their multiplicities. To prove this lower bound we will construct remarkable Hilbert space vectors $X^\la_{\rho, k}\;$, $\zeta(\rho)=0$, $k<m_\rho$ and use them to control $D(\la)$. The following ``toy-model'' gives us reasons to expect that the lower bound in fact gives the exact order of decrease of $D(\la)$: \begin{thm}\label{theotoy} Let $Q(z) = \prod_\alpha (1-\overline{\alpha}\cdot z)^{m_\alpha}$ be a polynomial of degree $q\geq1$ with all its roots $\alpha$ on the unit circle (the root $\alpha$ having multiplicity $m_\alpha$). Let $P(z)$ be an arbitrary polynomial. Let $$E(N,P) := \inf_{\deg(A)\leq N} \int_{S^1} |P(z) - Q(z)A(z)|^2{d\theta\over2\pi}$$ We have as $N$ goes to infinity: $$\lim\ N\,E(N,P) = \sum_\alpha {m_\alpha^2}\;|P(\alpha)|^2$$ \end{thm} \section{The prediction error for a singular MA(q)} As motivation for our result we first consider a simpler approximation problem, in the context of the Hardy space of the unit disc rather than the Hardy space of a half-plane. Let $Q(z) = \prod_\alpha (1-\overline{\alpha}\cdot z)^{m_\alpha}$ be a polynomial of degree $q\geq1$ with all its roots $\alpha$ on the unit circle (the root $\alpha$ having multiplicity $m_\alpha$ so that $q = \sum_\alpha m_\alpha$.) Let us define: $$E(N) := \inf_{\deg(A)\leq N} \int_{S^1} |1 - Q(z)A(z)|^2{d\theta\over2\pi}$$ The measure ${d\theta\over2\pi}$ is the rotation invariant probability measure on the circle $S^1$, with $z =\exp(i\theta)$. The minimum is taken over all complex polynomials $A(z)$ with degree at most $N$. We are guaranteed that $\lim_{N\to\infty} E(N) = 0$ as $Q(z)$ is an outer factor (\cite{Hof}).
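A minimal numerical sketch of the quantity $E(N)$ just defined (this illustration and its parameter choices are ours, not part of the paper): since the monomials $z^j$ are orthonormal for ${d\theta\over2\pi}$, computing $E(N)$ amounts to a finite least-squares problem for the coefficient vector of $A$, with the convolution (Toeplitz) matrix of the coefficients of $Q$ as design matrix.

```python
import numpy as np

def E(N, q_coeffs):
    """Squared L^2(S^1) distance from the constant 1 to {Q*A : deg(A) <= N}."""
    q = np.asarray(q_coeffs, dtype=complex)
    rows = len(q) + N                    # Q*A has at most N + deg(Q) + 1 coefficients
    C = np.zeros((rows, N + 1), dtype=complex)
    for j in range(N + 1):               # column j holds the coefficients of Q(z) * z^j
        C[j:j + len(q), j] = q
    target = np.zeros(rows, dtype=complex)
    target[0] = 1.0                      # coefficient vector of the constant polynomial 1
    a = np.linalg.lstsq(C, target, rcond=None)[0]
    r = target - C @ a
    return np.vdot(r, r).real

# Q(z) = (1 - z)^2: a single root alpha = 1 on the unit circle, of multiplicity 2,
# so the toy-model theorem (with P = 1) predicts N * E(N) -> m_alpha^2 = 4.
q = [1.0, -2.0, 1.0]
for N in (50, 200, 800):
    print(N, N * E(N, q))
```

Setting this up as a dense least-squares solve sidesteps the Toeplitz-determinant formulas mentioned below; for $Q(z)=(1-z)^2$ the product $N\,E(N)$ should drift slowly towards $\sum_\alpha m_\alpha^2 = 4$, in line with the toy-model theorem above taken with $P\equiv1$.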
More precisely: \begin{thm} As $N$ goes to infinity we have: $$\lim\ N\,E(N) = \sum_\alpha {m_\alpha^2}$$ \end{thm} \begin{note} In case $Q(z)$ has a root in the open unit disc then $E(N)$ is bounded below by a positive constant. In case $Q(z)$ has all its roots outside the open unit disc, then the result above holds but only the roots on the unit circle contribute. Finally if all its roots are outside the closed unit disc then the decrease is exponential: $E(N) = O(c^N)$, with $c<1$. \end{note} The theorem, although not stated explicitely there, is easily extracted from the work of Grenander and Rosenblatt \cite{Gre}. They state an $O({1\over N})$ result, in a much wider set-up than the one considered here (which is limited to simple-minded $q$-th order moving averages.) Unfortunately the $O({1\over N})$ bound is now believed not to be systematically true under their hypotheses (as is explained in \cite{Nev}; I thank Professor W.~Van~Assche for pointing out this fact to me.) Nevertheless their technique of proof goes through smoothly in the case at hand and yields the exact asymptotic result as stated above. We only sketch briefly the idea, as nothing beyond the tools used in \cite{Gre} is needed. We point out in passing that it is of course possible to express $E(N)$ explicitely in terms of the Toeplitz determinants for the measure $d\mu = |Q(\exp(i\theta))|^2 {d\theta\over2\pi}$. But already for an $MA(2)$ this gives rise to unwieldy computations\dots. Rather: let $\cP_{N}$ be the vector space of polynomials of degrees at most $N+q$, let $\cV_{N}$ be the subspace of polynomials divisible by $Q(z)$, and let $\cW_N$ be its $q$-dimensional orthogonal complement. Then $E(N)$ is the squared norm of the orthogonal projection of the constant function $1$ to $\cW_N$. A spanning set in $\cW_N$ is readily identified: to each root $\alpha$ one associates $Y_{\alpha,0}^N$, $Y_{\alpha,1}^N$, \dots, $Y_{\alpha,m_\alpha -1}^N$ defined as $$Y_{\alpha,0}^N := 1 + \overline{\alpha} z + \dots +\overline{\alpha}^{N+q} z^{N+q}$$ $$Y_{\alpha,1}^N := z + 2\overline{\alpha} z^2 + \dots + (N+q)\overline{\alpha}^{N+q-1} z^{N+q}$$ and similarly for $k=2, \dots, m_\alpha - 1$. We can then express $E(N)$ using a Gram formula in terms of (the inverse) of the positive matrix (of fixed size $q\times q$ but depending on $N$) built with the scalar products of the $Y$'s. It turns out that in the limit when $N$ goes to infinity and after the rescaling $Y_{\alpha,k}^N\mapsto X_{\alpha,k}^N := N^{-k-1/2}Y_{\alpha,k}^N$ the Gram matrix decomposes into Cauchy blocks $(1/(i+j+1))_{0\leq i,j < m_\alpha}$ of size $m_\alpha$, one for each root $\alpha$. It is known from Cauchy that the top-left element of the inverse matrix is $m_\alpha^2$. This is how ${\sum_\alpha {m_\alpha^2}\over N}$ arises, after keeping track of the scalar products $(1, X_{\alpha,k}^N)$. Instead of the constant polynomial $1$ we could have looked at the approximation rate to an arbitrary polynomial $P(z)$. The proof just sketched applies identically and gives the Theorem \ref{theotoy} from the Introduction. \section{Invariant analysis and a construction of B\'aez-Duarte} The Mellin transform $f(t)\mapsto \widehat{f}(s)=\int_{t>0} f(t)t^{s-1}dt$ establishes the Plancherel isometry between $K = L^2(]0,\infty[,dt)$ and $L^2(s={1\over2}+i\tau,{d\tau\over2\pi})$, with inverse $F(s)\mapsto \int_{s=1/2+i\tau} F(s) t^{-s}{d\tau\over2\pi}$. Let $a(s)$ be a measurable function of $s$ (as a rule when using the letter $s$ we implicitely assume $\Real(s) = {1\over2}$. 
We will use letters $w$ and $z$ for general complex numbers.) If $a(s)$ is essentially bounded then $F(s)\mapsto a(s)F(s)$ defines a bounded operator on $K$ which commutes with the unitary group $D_{\theta}:f(t)\mapsto {1\over\sqrt{\theta}}f({t\over{\theta}})$, and all bounded operators commuting with the $D_\theta$ ($0<\theta<\infty$) are obtained in such a manner. More generally all \emph{closed} invariant operators are associated to a measurable multiplier $a(s)$ (finite almost everywhere, but not necessarily essentially bounded). For the details of this technical statement, see \cite{Bu5}. For example the Hardy averaging operator $M:f(t)\mapsto {1\over t}\int_{]0,t]} f(u)du$ corresponds to the spectral multiplier $1\over 1-s$. The operator $1 - M$ corresponds to the spectral multiplier $s\over s-1$ and is thus unitary. Another (see \cite{Bu2}) remarkable invariant operator is the (even) ``Gamma'' operator $\Gamma_+ = \cF_+ I$. Here $I$ is the inversion $f(t)\mapsto {1\over t}f({1\over t})$ and $\cF_+$ is the additive Fourier transform as applied to even functions (the cosine transform). The multiplier associated to $\Gamma_+$ is the (Tate) function $$\gamma_+(s) = \pi^{{1\over2}-s}{\Gamma(s/2)\over\Gamma((1-s)/2)} = {\zeta(1-s)\over\zeta(s)} = 2^{1-s}\pi^{-s}\cos({\pi\,s\over2})\Gamma(s) = (1-s)\int_0^\infty u^{s-1}\,{\sin(2\pi u)\over\pi u}\,du$$ A further invariant operator is the operator $U$ introduced by B\'aez-Duarte \cite{Bae} in connection with the Nyman-Beurling formulation of the Riemann Hypothesis: its spectral multiplier is ${s\over 1-s}{\zeta(1-s)\over\zeta(s)}$, so $U = (M-1)\cF_+ I = \cF_+ I (M-1)$. From the results recalled above on invariant operators, we see that invariant orthogonal projectors correspond to indicator functions of measurable sets on the critical line. So a function $f(t)$ is such that its multiplicative translates $D_{\theta}(f)$ ($0<\theta<\infty$) span $K$ if and only if $F(s)=\widehat{f}(s)$ is almost everywhere non-vanishing (Wiener's $L^2$-Tauberian Theorem.) In that case the phase function $$U_f(s) = {\ \overline{F(s)}\ \over F(s)}$$ is almost everywhere defined and of modulus $1$. It thus corresponds to an invariant unitary operator, also denoted $U_f$. Let us introduce the \emph{anti-unitary} ``time-reversal'' operator $J$ acting on $K$ as $g\mapsto \overline{I(g)}$. The operator $U_f$ commutes with the contractions-dilations, is unitary, and sends $f$ to $J(f)$. We call this the \emph{B\'aez-Duarte construction} as it appears in \cite{Bae} (up to some non-essential differences) in relation with the Nyman-Beurling problem (the phase function arises in other contexts, especially in scattering theory.) To relate this with the operator $U = (M-1)\cF_+ I$, one needs the formula $${\zeta(s)\over s} = - \int_0^\infty \rho({1\over t}) t^{s-1} dt$$ which is fundamental in the Nyman-Beurling context. This formula shows that $U$ is the phase operator associated with $\rho({1\over t})$. Generally speaking, the operators $U_f$ are related to the Hardy spaces $\HH^2 = L^2(]0,1],dt)$ and ${\HH^2}^\perp = L^2([1,\infty[,dt)$ (we will also use the notation $\HH^2$ for the Mellin transform of $L^2(]0,1],dt)$.) Indeed the time-reversal $J$ is an isometry (anti-unitary) between $\HH^2$ and ${\HH^2}^\perp$. Let us assume that the function $f$ belongs to $\HH^2$. The operator $U_f$ has the same effect as $J$ on $f$, but contrarily to $J$ is an \emph{invariant} operator. 
This puts the space $\cB_\la(f)$ (of finite linear combinations of contractions $D_\theta(f)$ for $\la\leq\theta\leq1$) isometrically in a new light as a subspace of $L^2([\la,\infty[,dt)$. The marvelous thing is that in this new incarnation it appears to be sometimes possible to find vectors orthogonal to $\cB_\la(f)$ and thus to get some control on $\cB_\la(f)$ as $\la$ decreases (as in the Grenander-Rosenblatt method.) \section{The vectors $Y^\la_{s,k}$} To get started on this we first replace the $L^2$ function $-{\zeta(s)\over s}$ with an element of $\HH^2$. This is elementary: \begin{prop}[\cite{Bu4}, \cite{Ehm}] The function $Z(s) = {s-1\over s}{\zeta(s)\over s}$ belongs to $\HH^2$. Its inverse Mellin transform $A(t)$ is given by the formula $$A(t) = [{1\over t}]\log(t) + \log([{1\over t}]\,!) + [{1\over t}]$$ One has (\/for $0<t\leq 1$) $A(t)= {1\over2}\log({1\over t}) + O(1)$. \end{prop} The B\'aez-Duarte construction will then associate to $A(t)$ the operator $V$ with spectral multiplier $$V(s) = \left({s\over 1-s}\right)^3\;{\zeta(1-s)\over \zeta(s)}$$ so that $$V = (1 - M)^2 \cdot U$$ This last representation will prove useful as it allows to use the formulae related to $U$ from \cite{Bae} and \cite{Bal}. Let $\cC_\la$ ($0<\la<1$) be the sub-vector space of $\HH^2$ of linear combinations of the contractions $D_\theta(A)$ for $\la\leq\theta\leq1$. The function ${s-1\over s}{1\over s} = {1\over s} - {1\over s^2}$ is the Mellin transform of $\chi_1(t):=(1+\log(t))\chi(t)$. The quantity $D(\lambda)$ considered by B\'aez-Duarte, Balazard, Landreau and Saias is thus the Hilbert space distance between $\chi_1(t)$ and $\cC_\la$. To bound it from below we will exhibit remarkable Hilbert space vectors $X^\lambda_{\rho,k}$ indexed by the zeros of the Riemann zeta function and perpendicular to $\cC_\la$. We then compute the exact asymptotics of the orthogonal projection of $\chi_1$ to the vector spaces spanned by the $X^\lambda_{\rho,k}$, for a finite set of roots, exactly as in the Grenander-Rosenblatt method. To each complex number $w$ and natural integer $k\geq0$ we associate the funtion $\psi_{w,k}(t) = (\log({1\over t}))^k\, t^{-w}\,\chi(t)$ on $]0,\infty[$. For $\Real(w)<1$ it is integrable, for $\Real(w)<{1\over2}$ it is in $K$. Let $Q_\la$ be the orthogonal projector from $K$ onto $L^2([\la,\infty[)$. The main point of this paper is the following: \begin{thmdef} For each $0<\la\leq 1$, each $s$ on the critical line, and each integer $k\geq0$ the $L^2$-limit in $K$ of $V^{-1}Q_\la V(\psi_{w,k})$ exists as $w$ tends to $s$ from the left half-plane: $$Y^\la_{s,k}:= \mathop{\rm l{.}i{.}m{.}}_{w\to s\atop\Real(w)<{1\over2}}\ V^{-1}Q_\la V(\psi_{w,k})$$ For each $\la\leq\theta\leq1$ the scalar products between $D_\theta(A)$ and the vectors $Y^\la_{s,k}$ are: $$\la\leq\theta\leq1\ \Rightarrow\ (D_\theta(A), Y^\la_{s,k}) = \left(-{d\over ds}\right)^k\;\theta^{s - {1\over 2}} Z(s)$$ \end{thmdef} \begin{note} The proof shows the existence of an analytic continuation in $w$ accross the critical line, but we shall not make use of this fact. \end{note} Clearly one has the following statement as an immediate consequence: \begin{cor} Let $0<\la<1$. The vector $Y^\la_{s,k}$ is perpendicular to $\cC_\la$ if and only if $\zeta^{(j)}(s) = 0$ for all $j\leq k$, if and only if $s$ is a zero $\rho$ of the zeta function and $k<m_\rho$. \end{cor} \begin{note} Our scalar products $(f,g)$ are complex linear in the first factor and conjugate-linear in the second factor. 
\end{note} \begin{note} The operator ${d\over ds}$ when applied to a not necessarily analytic function on the critical line is defined to act as ${1\over i}{d\over d\tau}$ (where $s = {1\over2} + i\tau$.) \end{note} \begin{proof} The proof of existence will be given later. Here we check the statement involving the scalar product, assuming existence. The following holds for $\la\leq\theta\leq1$ and $\Real(w)<{1\over2}$: \begin{eqnarray*} (V^{-1}Q_\la V(\psi_{w,k}), D_\theta(A)) &=& (Q_\la V(\psi_{w,k}), VD_\theta(A))\cr &=& (Q_\la V(\psi_{w,k}), D_\theta\cdot V(A))\cr &=& (V(\psi_{w,k}), Q_\la\cdot D_\theta\cdot J(A))\cr &=& (V(\psi_{w,k}), D_\theta\cdot V(A))\cr &=& (\psi_{w,k}\;, D_\theta(A))\cr &=& \left({d\over dw}\right)^k\ (\psi_{w,0}\;, D_\theta(A))\cr &=& \left({d\over dw}\right)^k\ (D_{\theta}^{-1}(t^{-w}\,\chi(t)), A)\cr &=& \left({d\over dw}\right)^k\ (\theta^{1/2 - w}t^{-w}\,\chi(\theta t), A)\cr &=& \left({d\over dw}\right)^k\ \theta^{1/2 - w} \int_{]0,1]} t^{-w} \overline{A(t)}\,dt \cr \end{eqnarray*} Taking the limit when $w\to s$ gives \begin{eqnarray*} (Y^\la_{s,k}, D_\theta(A)) &=& \left({d\over ds}\right)^k\ \theta^{1/2 - s} \int_{]0,1]} t^{-s} \overline{A(t)}\,dt\cr &=& \left({1\over i}{d\over d\tau}\right)^k\ \theta^{-i\tau}\int_{]0,1]}t^{-{1\over2} - i\tau}\;\overline{A(t)}\,dt \end{eqnarray*} Taking the complex conjugate: \begin{eqnarray*} (D_\theta(A), Y^\la_{s,k}) &=& \left({i}{d\over d\tau}\right)^k\ \theta^{i\tau}\int_{]0,1]}t^{-{1\over2} + i\tau}\;A(t)\,dt\cr &=& \left(-{d\over ds}\right)^k\;\theta^{s - {1\over 2}} \int_{]0,1]}t^{s-1}\;A(t)\,dt\cr &=& \left(-{d\over ds}\right)^k\;\theta^{s - {1\over 2}} Z(s) \end{eqnarray*} which completes the proof (assuming existence.) \end{proof} To prove the existence we will use in an essential manner the key {\bf Lemme 6} from \cite{Bal}. We have seen that $V = (1 - M)^2 U$ where $M$ is the Hardy averaging operator and $U$ the B\'aez-Duarte operator. The spectral function $U(s)$ extends to an analytic function $U(w)$ in the strip $0<\Real(w)<1$. We need pointwise expressions for $V(\psi_{w,k})(t)$, $t>0$ (at first only $\Real(w)<{1\over2}$ is allowed here). Thanks to the general study of $U$ given in \cite{Bae}, we know that for $\Real(w)<{1\over2}$ the vector $U(\psi_{w,k})$ in $K$ is given as the following limit in square mean: $$\mathop{\rm l{.}i{.}m{.}}_{\delta\to0}\int_\delta^1 (\log({1\over v}))^k\, v^{-w}\,{d\over dv}{\sin(2\pi t/v)\over \pi t/v}\,dv$$ Following \cite{Bal}, with a slight change of notation, we now study for each complex number $w$ with $\Real(w)<1$ (and each integer $k\geq0$) the \emph{pointwise} limit as a function of $t>0$ for $\delta\to 0$: $$\varphi_{w,k}(t) := \lim_{\delta\to0}\int_\delta^1 (\log({1\over v}))^k\, v^{-w}\,{d\over dv}{\sin(2\pi t/v)\over \pi t/v}\,dv$$ \begin{thm}[\cite{Bal}]\label{theo2} Let $k=0$. For each $t>0$ and $\Real(w)<1$ the pointwise limit defining $\varphi_{w,0}(t)$ exists. It is holomorphic in $w$ for each fixed $t$. When $w$ is restricted to a compact set in $\Real(w)<1$, one has uniformly in $w$ the bound $\varphi_{w,0}(t) = O({1\over t})$ on $[1,\infty[$. Uniformly with respect to $w$ satisfying $0<\Real(w)<1$ one has $ \varphi_{w,0}(t) = U(w)\,t^{-w} + O(1)$ on $0<t\leq 1$. \end{thm} \begin{proof} Everything is either stated explicitely in \cite{Bal}, Lemme 6 and Lemme 4, or follows from their proofs. We will give more details for $k\geq1$ as this is not treated in \cite{Bal}. 
\end{proof} \begin{cor} For each $w$ in the critical strip $0<\Real(w)<1$ the Hardy operator $M: f(t)\to {1\over t}\int_0^t f(v)\,dv$ can be applied arbitrarily many times to $\varphi_{w,0}(t)$. The functions $M^L(\varphi_{w,0})$ ($L\in\mathbb{N}$) are $O({(1+\log(t))^L\over t})$ on $[1,\infty[$, uniformly with respect to $w$ when it is restricted to a compact subset of the open strip, and satisfy on $t\in\,]0,1]$ the estimate $M^L(\varphi_{w,0})(t) = \left({1\over 1 - w}\right)^L U(w)\,t^{-w} + O(1)$, uniformly with respect to $w$. \end{cor} \begin{proof} A simple recurrence. \end{proof} We thus obtain: \begin{cor} The vectors $Y^\la_{s,0}$ exist (for $\Real(s) = {1\over2}$). One has the estimates: $$V(Y^\la_{s,0})(t) = O({(1+\log(t))^2\over t})\qquad (t\in [1,\infty[)$$ $$V(Y^\la_{s,0})(t) = V(s)\;t^{-s} + O(1)\qquad (\lambda<t\leq 1)$$ $$V(Y^\la_{s,0})(t) = 0\qquad (0<t<\lambda)$$ uniformly with respect to $s$ when its imaginary part is bounded. \end{cor} \begin{thm}\label{pretheo} Let $k\geq 1$. For each $t>0$ and $\Real(w)<1$ the pointwise limit defining $\varphi_{w,k}(t)$ exists. It is holomorphic in $w$ for each fixed $t$. When $w$ is restricted to a compact set in $\Real(w)<1$, one has uniformly in $w$ the bound $\varphi_{w,k}(t) = O({1\over t})$ on $[1,\infty[$. Uniformly for $0<\Real(w)<1$ one has $\varphi_{w,k}(t) = \left({d\over dw}\right)^k (U(w)\,t^{-w}) + O(1)$ on $0<t\leq 1$. \end{thm} \begin{proof} The formula defining $\varphi_{w,k}(t)$ is equivalent to (after integration by parts and the change of variable $u=1/v$): $$\varphi_{w,k}(t) = \lim_{\Lambda\to\infty} {1\over\pi\,t}\int_1^\Lambda (k+w\log(u))\left(\log(u)\right)^{k-1}\,u^{w-1}\sin(2\pi t\,u)\,{du\over u}$$ This proves the existence of $\varphi_{w,k}(t)$, its analytic character in $w$, and the uniform $O({1\over t})$ bound on $[1,\infty[$. The formula can be rewritten as: $$\varphi_{w,k}(t) = \left({d\over dw}\right)^k {w\,\over\pi\,t}\int_1^\infty u^{w-1}\sin(2\pi t\,u)\,{du\over u}$$ When $w$ is in the critical strip the integral $\int_0^\infty u^{w-1}\sin(2\pi t\,u)\,{du\over u}$ is absolutely convergent and its value is $t^{1-w}\int_0^\infty u^{w-1}\sin(2\pi u)\,{du\over u} = {1\over 1-w}(2\pi\,t)^{1-w}\cos({\pi w\over2})\Gamma(w)$ from well-known integral formulae, so that: $$\varphi_{w,k}(t) = \left({d\over dw}\right)^k \left({w\over 1-w}2^{1-w}\pi^{-w}\cos({\pi w\over2})\Gamma(w) t^{-w} - {w\,\over\pi\,t}\int_0^1 u^{w-1}\sin(2\pi t\,u)\,{du\over u}\right)$$ The first term is $\left({d\over dw}\right)^k (U(w)\,t^{-w})$ and the second term can be explicitely evaluated using the series expansion of $\sin(2\pi t\,u)$ with the final result $$\varphi_{w,k}(t) = \left({d\over dw}\right)^k (U(w)\,t^{-w}) + 2 (-1)^k \; k!\sum_{j\geq1}(-1)^{j}{(2\pi t)^{2j}\over (2j+1)!}{2j\over (w+2j)^{k+1}}$$ which shows $\varphi_{w,k}(t) = \left({d\over dw}\right)^k (U(w)\,t^{-w}) + O(1)$, on $0<t\leq 1$, uniformly for $0<\Real(w)<1$. \end{proof} As was the case for $k=0$ we then deduce that the Hardy operator can be applied arbitrarily many times to $\varphi_{w,k}$ for $0<\Real(w)<1$. The existence of the $Y^\la_{s,k}$ follows. \begin{thm}\label{theo} Let $k\geq 0$. The vectors $Y^\la_{s,k}$ exist (for $\Real(s) = {1\over2}$). 
One has the estimates: \begin{eqnarray*} V(Y^\la_{s,k})(t) &=& O({(1+\log(t))^2\over t})\qquad (t\in [1,\infty[)\cr V(Y^\la_{s,k})(t) &=& \left({d\over ds}\right)^k (V(s)\;t^{-s}) + O(1)\qquad (\lambda<t\leq 1)\cr V(Y^\la_{s,k})(t) &=& 0\qquad (0<t<\lambda)\cr \end{eqnarray*} the implied constants are independent of $\la$ and are uniform with respect to $s$ when its imaginary part is bounded. \end{thm} \begin{proof} Clearly a corollary to \ref{pretheo}. \end{proof} \section{The vectors $X^\la_{\rho,k}$ and completion of the proof} \begin{defi} Let $0<\la<1$. To each zero $\rho$ of the Riemann zeta function on the critical line, of multiplicity $m_\rho$, and each integer $0\leq k<m_\rho$ we associate the Hilbert space vector $$X^\la_{\rho,k} := \left(\log({1\over\lambda})\right)^{-{1\over2}-k}\cdot Y^\la_{\rho,k}$$ where $Y^\la_{\rho,k} = \mathop{\rm l{.}i{.}m{.}}_{w\to s} V^{-1}Q_\la V(\psi_{w,k})$, $V$ is the unitary operator $(M-1)^3 \cF_+\,I$, $Q_\la$ is orthogonal projection to $L^2([\la,\infty[,dt)$, and $\psi_{w,k}(t) = (\log({1\over t}))^k\, t^{-w}\;\chi(t)$. \end{defi} \begin{note} Of course there is no reason except psychological to allow only zeros of the Riemann zeta function at this stage. \end{note} \begin{thm}\label{theogram} As $\la$ decreases to $0$ one has: \begin{eqnarray*} \lim_{\la\to0}\ (X^\la_{\rho_1,k}, X^\la_{\rho_2,l}) &=& 0\qquad(\rho_1\neq\rho_2)\cr \lim_{\la\to0}\ (X^\la_{\rho,k}, X^\la_{\rho,l}) &=& {1\over k+l+1} \end{eqnarray*} \end{thm} \begin{proof} To establish this we first consider, for $\Real(s_1) = \Real(s_2) = {1\over2}$: $$\int_\la^1 \left(\log({1\over t})\right)^{j_1} t^{-s_1}\;\left(\log({1\over t})\right)^{j_2} {t^{-(1-s_2)}}\,dt$$ If $s_1\neq s_2$ an integration by parts shows that it is $O\left(\log({1\over\lambda})\right)^{j_1+j_2} $. On the other hand when $s_1 = s_2$ its exact value is ${1\over j_1 + j_2 + 1}\left(\log({1\over\lambda})\right)^{j_1+j_2 + 1}$. With this information the theorem follows directly from \ref{theo} as (for example) the leading divergent contribution as $\la\to0$ to $\left(V(Y^\la_{s,k}),\;V(Y^\la_{s,l})\right)$ is $V(s)\overline{V(s)}\int_\la^1 \left({d\over ds}\right)^k\;t^{-s}\ \overline{\left({d\over ds}\right)^l\;t^{-s}}\,dt$ which gives ${1\over k + l + 1}\left(\log({1\over\lambda})\right)^{k+l+ 1}$. The rescaling $Y\mapsto X$ is chosen so that a finite limit for $(X^\la_{\rho,k}, X^\la_{\rho,l})$ is obtained. As the scalar products involving distinct zeros have a smaller divergency, the rescaling let them converge to $0$. \end{proof} \begin{thm}\label{theoscal} Let $\chi_1(t) = (1 + \log(t))\chi(t)$. As $\la$ decreases to $0$ one has: \begin{eqnarray*} \lim_{\la\to0}\ \sqrt{\log({1\over\la})}\;(\chi_1, X^\la_{\rho,k}) &=& 0\qquad(k\geq1)\cr \lim_{\la\to0}\ \sqrt{\log({1\over\la})}\;(\chi_1, X^\la_{\rho,0}) &=& {\rho - 1\over\rho^2}\cr \end{eqnarray*} \end{thm} \begin{proof} We have $(1 - M)\chi_1 = \chi$, and $V = (1-M)^2\,U$ so $V\chi_1 = (1-M)\,U\chi$. From \cite{Bae} we know that $U\chi$ is ${\sin(2\pi t)\over \pi t}$ so $V\chi_1$ is the function ${\sin(2\pi t)\over \pi t} - {1\over t}\int_0^t {\sin(2\pi v)\over \pi v}\,dv$. It is thus $0(t^2)$ as $t\to0$, and from \ref{theo} we then deduce that the scalar products $(\chi_1, Y^\la_{\rho,k})$ admit finite limits as $\la\to0$. This settles the case $k\geq1$. 
For $k=0$, one uses the uniformity with respect to $w$ in \ref{theo2} to get $$\lim_{\la\to0} (\chi_1, Y^\la_{\rho,0}) = \lim_{w\to\rho} (\chi_1, \varphi_{w,0})$$ which gives $\lim_{w\to\rho} \int_0^1 (1+\log(t))\,t^{w-1}\,dt = {1\over \rho} - {1\over\rho^2} = {\rho - 1\over\rho^2}$. \end{proof} We can now conclude the proof of our estimate. \begin{thm}\label{mytheo} We have: $$\liminf_{\la\to0} D(\la)\sqrt{\log({1\over\la})} \geq \sqrt{\sum_\rho {m_\rho^2\over |\rho|^2}}$$ \end{thm} \begin{proof} Let $R$ be a non-empty finite set of zeros. We showed that $D(\la)$ is the Hilbert space distance from $\chi_1$ to $\cC_\lambda$, and that the vectors $X^\la_{\rho,k}$ for $0\leq k<m_\rho$ are perpendicular to $\cC_\lambda$. So $D(\la)$ is bounded below by the norm of the orthogonal projection of $\chi_1$ to the finite-dimensional vector space $H_R$ spanned by the vectors $X^\la_{\rho,k}$, $0\leq k<m_\rho$, $\rho\in R$. This is given by a well-known formula involving the inverse of the Gram matrix of the $X^\la_{\rho,k}$'s as well as the scalar products $(\chi_1, X^\la_{\rho,k})$. From \ref{theogram} the Gram matrix converges to diagonal blocks, one for each zero, given by Cauchy matrices of sizes $m_\rho\times m_\rho$. From Cauchy we know that the top-left element of the inverse matrix is $m_\rho^2$. Combining this with the scalar products evaluated in \ref{theoscal} we get that the squared norm of the orthogonal projection of $\chi_1$ to $H_R$ is asymptotically equivalent as $\la\to0$ to ${\sum_{\rho\in R} {m_\rho^2\over |\rho|^2}\over \log({1\over\la})}$. The proof is complete. \end{proof} We can apply our strategy to a fully singular MA(q) on the unit circle. The relevant B\'aez-Duarte phase operator will then be (up to an unimportant constant of modulus 1) the operator of multiplication by $z^{-q}$ and it is apparent that this leads to a proof equivalent to the one we gave in our previous discussion, inspired by \cite{Gre}. In the case of the Nyman-Beurling approximation problem for the zeta function, we expect in the quotient of $\HH^2$ by $\overline{\cC_\la}$ a ``continuous spectrum'' in addition to the ``discrete spectrum'' provided by the (projection to $\HH^2$ of the) $X^\la_{\rho,k}$'s, $\zeta(\rho)=0$, $k<m_\rho$. It is tempting to speculate that the rescaling will kill this continuous part as $\la\to0$, so that in the end only a so-called ``Hilbert-P\'olya'' space subsists. This would appear to require \ref{mytheo} to give the exact order of decrease of the quantity $D(\la)$ and the numerical explorations reported by B\'aez-Duarte, Balazard, Landreau and Saias in \cite{Bal} seem to support this. \end{document}
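As an aside not contained in the paper, the Cauchy-matrix fact used in both Gram-matrix arguments above — that the top-left entry of the inverse of $\left({1\over i+j+1}\right)_{0\leq i,j<m}$ equals $m^{2}$ — is easy to confirm numerically for small sizes; the helper below is our own illustrative sketch in exact rational arithmetic.

```python
from fractions import Fraction

def inv_top_left(m):
    """Top-left entry of the inverse of the Cauchy (Hilbert) matrix
    (1/(i+j+1)) for 0 <= i, j < m, computed exactly by Gauss-Jordan elimination."""
    A = [[Fraction(1, i + j + 1) for j in range(m)] +
         [Fraction(int(j == i)) for j in range(m)] for i in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(m):
            if r != col and A[r][col] != 0:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return A[0][m]                      # entry (0, 0) of the inverse block

for m in range(1, 7):
    print(m, inv_top_left(m))           # expected: 1, 4, 9, 16, 25, 36
```

Exact arithmetic with fractions.Fraction avoids the notorious floating-point ill-conditioning of these Hilbert-type matrices.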
\begin{document} {\footnotesize } \vskip 1.2 true cm \begin{center} {\bf On the limiting extremal vanishing for configuration spaces} \\ {by}\\ {\sc Muhammad Yameen} \end{center} \pagestyle{myheadings} \markboth{Limiting extremal vanishing for configuration spaces}{Muhammad Yameen} \begin{abstract} We study the limiting behavior of extremal cohomology groups of $k$-point configuration spaces of complex projective spaces of complex dimension $m\geq 4.$ In previous work, we proved that the extremal cohomology groups of degrees $(2m-2)k+i$ eventually vanish for each $i\in\{1,2,3\}.$ In this paper, we investigate the extremal cohomology groups for non-positive integers, and show that these cohomology groups eventually vanish for $i\in\{-1,-2,0\}.$ As an application, we confirm the validity of the more general question of Knudsen, Miller and Tosteson for non-positive integers. We give certain families of unstable cohomology groups which do not eventually vanish. The degrees of these families of cohomology groups depend on the number of points and the dimension of the projective space. We formulate the conjecture that the cohomology groups of higher slopes eventually vanish. \end{abstract} \begin{quotation} \noindent{\bf Key Words}: {Configuration spaces, Extremal vanishing, Limiting extremal vanishing, Extremal stability, Hilbert function, Reduced Chevalley–Eilenberg complex} \noindent{\bf 2010 Mathematics Subject Classification}: Primary 55R80, Secondary 55P62. \end{quotation} \thispagestyle{empty} \section{Introduction} \label{sec:intro} For any connected manifold $\mathscr{M}$ of finite type (Betti numbers are finite), the space $$\mathscr{F}_{k}(\mathscr{M}):=\{(x_{1},\ldots,x_{k})\in \mathscr{M}^{k}\mid x_{i}\neq x_{j}\ \text{for}\ i\neq j\}$$ is called the configuration space of $k$ distinct ordered points in $\mathscr{M}.$ The symmetric group $\mathscr{S}_{k}$ acts on $\mathscr{F}_{k}(\mathscr{M})$ by permuting the coordinates. This action is free and the orbit space $$\mathscr{C}_{k}(\mathscr{M}):=\mathscr{F}_{k}(\mathscr{M})/\mathscr{S}_{k} $$ is the unordered configuration space. Configuration spaces, which are parameter spaces for reduced zero-cycles on manifolds, are cornerstone objects in topology and a source of crucial topological data. For example, when the base manifold is the affine plane, the fundamental groups are the so-called braid groups, which are fundamental in geometric group theory. Braid spaces, or configuration spaces of unordered pairwise distinct points on manifolds, have important applications to a number of areas of mathematics, physics and computer science. It is a fundamental problem in algebraic topology to understand the homological properties of such spaces. The homological stability of the spaces $\mathscr{C}_{k}(\mathscr{M})$ was proved by McDuff \cite{MD}, Segal \cite{S} and Church \cite{C}: for each $i\geq0$ the function $$k\mapsto \text{dim}H_{i}(\mathscr{C}_{k}(\mathscr{M});\mathbb{Q})$$ is eventually constant. This result was extended by Randal-Williams \cite{RW} and Knudsen \cite{Kn}.
More recently, Knudsen, Miller and Tosteson \cite{KMT} studied the extremal stability of the spaces $\mathscr{C}_{k}(\mathscr{M})$: for each $i\geq0$ the function $$k\mapsto \text{dim}H_{\nu_{k}-i}(\mathscr{C}_{k}(\mathscr{M});\mathbb{Q})$$ is eventually a quasi-polynomial, where $\nu_{k}=(d-1)k+1$ and $\dim(\mathscr{M})=d.$ They asked the following question:\\\\ \textbf{Question.} (see Question 4.10 of \cite{KMT}) Suppose that $H_{d-1}(\mathscr{M};\mathbb{Q})=0.$ For $i\in\mathbb{N},$ is the Hilbert function $$k\mapsto \text{dim}H_{k(d-2)+i}(\mathscr{C}_{k}(\mathscr{M});\mathbb{Q})$$ eventually a quasi-polynomial?\\\\ For the definition of quasi-polynomial, see section 2.3 of \cite{KMT}. Since the initial draft of the paper of Knudsen, Miller and Tosteson \cite{KMT} appeared, the author of this paper has answered the above question in the affirmative in the case of complex projective spaces: \begin{theorem}\label{maino}\cite{Y} For $i,m\in\mathbb{N},$ the Hilbert function $$k\mapsto \emph{dim}H_{k(2m-2)+i}(\mathscr{C}_{k}(\emph{CP}^{m});\mathbb{Q})$$ is eventually a quasi-polynomial. \end{theorem} In fact, the cohomology groups $$H^{k(2m-2)+i}(\mathscr{C}_{k}(\text{CP}^{m});\mathbb{Q})$$ eventually vanish for $m>1$ and $i\in\mathbb{N}.$ In particular, we proved that these cohomology groups vanish entirely for $i>\mu_{k}$ and $k\geq1,$ where $\mu_{k}=(2m-2)k+3.$ We call this vanishing \emph{entire extremal vanishing}. Moreover, the cohomology groups $H^{k(2m-2)+i}(\mathscr{C}_{k}(\text{CP}^{m});\mathbb{Q})$ eventually vanish for $i\in\{1,2,3\}.$ For small values of $k,$ these cohomology groups do not necessarily vanish. We call this vanishing \emph{limiting extremal vanishing}. In the published version of the paper, Knudsen, Miller and Tosteson \cite{KMTP} extend the above question from the natural numbers to the integers:\\\\ \textbf{Question.} (see Question 4.11 of \cite{KMTP}) Suppose that $H_{d-1}(\mathscr{M};\mathbb{Q})=0.$ For $i\in\mathbb{Z},$ is the Hilbert function $$k\mapsto \text{dim}H_{k(d-2)+i}(\mathscr{C}_{k}(\mathscr{M});\mathbb{Q})$$ eventually a quasi-polynomial?\\\\ \textbf{Note:} We will consider the higher dimensional projective spaces $\text{CP}^{m\geq4}.$ The explicit computations for the low dimensional projective spaces $\text{CP}^{m<4}$ are already present in the literature (see \cite{F-Ta}, \cite{RW2} and \cite{K-M}). The limiting behavior of the spaces $\mathscr{C}_{k}(\text{CP}^{1})$ and $\mathscr{C}_{k}(\text{CP}^{2})$ is also discussed by Vakil-Wood (see Conjecture H of \cite{VW}). We investigate the limiting behavior of extremal cohomology groups for non-positive integers. The limiting extremal vanishing can be extended to non-positive values of $i:$ \begin{theorem}\label{main1} For each $i\in\{-2,-1,0\}$ and $m\geq4,$ the sequence of cohomology groups $$\{H^{k(2m-2)+i}(\mathscr{C}_{k}(\emph{CP}^{m});\mathbb{Q})\}_{k=1}^{\infty}$$ eventually vanishes. \end{theorem} As an application of Theorem \ref{main1}, we confirm the validity of the question of Knudsen, Miller and Tosteson for non-positive values of $i:$ \begin{corollary}\label{corollarymain} For $m\geq4$ and $i\in\{-2,-1,0\}$ the Hilbert function $$k\mapsto \emph{dim}(H_{k(2m-2)+i}(\mathscr{C}_{k}(\emph{CP}^{m});\mathbb{Q}))$$ is eventually a quasi-polynomial.
\end{corollary} \begin{center} \begin{picture}(250,150) \put(30,20){\vector(0,1){120}} \put(20,138){$k$} \put(225,10){$i$} \put(30,20){\vector(1,0){200}} \put(30,20){\vector(3,4){90}} \put(30,20){\vector(2,1){200}} \put(30,20){\vector(4,1){205}} \put(237,66){$k=\mu_{k}$} \put(55,5){$\text{Figure 1. Extremal vanishings}$} \put(35,110){$\text{Homological}$} \put(43,100){$\text{stability}$} \put(161,78){$\text{Limiting}$} \put(202,78){$\text{vanishing}$} \put(135,31){$\text{Entire}$} \put(165,31){$\text{vanishing}$} \end{picture} \end{center} Apart from extremal cohomology groups, we give a certain families of unstable cohomology groups, which are not eventually vanish. \begin{theorem}\label{main2} For each $m\geq4$ and $a\in\{2,4,\ldots,2\lceil\frac{m}{2}\rceil-2\},$ we have non-vanishing $$\displaystyle{\lim_{k \to \infty}}\emph{dim}(H^{a(k-3)+4m-1}(\mathscr{C}_{k}(\emph{CP}^{m});\mathbb{Q}))\neq0.$$ \end{theorem} It seems that the cohomology groups of higher slopes are eventually vanish. We formulate the following conjecture. \begin{conjecture} For each $m\geq4$ and $a\in\{2\lceil\frac{m}{2}\rceil,2\lceil\frac{m}{2}\rceil+2,\ldots,2m-4\},$ we have vanishing $$\displaystyle{\lim_{k \to \infty}}\emph{dim}(H^{a(k-3)+4m-1}(\mathscr{C}_{k}(\emph{CP}^{m});\mathbb{Q}))=0.$$ \end{conjecture} \subsection{Outline of the paper and general conventions} In section 2, we give a quick tour of Chevalley–Eilenberg complex. In section 3, we explicitly discuss the general properties of reduced Chevalley–Eilenberg complex defined by author. The proof of Theorem \ref{main1} is contain in section 4. In section 5, we give the proof of Theorem \ref{main2}. In the last section, we give the final remark on the optimal range of the limiting extremal vanishing.\\\\ $\bullet$ We work throughout with finite dimensional graded vector spaces. The degree of an element $v$ is written $deg(v)$.\\\\ $\bullet$ The symmetric algebra $Sym(\mathscr{V}^{*})$ is the tensor product of a polynomial algebra and an exterior algebra: $$ Sym(\mathscr{V}^{*})=\bigoplus_{k\geq0}Sym^{k}(\mathscr{V}^{*})=Poly(\mathscr{V}^{even})\bigotimes Ext(\mathscr{V}^{odd}), $$ where $Sym^{k}$ is generated by the monomials of length $k.$\\\\ $\bullet$ Throughout the paper, we will consider the homology and cohomology over $\mathbb{Q}$.\\\\ $\bullet$ The $n$-th suspension of the graded vector space $\mathscr{V}$ is the graded vector space $\mathscr{V}[n]$ with $\mathscr{V}[n]_{i} = \mathscr{V}_{i-n},$ and the element of $\mathscr{V}[n]$ corresponding to $a\in \mathscr{V}$ is denoted $s^{n}a;$ for example $$ H_{*}(S^{2};\mathbb{Q})[n] =\begin{cases} \mathbb{Q}, & \text{if $*\in\{n,n+2 \}$} \\ 0, & \mbox{otherwise}.\\ \end{cases} $$ \\\\ $\bullet$ We write $H_{-*}(\mathscr{M};\mathbb{Q})$ for the graded vector space whose degree $-i$ part is the $i$-th homology group of $M;$ for example $$ H_{-*}(\text{CP}^m;\mathbb{Q}) =\begin{cases} \mathbb{Q}, & \text{if $*\in\{-2m,-2m+2,\ldots,0. \}$} \\ 0, & \mbox{otherwise}.\\ \end{cases} $$ \section{Chevalley–Eilenberg complex} Fulton--Macpherson \cite{F-M} described a model $F(k)$ for the cohomology of $\mathscr{F}_{k}(\mathscr{X})$ of a smooth projective variety $\mathscr{X}$, where $F(k)$ depends on the cohomology ring of $\mathscr{X}$, the canonical orientation class and the Chern classes of $\mathscr{X}$. A simplified version of the Fulton--MacPherson model is obtained by Kriz \cite{K} (see also \cite{BMP}). The kriz's model does not depend on Chern classes. 
The natural action of the symmetric group on the configuration spaces $\mathscr{F}_{k}(\mathscr{X})$ induces an action on the Kriz model. The cohomology of $\mathscr{C}_{k}(\mathscr{X})$ is obtained by the $\mathscr{S}_{k}$--invariant part of Fulton--MacPherson and Kri\v{z} models (see corollary 8c of \cite{F-M} and remark 1.3 of \cite{K}):$$H^{i}(\mathscr{C}_{k}(\mathscr{X});\mathbb{Q})\approx H^{i}(\mathscr{F}_{k}(\mathscr{X});\mathbb{Q})^{\mathscr{S}_{k}}.$$ F\'{e}lix--Thomas \cite{F-Th} (see also \cite{F-Ta}) constructed a Sullivan model for the rational cohomology of configuration spaces of closed oriented even dimensional manifolds. The identification was established in full generality by the Knudsen in \cite{Kn} using the theory of factorization homology \cite{AF}. We will restrict our attention to the case of closed even dimensional manifolds.\\ Let $\mathscr{M}$ be a connected closed oriented manifold of dimension $2m.$ The diagonal comultiplication $\Delta\,:\,H_{*}(\mathscr{M})\rightarrow H_{*}(\mathscr{M})\otimes H_{*}(\mathscr{M})$ is defined on a dual basis $x_{l}^{*}\in H_{*}(\mathscr{M})$ as $$\Delta(x_{l}^{*})=\sum_{i,j}(\text{coefficient of $x_{l}$ in $x_{i}\cup x_{j}$})x_{i}^{*}\otimes x_{j}^{*},\quad \text{where $x_{i},\,x_{j}\in H^{*}(\mathscr{M})$}.$$ We consider the two shifted copies of vector spaces $$\mathscr{V}^{*}=H_{-*}(\mathscr{M};\mathbb{Q})[2m],\quad\mathscr{W}^{*}=H_{-*}(\mathscr{M};\mathbb{Q})[4m-1]$$ $$ \mathscr{V}^{*}=\bigoplus_{i=0}^{2m}\mathscr{V}^{i},\quad\mathscr{W}^{*}=\bigoplus_{j=2m-1}^{4m-1}\mathscr{W}^{j},$$ and a differential $\partial $ (induced by $\Delta$): $$\partial|_{\mathscr{V}^{*}}=0,\quad \partial|_{\mathscr{W}^{*}}:\,\mathscr{W}^{*} \longrightarrow Sym^{2}(\mathscr{V}^{*}).$$ We choose bases in $\mathscr{V}^{i}$ and $\mathscr{W}^{j}$ as $$ \mathscr{V}^{i}=\mathbb{Q}\langle v_{i,1},v_{i,2},\ldots\rangle,\quad \mathscr{W}^{j}=\mathbb{Q}\langle w_{j,1},w_{j,2},\ldots\rangle $$ (the degree of an element is marked by the first lower index). Now we consider the graded algebra: $$ \Omega^{*,*}_{k}(\mathscr{M})=\bigoplus_{i\geq 0}\bigoplus_{\omega=0}^{\left\lfloor\frac{k}{2}\right\rfloor} \Omega^{i,\omega}_{k}(\mathscr{M})=\bigoplus_{\omega=0}^{\left\lfloor\frac{k}{2}\right\rfloor}\,(Sym^{k-2\omega}(\mathscr{V}^{*})\otimes Sym^{\omega}(\mathscr{W}^{*})) $$ where $i$ and $\omega$ are the total degree and weight grading respectively. Now, we get identification $$H^{*}(\mathscr{C}_{k}(\mathscr{M}))\simeq H^{*}(\Omega^{*,*}_{k}(\mathscr{M}),\partial).$$ By definition of differential, we have $$\partial:\Omega^{*,*}_{k}(\mathscr{M})\longrightarrow\Omega^{*+1,*-1}_{k}(\mathscr{M}).$$ The relation between configuration spaces of $\mathbb{R}^{m}$ and Lie algebra homology is also studied by Cohen \cite{Co}. 
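To make the bigraded algebra $\Omega^{*,*}_{k}(\mathscr{M})$ concrete, here is a small enumeration script; it is our own illustration (not from the paper), specialised to the case $\mathscr{M}=\text{CP}^{m}$ treated in the next section, where $\mathscr{V}^{*}$ has the even generators $v_{0},v_{2},\ldots,v_{2m}$ and $\mathscr{W}^{*}$ the odd generators $w_{2m-1},\ldots,w_{4m-1}$. A weight-$\omega$ monomial is a product of $\omega$ distinct $w$'s (exterior part) and $k-2\omega$ $v$'s with repetition (polynomial part), and the script tabulates the dimension of each piece by weight and total degree.

```python
from itertools import combinations, combinations_with_replacement
from collections import Counter

def omega_pieces(m, k):
    """Dimensions of Sym^{k-2w}(V) (x) Sym^w(W) for CP^m, keyed by (weight, degree).

    V has even generators of degrees 0, 2, ..., 2m (polynomial algebra);
    W has odd generators of degrees 2m-1, 2m+1, ..., 4m-1 (exterior algebra)."""
    v_degs = [2 * i for i in range(m + 1)]
    w_degs = [2 * i - 1 for i in range(m, 2 * m + 1)]
    dims = Counter()
    for w in range(k // 2 + 1):
        for ws in combinations(w_degs, w):                       # distinct odd factors
            for vs in combinations_with_replacement(v_degs, k - 2 * w):
                dims[(w, sum(ws) + sum(vs))] += 1
    return dims

# Example: 4 points on CP^2.
for (w, deg), dim in sorted(omega_pieces(2, 4).items()):
    print(f"weight {w}, total degree {deg:2d}: dim {dim}")
```

Summing these dimensions over all weights in a fixed total degree gives the size of the cochain group from which $H^{*}(\mathscr{C}_{k}(\text{CP}^{m}))$ is computed.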
\section{Reduced Chevalley–Eilenberg complex} In this section, we study the general properties of the reduced complex corresponding to $\text{CP}^{m}.$ First, we construct the differential graded algebra $\Omega_{k}^{*,*}(\text{CP}^{m}).$ The cohomology ring of $\text{CP}^{m}$ is: $$H^{*}(\text{CP}^{m};\mathbb{Q})=\dfrac{\mathbb{Q}[\zeta]}{\langle \zeta^{m+1}\rangle},\quad\text{where } \deg(\zeta)=2.$$ The corresponding two graded vector spaces are $$\mathscr{V}^{*}=\langle v_{0}, v_{2},\ldots,v_{2m}\rangle,\quad \mathscr{W}^{*}=\langle w_{2m-1}, w_{2m+1},\ldots,w_{4m-1}\rangle.$$ We can write the differential $\partial$ explicitly: $$\partial(v_{2i})=0\qquad \quad\qquad\qquad 0\leq i\leq m,$$ $$\partial(w_{2i-1})=\sum_{\substack{a+b=i \\ 0\leq a, b\leq m}}v_{2a}v_{2b}\qquad m\leq i\leq 2m.$$ We have an isomorphism: $$H^{*}(\mathscr{C}_{k}(\text{CP}^{m}))\simeq H^{*}(\Omega^{*,*}_{k}(\text{CP}^{m}),\partial).$$ \begin{lemma}\label{lemma1}\cite{Y} For $k\geq 2,$ the sub-complex $\Omega_{k-2}^{*,*}(\emph{CP}^{m}).(v_{2m}^{2}, w_{4m-1})$ of $\Omega_{k}^{*,*}(\emph{CP}^{m})$ is acyclic. \end{lemma} \begin{proof} An element in $\Omega_{k-2}^{*,*}(\text{CP}^{m}).(v_{2m}^{2}, w_{4m-1})$ has a unique expansion $v_{2m}^{2}\beta+\gamma w_{4m-1},$ where $\beta$ and $\gamma$ have no monomial containing $w_{4m-1}.$ The operator $$h(v_{2m}^{2}\beta+\gamma w_{4m-1})=w_{4m-1}\beta$$ gives a homotopy $id\simeq 0.$ \end{proof} We denote the reduced complex $(\Omega_{k}^{*,*}(\text{CP}^{m})/\Omega_{k-2}^{*,*}(\text{CP}^{m}).(v_{2m}^{2}, w_{4m-1}),\partial_{\text{induced}})$ by $$({}^{r}\Omega_{k}^{*,*}(\text{CP}^{m}),\partial).$$ \begin{corollary}\label{corollary} For $k\geq 2,$ we have an isomorphism $H^{*}({}^{r}\Omega_{k}^{*,*}(\emph{CP}^{m}),\partial)\cong H^{*}(\mathscr{C}_{k}(\emph{CP}^{m})).$ \end{corollary} Now, we explicitly discuss the support of the reduced complex. \begin{lemma} For $\omega>\emph{min}\{\lfloor\frac{k}{2}\rfloor,m\},$ we have ${}^{r}\Omega_{k}^{\omega,*}(\emph{CP}^{m})=0.$ \end{lemma} \begin{proof} The odd degree elements are concentrated in $\mathscr{W}^{*}/\langle w_{4m-1}\rangle.$ The weight grading $\omega$ in ${}^{r}\Omega_{k}^{\omega,*}(\text{CP}^{m})$ depends on the length of monomials in $Sym(\mathscr{W}^{*}/\langle w_{4m-1}\rangle).$ The monomial of maximal length in $Sym(\mathscr{W}^{*}/\langle w_{4m-1}\rangle)$ is $w_{2m-1}w_{2m+1}\ldots w_{4m-3}.$ This implies that $\omega\leq m.$ Also, by the definition of the complex ${}^{r}\Omega_{k}^{*,*}(\text{CP}^{m}),$ the weight grading must be less than or equal to $\lfloor\frac{k}{2}\rfloor.$ This completes the proof.
\end{proof} Now, we can write the reduced graded algebra as: $${}^{r}\Omega_{k}^{*,*}(\text{CP}^{m})=\bigoplus_{\omega=0}^{\text{min}\{\lfloor\frac{k}{2}\rfloor,m\}}{}^{r}\Omega_{k}^{\omega,*}(\text{CP}^{m}).$$ Let us consider the set: $$\Phi_{\omega,k,m}=\{deg(x)\in\mathbb{N}\cup\{0\}\,| \,x\in {}^{r}\Omega_{k}^{\omega,*}(\text{CP}^{m})\}.$$ \begin{lemma} For $k\geq1,$ $m\geq1$ and $0<\omega\leq \emph{min}\{\lfloor\frac{k}{2}\rfloor,m\}$ we have $$ \emph{max}\Phi_{\omega,k,m} =\begin{cases} 4\omega m-(\omega^{2}+\omega), & \text{if $k\geq 2$ even, $\omega=\lfloor\frac{k}{2}\rfloor,$ $\text{min}\{\lfloor\frac{k}{2}\rfloor,m\}=\lfloor\frac{k}{2}\rfloor$} \\ (2m-2)k-(\omega^{2}-2\omega-2), & \mbox{otherwise}.\\ \end{cases} $$ \end{lemma} \begin{proof} For $\omega=0,$ the highest degree monomial in reduced complex is $v_{2m-2}^{k-1}v_{2m}.$ The degree of this monomial is $(2m-2)k+2.$ Let $\omega=\lfloor\frac{k}{2}\rfloor\geq1$ and $\text{min}\{\lfloor\frac{k}{2}\rfloor,m\}=\lfloor\frac{k}{2}\rfloor.$ In this case, the highest degree monomial in the reduced complex ${}^{r}\Omega_{k}^{\omega,*}(\text{CP}^{m})$ is $$w=w_{4m-(2\omega+1)}w_{4m-(2\omega-1)}\ldots w_{4m-3}.$$ The degree of this monomial is: \begin{align*} \text{deg}(w)=&4m-(2\omega+1)+4m-(2\omega-1)+\ldots+4m-5+4m-3\\ =&\underbrace{4m+4m+\ldots+4m}_{\omega-\text{times}}-(3+5+\ldots+(2\omega+1))\\ =&4\omega m-(\omega^{2}+2\omega). \end{align*} Let $\omega\geq 1.$ Suppose either $\omega\neq\lfloor\frac{k}{2}\rfloor\geq1$ or $\text{min}\{\lfloor\frac{k}{2}\rfloor,m\}\neq\lfloor\frac{k}{2}\rfloor.$ The highest degree monomial in this case is $$u=v_{2m-2}^{k-2\omega-1}v_{2m}w_{4m-(2m+1)}\ldots w_{4m-3}.$$ The degree of this monomial is: \begin{align*} \text{deg}(u)=&(2m-2)(k-2\omega-1)+2m+\{4m-(2m+1)+\ldots+4m-3\}\\ =&(2m-2)k-4\omega m+2+\underbrace{4m+4m+\ldots+4m}_{\omega-\text{times}}-(3+5+\ldots+(2\omega+1))\\ =&(2m-2)k-4\omega m+2+4\omega m-(\omega^{2}+2\omega)\\ =&(2m-2)k-(\omega^{2}-2\omega-2). \end{align*} \end{proof} \begin{lemma} For $k\geq1,$ $m\geq1$ and $0<\omega\leq \emph{min}\{\lfloor\frac{k}{2}\rfloor,m\}$ we have $$\emph{min}\Phi_{\omega,k,m} = 2\omega m+\omega(\omega-2).$$ \end{lemma} \begin{proof} For $\omega=0,$ the lowest degree monomial in reduced complex is $v_{0}^{k}.$ Let $\omega>0.$ The lowest degree monomial in this case is $$v=v_{0}^{k-2\omega}w_{2m-1}\ldots w_{2m+2\omega-3}.$$ The degree of this monomial is: \begin{align*} \text{deg}(v)=&(2m-1)+\ldots+(2m+2\omega-3)\\ =&\underbrace{2m+2m+\ldots+2m}_{\omega-\text{times}}+\{-1+1+\ldots+(2\omega-3)\}\\ =&2\omega m+\omega^{2}-2\omega. 
\end{align*} \end{proof} \begin{center} \begin{picture}(360,180) \put(10,40){\vector(0,1){130}} \put(10,40){\vector(1,0){310}} \put(7,173){$\omega$} \put(322,35){$i$} \multiput(8,38)(20,0){15}{$\bullet$} \multiput(80,58)(20,0){12}{$\bullet$} \multiput(128,78)(20,0){9}{$\bullet$} \multiput(160,98)(20,0){7}{$\bullet$} \multiput(190,118)(20,0){4}{$\bullet$} \multiput(20,30)(20,0){13}{$\ldots$} \multiput(210,130)(20,0){2}{$\vdots$} \put(3,38){$0$} \put(3,58){$1$} \put(3,78){$2$} \put(3,98){$3$} \put(3,118){$4$} \put(4,130){$\vdots$} \put(290.5,39){\line(0,1){10}}\put(290.5,49){\vector(1,0){21}}\put(312,45){$(2m-2)k+2$} \put(302,61){\vector(1,0){9}}\put(312,58){$(2m-2)k+3$} \put(290.5,80){\vector(1,0){20}}\put(312,77){$(2m-2)k+2$} \put(282,100.5){\vector(1,0){30}}\put(312,97){$(2m-2)k-1$} \put(255,120.5){\vector(1,0){57}}\put(312,117){$(2m-2)k-6$} \put(80,60.5){\vector(-1,0){10}}\put(40,58){$2m-1$} \put(128,80.5){\vector(-1,0){10}}\put(103,78){$4m$} \put(160,100.5){\vector(-1,0){10}}\put(118,98){$6m+3$} \put(190,120.5){\vector(-1,0){10}}\put(148,118){$8m+8$} \put(120,10){$\text{Figure 2. Reduced complex}$} \end{picture} \end{center} \begin{lemma} The differential $\partial$ on the right side of the complex ${}^{r}\Omega_{k}^{*,*}(\emph{CP}^{m})$ is injective except the following cases\\ (i) $\omega=0$\\ (ii) $\omega=1$ and $k\geq3.$ \end{lemma} \begin{proof} For each weight $\omega\geq0,$ the right side of reduced complex is generated by single element. For $\omega=0,$ the differential of monomial $v_{2m-2}^{k-1}v_{2m}$ is zero. Also, for $\omega=3$ and $k\geq 3,$ we have $\partial(v_{2m-2}^{k-3}v_{2m}w_{4m-3})=0.$ Apart from these cases, the differential is non-trivial on the right side of the reduced complex. \end{proof} \section{Proof of Theorem \ref{main1}} In this section, we give the proof of Theorem \ref{main1}. The matrix of differential $$\partial:\Omega^{a,b}_{k}(\text{CP}^{m})\longrightarrow\Omega^{a+1,b-1}_{k}(\text{CP}^{m})$$ is denoted by $M_{a,b}.$ \begin{lemma}\label{lemma1} For each $m\geq4$ and $k>8,$ we have exact sequence (in the square brackets are given the dimensions) $$0\longrightarrow\underset{[2]}{{}^{r}\Omega_{k}^{(2m-2)k-3,3}(\emph{CP}^{m})}\longrightarrow\underset{[6]}{{}^{r}\Omega_{k}^{(2m-2)k-2,2}(\emph{CP}^{m})}\longrightarrow \underset{[6]}{{}^{r}\Omega_{k}^{(2m-2)k-1,1}(\emph{CP}^{m})}\longrightarrow$$ $$\longrightarrow \underset{[2]}{{}^{r}\Omega_{k}^{(2m-2)k,0}(\emph{CP}^{m})}\longrightarrow 0.$$ \end{lemma} \begin{proof} First we define the bases elements: \begin{align*} {}^{r}\Omega_{k}^{(2m-2)k-3,3}(\text{CP}^{m})=&\langle v_{2m-4}v_{2m-2}^{k-8}v_{2m}w_{4m-7}w_{4m-5}w_{4m-3},\,v_{2m-2}^{k-7}v_{2m}w_{4m-9}w_{4m-5}w_{4m-3}\rangle,\\ {}^{r}\Omega_{k}^{(2m-2)k-2,2}(\text{CP}^{m})=&\langle v_{2m-2}^{k-5}v_{2m}w_{4m-7}w_{4m-5},\,v_{2m-4}v_{2m-2}^{k-6}v_{2m}w_{4m-7}w_{4m-3},\\ &v_{2m-2}^{k-4}w_{4m-7}w_{4m-3},\,v_{2m-2}^{k-5}v_{2m}w_{4m-9}w_{4m-3},\\ &v_{2m-6}v_{2m-2}^{k-6}v_{2m}w_{4m-5}w_{4m-3},\,v_{2m-4}^{2}v_{2m-2}^{k-7}v_{2m}w_{4m-5}w_{4m-3} \rangle,\\ {}^{r}\Omega_{k}^{(2m-2)k-1,1}(\text{CP}^{m})=&\langle v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-5},\,v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-3},\, v_{2m-2}^{k-3}v_{2m}w_{4m-7},\\ &v_{2m-2}^{k-2}w_{4m-5},\, v_{2m-4}v_{2m-2}^{k-3}w_{4m-3},\, v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-3}\rangle,\\ {}^{r}\Omega_{k}^{(2m-2)k,0}(\text{CP}^{m})=&\langle v_{2m-4}v_{2m-2}^{k-2}v_{2m},\,v_{2m-2}^{k} \rangle. 
\end{align*} The differential $\partial$ is defined on the bases elements as: \begin{align*} \partial(v_{2m-4}v_{2m-2}^{k-8}v_{2m}w_{4m-7}w_{4m-5}w_{4m-3})=&2v_{2m-4}^{2}v_{2m-2}^{k-7}v_{2m}w_{4m-5}w_{4m-3}-\\ -&v_{2m-4}v_{2m-2}^{k-6}v_{2m}w_{4m-7}w_{4m-3},\\ \partial(v_{2m-2}^{k-7}v_{2m}w_{4m-9}w_{4m-5}w_{4m-3})=&2v_{2m-6}^{2}v_{2m-2}^{k-6}v_{2m}w_{4m-5}w_{4m-3}-\\ -&v_{2m-2}^{k-5}v_{2m}w_{4m-9}w_{4m-3},\\ \partial(v_{2m-2}^{k-5}v_{2m}w_{4m-7}w_{4m-5})=&2v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-5}-v_{2m-2}^{k-3}v_{2m}w_{4m-7},\\ \partial(v_{2m-4}v_{2m-2}^{k-6}v_{2m}w_{4m-7}w_{4m-3})=&2v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-3},\\ \partial(v_{2m-2}^{k-4}w_{4m-7}w_{4m-3})=&2v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-3}+2v_{2m-4}v_{2m-2}^{k-3}w_{4m-3}-\\ -&2v_{2m-2}^{k-3}v_{2m}w_{4m-7},\\ \partial(v_{2m-2}^{k-5}v_{2m}w_{4m-9}w_{4m-3})=&2v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-3}+v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-3},\\ \partial(v_{2m-6}v_{2m-2}^{k-6}v_{2m}w_{4m-5}w_{4m-3})=&v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-3},\\ \partial(v_{2m-4}^{2}v_{2m-2}^{k-7}v_{2m}w_{4m-5}w_{4m-3})=&2v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-3},\\ \partial(v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-5})=&v_{2m-4}v_{2m-2}^{k-2}v_{2m},\\ \partial(v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-3})=&0,\\ \partial(v_{2m-2}^{k-3}v_{2m}w_{4m-7})=&2v_{2m-4}v_{2m-2}^{k-2}v_{2m},\\ \partial(v_{2m-2}^{k-2}w_{4m-5})=&2v_{2m-4}^{2}v_{2m-2}^{k-2}v_{2m}+v_{2m-2}^{k},\\ \partial(v_{2m-4}v_{2m-2}^{k-3}w_{4m-3})=&v_{2m-4}v_{2m-2}^{k-2}v_{2m},\\ \partial(v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-3})=&0,\\ \partial(v_{2m-4}v_{2m-2}^{k-2}v_{2m})=&0,\\ \partial(v_{2m-2}^{k} )=&0. \end{align*} Note that the monomial $v_{2m}^{a}$ is zero in ${}^{r}\Omega_{k}^{*,*}(\text{CP}^{m})$ for $a>1.$ The matrices of the differentials are following: \begin{equation*} M_{(2m-2)k-3,3}= \begin{pmatrix} 0 &-1 &0 &0 &0 &2\\ 0&0 &0&-1 &2&0 \end{pmatrix}, \quad M_{(2m-2)k-2,2}= \begin{pmatrix} \hfil 2&0&\ \llap{-}1&0 &\hfil 0 &\hfil 0\\ \hfil 0&0 &\hfil 0&0&\hfil 0&\hfil 2\\ \hfil 0&2 &\ \llap{-}2 &0 &\hfil 2&\hfil 0 \\ \hfil 0&2 &\hfil 0 &0 &0 &\hfil 1\\ \hfil 0& 1&\hfil 0& 0&\hfil 0&\hfil 0\\ \hfil 0&0 &\hfil 0&0 &\hfil 0&\hfil 1 \end{pmatrix} \end{equation*} \begin{equation*} M_{(2m-2)k-1,1}= \begin{pmatrix} \hfil 1&\hfil 0\\ \hfil 0&\hfil 0\\ \hfil 2&\hfil 0\\ \hfil 2&\hfil 1\\ \hfil 1&\hfil 0\\ \hfil 0&\hfil 0 \end{pmatrix}. \end{equation*} We see that the differential $\partial:{}^{r}\Omega_{k}^{(2m-2)k-3,3}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k-2,2}(\text{CP}^{m})$ is injective. Also, the differential $\partial:{}^{r}\Omega_{k}^{(2m-2)k-1,1}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k,0}(\text{CP}^{m})$ is surjective. The dimensions of the kernal and the image of the map $\partial:{}^{r}\Omega_{k}^{(2m-2)k-2,2}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k-1,1}(\text{CP}^{m})$ are respectively 2 and 4. From these computations, we conclude that the sub-complex: $$0\longrightarrow\underset{[2]}{{}^{r}\Omega_{k}^{(2m-2)k-3,3}(\text{CP}^{m})}\longrightarrow\underset{[6]}{{}^{r}\Omega_{k}^{(2m-2)k-2,2}(\text{CP}^{m})}\longrightarrow \underset{[6]}{{}^{r}\Omega_{k}^{(2m-2)k-1,1}(\text{CP}^{m})}\longrightarrow$$ $$\longrightarrow \underset{[2]}{{}^{r}\Omega_{k}^{(2m-2)k,0}(\text{CP}^{m})}\longrightarrow 0.$$ is exact. 
\end{proof} \begin{lemma} \label{lemma2} For each $m\geq4$ and $k>8,$ we have exact sequence $$0\longrightarrow\underset{[1]}{{}^{r}\Omega_{k}^{(2m-2)k-1,3}(\emph{CP}^{m})}\longrightarrow\underset{[3]}{{}^{r}\Omega_{k}^{(2m-2)k,2}(\emph{CP}^{m})}\longrightarrow \underset{[3]}{{}^{r}\Omega_{k}^{(2m-2)k+1,1}(\emph{CP}^{m})}\longrightarrow$$ $$\longrightarrow \underset{[1]}{{}^{r}\Omega_{k}^{(2m-2)k+2,0}(\emph{CP}^{m})}\longrightarrow 0.$$ \end{lemma} \begin{proof} We define the bases elements: \begin{align*} {}^{r}\Omega_{k}^{(2m-2)k-1,3}(\text{CP}^{m})=&\langle v_{2m-2}^{k-7}v_{2m}w_{4m-7}w_{4m-5}w_{4m-3}\rangle,\\ {}^{r}\Omega_{k}^{k(2m-2),2}(\text{CP}^{m})=&\langle v_{2m-2}^{k-5}v_{2m}w_{4m-7}w_{4m-3}, v_{2m-4}v_{2m-2}^{k-6}v_{2m}w_{4m-5}w_{4m-3},\\ & v_{2m-2}^{k-4}w_{4m-5}w_{4m-3}\rangle,\\ {}^{r}\Omega_{k}^{k(2m-2)+1,1}(\text{CP}^{m})=&\langle v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-3}, v_{2m-2}^{k-3}v_{2m}w_{4m-5}, v_{2m-2}^{k-2}w_{4m-3}\rangle,\\ {}^{r}\Omega_{k}^{(2m-2)k+2,0}(\text{CP}^{m})=&\langle v_{2m-2}^{k-1}v_{2m}\rangle. \end{align*} The differential $\partial$ is defined on the bases elements as: \begin{align*} \partial(v_{2m-2}^{k-7}v_{2m}w_{4m-7}w_{4m-5}w_{4m-3})=&2v_{2m-4}^{2}v_{2m-2}^{k-6}v_{2m}w_{4m-5}w_{4m-3}-\\ -&v_{2m-2}^{k-5}v_{2m}w_{4m-7}w_{4m-3},\\ \partial(v_{2m-2}^{k-5}v_{2m}w_{4m-7}w_{4m-3})=&2v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-3},\\ \partial(v_{2m-4}v_{2m-2}^{k-6}v_{2m}w_{4m-5}w_{4m-3})=&v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-3},\\ \partial(v_{2m-2}^{k-4}w_{4m-5}w_{4m-3})=&2v_{2m-4}v_{2m-2}^{k-3}v_{2m}w_{4m-3}+v_{2m-2}^{k-2}w_{4m-3}-\\ -&2v_{2m-2}^{k-3}v_{2m}w_{4m-5},\\ \partial(v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-3})=&0,\\ \partial(v_{2m-2}^{k-3}v_{2m}w_{4m-5})=&v_{2m-2}^{k-1}v_{2m},\\ \partial(v_{2m-2}^{k-2}w_{4m-3})=&2v_{2m-2}^{k-1}v_{2m},\\ \partial(v_{2m-2}^{k-1}v_{2m})=&0. \end{align*} The matrices of the differentials are following: \begin{equation*} M_{(2m-2)k-1,3}= \begin{pmatrix} \ \llap{-}1&\hfil 2&\hfil 0 \end{pmatrix}, \quad M_{(2m-2)k,2}= \begin{pmatrix} \hfil 2&0&\hfil 0 \\ \hfil 1&0 &0 \\ \hfil 2&\ \llap{-}2&\hfil 1 \end{pmatrix}, \quad M_{(2m-2)k+1,1}= \begin{pmatrix} 0 \\ 1\\ 2 \end{pmatrix}. \end{equation*} We see that the differential $\partial:{}^{r}\Omega_{k}^{(2m-2)k-1,3}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k,2}(\text{CP}^{m})$ is injective. Also, the differential $\partial:{}^{r}\Omega_{k}^{(2m-2)k+1,1}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k+2,0}(\text{CP}^{m})$ is surjective. The dimensions of the kernal and the image of the map $\partial:{}^{r}\Omega_{k}^{(2m-2)k,2}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k+1,1}(\text{CP}^{m})$ are respectively 1 and 2. From these computations, we conclude that the sub-complex: $$0\longrightarrow\underset{[1]}{{}^{r}\Omega_{k}^{(2m-2)k-1,3}(\text{CP}^{m})}\longrightarrow\underset{[3]}{{}^{r}\Omega_{k}^{(2m-2)k,2}(\text{CP}^{m})}\longrightarrow \underset{[3]}{{}^{r}\Omega_{k}^{(2m-2)k+1,1}(\text{CP}^{m})}\longrightarrow$$ $$\longrightarrow \underset{[1]}{{}^{r}\Omega_{k}^{(2m-2)k+2,0}(\text{CP}^{m})}\longrightarrow 0.$$ is exact. 
\end{proof} \begin{lemma}\label{lemma3} For each $m\geq4$ and $k>8,$ the cohomology group of degree $(2m-2)k-2$ is vanish in the subcomplex $$\ldots\longrightarrow\underset{[8]}{{}^{r}\Omega_{k}^{(2m-2)k-3,1}(\emph{CP}^{m})}\longrightarrow \underset{[3]}{{}^{r}\Omega_{k}^{(2m-2)k-2,0}(\emph{CP}^{m})}\longrightarrow 0.$$ \end{lemma} \begin{proof} We define the bases elements: \begin{align*} {}^{r}\Omega_{k}^{(2m-2)k-3,1}(\text{CP}^{m})=&\langle v_{2m-8}v_{2m-2}^{k-4}v_{2m}w_{4m-3},\,v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-5},\,v_{2m-2}^{k-3}v_{2m}w_{4m-9},\\ &v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-7},\,v_{2m-2}^{k-2}w_{4m-7},\,v_{2m-4}v_{2m-2}^{k-3}w_{4m-5},\\ &v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-5},\,v_{2m-6}v_{2m-4}v_{2m-2}^{k-5}v_{2m}w_{4m-3}\rangle,\\ {}^{r}\Omega_{k}^{(2m-2)k-2,0}(\text{CP}^{m})=&\langle v_{2m-6}v_{2m-2}^{k-2}v_{2m},\, v_{2m-4}^{2}v_{2m-2}^{k-3}v_{2m},\,v_{2m-4}v_{2m-2}^{k-1}\rangle. \end{align*} The differential $\partial$ is defined on the bases elements as: \begin{align*} \partial(v_{2m-8}v_{2m-2}^{k-4}v_{2m}w_{4m-3})=&0,\\ \partial(v_{2m-6}v_{2m-2}^{k-4}v_{2m}w_{4m-5})=&v_{2m-6}v_{2m-2}^{k-2}v_{2m},\\ \partial(v_{2m-2}^{k-3}v_{2m}w_{4m-9})=&2v_{2m-6}v_{2m-2}^{k-2}v_{2m}+v_{2m-4}^{2}v_{2m-2}^{k-3}v_{2m},\\ \partial(v_{2m-4}v_{2m-2}^{k-4}v_{2m}w_{4m-7})=&2v_{2m-4}^{2}v_{2m-2}^{k-3}v_{2m},\\ \partial(v_{2m-2}^{k-2}w_{4m-7})=&2v_{2m-6}v_{2m-2}^{k-2}v_{2m}+2v_{2m-4}v_{2m-2}^{k-1},\\ \partial(v_{2m-4}v_{2m-2}^{k-3}w_{4m-5})=&2v_{2m-4}^{2}v_{2m-2}^{k-3}v_{2m}+v_{2m-4}v_{2m-2}^{k-1},\\ \partial(v_{2m-4}^{2}v_{2m-2}^{k-5}v_{2m}w_{4m-5})=&v_{2m-4}^{2}v_{2m-2}^{k-3}v_{2m},\\ \partial(v_{2m-6}v_{2m-4}v_{2m-2}^{k-5}v_{2m}w_{4m-3})=&0,\\ \partial(v_{2m-6}v_{2m-2}^{k-2}v_{2m})=&0,\\ \partial(v_{2m-4}^{2}v_{2m-2}^{k-3}v_{2m})=&0,\\ \partial(v_{2m-4}v_{2m-2}^{k-1})=&0. \end{align*} The matrix of the differential is following: \begin{equation*} M_{(2m-2)k-3,1}= \begin{pmatrix} \hfil 0&0&\hfil 0 \\ 1&0 &0 \\ \hfil 2&1&\hfil 0 \\ 0&2 &0 \\ \hfil 2&0&\hfil 2 \\ 0&2 &1 \\ 0&1 &0 \\ \hfil 0&0 &\hfil 0 \end{pmatrix} \end{equation*} We see that the differential $\partial:{}^{r}\Omega_{k}^{(2m-2)k-3,1}(\text{CP}^{m})\rightarrow {}^{r}\Omega_{k}^{(2m-2)k-2,0}(\text{CP}^{m})$ is surjective. Hence the cohomology group of degree $(2m-2)k-2$ is vanish in sub-complex: $$\ldots\longrightarrow\underset{[8]}{{}^{r}\Omega_{k}^{(2m-2)k-3,1}(\text{CP}^{m})}\longrightarrow \underset{[3]}{{}^{r}\Omega_{k}^{(2m-2)k-2,0}(\text{CP}^{m})}\longrightarrow 0.$$ \end{proof} \textit{Proof of Theorem \ref{main1}.} Let $m\geq 4$ and $k$ is sufficiently large bigger than 8. There is no element of degree higher than $k(2m-2)+3$ in reduced complex $({}^{r}\Omega_{k}^{*,*}(\text{CP}^{m}),\partial).$ Also, the highest degree element of wight $\omega\geq4$ in $({}^{r}\Omega_{k}^{*,*}(\text{CP}^{m}),\partial)$ is $v_{2m-2}^{k-9}v_{2m}w_{4m-9}w_{4m-7}w_{4m-5}w_{4m-3}.$ The degree of monomial $v_{2m-2}^{k-9}v_{2m}w_{4m-9}w_{4m-7}w_{4m-5}w_{4m-3}$ is $(2m-2)k-6.$ Therefore, we just focus on the weights $\omega<4.$ The elements of degrees $k(2m-2),$ $k(2m-2)-1,$ $k(2m-2)-2$ and $k(2m-2)-3$ are concentrated in weights $\omega\leq3.$ The complete proof follows from Lemmas \ref{lemma1}, \ref{lemma2} and \ref{lemma3}.\\\\ \textit{Proof of Corollary \ref{corollarymain}.} It is obvious from Theorem \ref{main1}. 
$ \square$\\ \section{Proof of Theorem \ref{main2}} In this section, we give the proof of Theorem \ref{main2}.\\\\ \textit{Proof of Theorem \ref{main2}.} Let $k=3$ and $m\geq4.$ We take $$\alpha=\sum_{j=1}^{m}(2m-3j)v_{2j}w_{4m-(2j+1)}\in {}^{r}\Omega_{k}^{4m-1,1}(\text{CP}^{m}).$$ By using the linearity property of the differential, we get $$\partial(\alpha)=\sum_{j=1}^{m}(2m-3j)\partial(v_{2j}w_{4m-(2j+1)}).$$ Also, by using the definition and the Leibniz rule of the differential, we have $$\partial(\alpha)=\sum_{j=1}^{m}[(2m-3j)v_{2j}(\sum_{\substack{a+b=m-j \\ 0\leq a, b\leq m}}v_{2a}v_{2b})]=0.$$ This shows that $\alpha$ is a cocycle. For each $a\in\{2,4,\ldots,2m\},$ we take $$\beta_{a,k}=v_{a}^{k-3}\alpha\in {}^{r}\Omega_{k}^{(k-3)a+4m-1,1}(\text{CP}^{m}).$$ Clearly, $\beta_{a,k}$ is also a cocycle for $k\geq3.$ Now, we want to show that $\beta_{a,k}$ is not a coboundary for each $a\in\{2,4,\ldots,2\lceil\frac{m}{2}\rceil-2\}.$ We can write $$\beta_{a,k}=\left(2m-\frac{3a}{2}\right)v_{a}^{k-2}w_{4m-(a+1)}+\sum_{\substack{j=1 \\ j\neq\frac{a}{2}}}^{m}(2m-3j)v_{a}^{k-3}v_{2j}w_{4m-(2j+1)}.$$ Note that the coefficient $2m-\frac{3a}{2}$ is nonzero, since $a\leq m-1.$ For $k\geq5,$ the general basis element of weight 2 is written as $$\gamma=v_{i_{1}}\ldots v_{i_{k-4}}w_{j}w_{l},$$ where $i_{1},\ldots,i_{k-4}\in \{0,2,\ldots,2m\}$ and $j,l\in\{2m-1,\ldots,4m-3\}.$ For each $a\in\{2,4,\ldots,2\lceil\frac{m}{2}\rceil-2\},$ the power of $v_{a}$ in the image of $w_{j}$ and $w_{l}$ is at most one. Therefore, the power of $v_{a}$ in the image of $\gamma$ is at most $k-3.$ This implies that $\beta_{a,k}$ is a permanent cocycle and never a coboundary. Hence, for each $m\geq4,$ $k\geq5$ and $a\in\{2,4,\ldots,2\lceil\frac{m}{2}\rceil-2\},$ we have $$H^{a(k-3)+4m-1}(\mathscr{C}_{k}(\text{CP}^{m});\mathbb{Q})\neq0.$$ \section{Final remark} With a more careful analysis, one can improve the limiting extremal vanishing range. However, the optimal range is unclear, and we ask the following.\\\\ \textbf{Question.} What is the smallest value of $i\in\mathbb{Z}$ such that $$\displaystyle{\lim_{k \to \infty}}\text{dim}(H^{(2m-2)k+i}(\mathscr{C}_{k}(\text{CP}^{m});\mathbb{Q}))\neq0?$$ \noindent\textbf{Acknowledgement}\textit{.} The author gratefully acknowledges the support from the ASSMS, GC University Lahore. This research is partially supported by the Higher Education Commission of Pakistan. \vskip 0.65 true cm \null Abdus Salam School of Mathematical Sciences,\\ \null GC University Lahore, Pakistan. \\ \null E-mail: {[email protected]} \end{document}
\begin{document} \title{Strongly sublinear separators and polynomial expansion} \begin{abstract} A result of Plotkin, Rao, and Smith implies that graphs with polynomial expansion have strongly sublinear separators. We prove a converse of this result showing that hereditary classes of graphs with strongly sublinear separators have polynomial expansion. This confirms a conjecture of the first author. \end{abstract} \section{Introduction} The concept of graph classes with bounded expansion was introduced by Ne\v{s}et\v{r}il and Ossona de Mendez~\cite{grad1} as a way of formalizing the notion of sparse graph classes. Let us give a few definitions. For a graph $G$, a \emph{$k$-minor of $G$} is any graph obtained from $G$ by contracting pairwise vertex-disjoint subgraphs of radius at most $k$ and removing vertices and edges. Thus, a $0$-minor is just a subgraph of $G$. Let us define $\nabla_k(G)$ as $$\max\left\{\frac{|E(G')|}{|V(G')|}:\mbox{$G'$ is a $k$-minor of $G$}\right\}.$$ For a function $f:\mathbf{Z}_0^+\to \mathbf{R}_0^+$, we say that an expansion of a graph $G$ is \emph{bounded by $f$} if $\nabla_k(G)\le f(k)$ for every $k\ge 0$. We say that a class ${\cal G}$ of graphs has \emph{bounded expansion} if there exists a function $f:\mathbf{Z}_0^+\to \mathbf{R}_0^+$ such that $f$ bounds the expansion of every graph in ${\cal G}$. If such a function $f$ is a polynomial, we say that ${\cal G}$ has \emph{polynomial expansion}. The definition is quite general---examples of classes of graphs with boun\-ded expansion include proper minor-closed classes of graphs, classes of graphs with bounded maximum degree, classes of graphs excluding a subdivision of a fixed graph, classes of graphs that can be embedded on a fixed surface with bounded number of crossings per edge and many others, see~\cite{osmenwood}. On the other hand, bounded expansion implies a wide range of interesting structural and algorithmic properties, generalizing many results from proper minor-closed classes of graphs. For a more in-depth introduction to the topic, the reader is referred to the book of Ne\v{s}et\v{r}il and Ossona de Mendez~\cite{nesbook}. One of the useful properties of graph classes with bounded expansion is the existence of small balanced separators. A \emph{separator} of a graph $G$ is a pair $(A,B)$ of subsets of $V(G)$ such that $A\cup B=V(G)$ and no edge joins a vertex of $A\setminus B$ with a vertex of $B\setminus A$. The \emph{order} of the separator is $|A\cap B|$. A separator $(A,B)$ is \emph{balanced} if $|A\setminus B|\le 2|V(G)|/3$ and $|B\setminus A|\le 2|V(G)|/3$. Note that $(V(G),V(G))$ is a balanced separator. For $c\ge 1$ and $0\le\beta<1$, we say that a graph $G$ has \emph{\bal{c}{\beta}-separators} if every subgraph $H$ of $G$ has a balanced separator of order at most $c|V(H)|^\beta$. For a graph class ${\cal C}$, let $s_{\cal C}(n)$ denote the smallest nonnegative integer such that every graph in ${\cal C}$ with at most $n$ vertices has a balanced separator of order at most $s_{\cal C}(n)$. We say that ${\cal C}$ has \emph{strongly sublinear separators} if there exist $c\ge 1$ and $0<\delta\le 1$ such that $s_{\cal C}(n)\le cn^{1-\delta}$ for every $n\ge 0$. Note that if ${\cal C}$ is subgraph-closed, this implies that every graph in ${\cal C}$ has \bal{c}{1-\delta}-separators. Lipton and Tarjan~\cite{lt79} proved that the class ${\cal P}$ of planar graphs satisfies $s_{{\cal P}}(n)=O(\sqrt{n})$, and demonstrated the importance of this fact in the design of algorithms~\cite{lt80}. 
This result was later generalized to graphs embedded on other surfaces~\cite{gilbert} and to all proper minor-closed graph classes~\cite{alon1990separator,kreedsep}. The following result by Plotkin, Rao, and Smith connects expansion and separators. \begin{theorem}\label{thm-plotkin} Given a graph $G$ with $m$ edges and $n$ vertices, and integers $l$ and $h$, there is an $O(mn/l)$-time algorithm that finds either $K_h$ as an $(l\log_2 n)$-minor of $G$, or a balanced separator of order at most $O(n/l + lh^2\log n)$. \end{theorem} Using this result, Ne\v{s}et\v{r}il and Ossona de Mendez~\cite{grad2} observed that graphs with expansion bounded by a subexponential function have separators of sublinear order. The bound on expansion is tight in the sense that 3-regular expanders (which have exponential expansion) do not have sublinear separators. In fact, polynomial expansion implies strongly sublinear separators, which qualitatively generalizes the results of~\cite{lt79,gilbert,kreedsep}. \begin{corollary}\label{cor-subsep} For any $d\ge 0$ and $k\ge 1$, there exists $c\ge 1$ and $\delta=\frac{1}{4d+3}$ such that if the expansion of a graph $G$ is bounded by $f(r)=k(r+1)^d$, then $G$ has \bal{c}{1-\delta}-separators. Furthermore, there exists an algorithm that returns a balanced separator of $G$ of order at most $c|V(G)|^{1-\delta}$ in time $O(|V(G)|^\delta|E(G)|)$. \end{corollary} \begin{proof} For any integer $n\ge 1$, let $l(n)=\lceil n^{\delta}\rceil$ and $h(n)=\lceil n^{1/4-\delta/2}\rceil$. Since $f(l(n)\log_2 n)=O(n^{d\delta}\log^d n)$ and $1/4-\delta/2>d\delta$, there exists $n_0\ge 1$ such that $f(l(n)\log_2 n)<\frac{h(n)-1}{2}$ for every $n\ge n_0$. Consider any subgraph $G'$ of $G$, and let $n=|V(G')|$. Since the expansion of $G$ is bounded by $f(r)$, the expansion of $G'$ is bounded by $f(r)$ as well. We aim to show that $G'$ has a balanced separator of order $O(n^{1-\delta})$. Without loss of generality, we can assume that $n\ge n_0$. We apply Theorem~\ref{thm-plotkin} to $G'$ with $l=l(n)$ and $h=h(n)$. Every $(l\log_2 n)$-minor of $G'$ has edge density at most $f(l\log_2 n)<\frac{h-1}{2}$, and thus $G'$ does not contain $K_h$ as an $(l\log_2 n)$-minor. Consequently, the algorithm of Theorem~\ref{thm-plotkin} produces a balanced separator of order $O(n/l+lh^2\log n)=O(n^{1-\delta}+n^{1/2}\log n)=O(n^{1-\delta})$. \end{proof} Our main result is the converse: in subgraph-closed classes, strongly sublinear separators imply polynomial expansion. \begin{theorem}\label{thm-expansion} For any $c\ge 1$ and $0< \delta\le 1$, there exists a function $f(r)=O\bigl(r^{5/\delta^2}\bigr)$ such that if a graph $G$ has \bal{c}{1-\delta}-separators, then its expansion is bounded by $f$. \end{theorem} Let us remark that a weaker variant of Theorem~\ref{thm-expansion} was conjectured by Dvo\v{r}\'ak~\cite{twd}, who hypothesized that strongly sublinear separators imply subexponential expansion, and proved this weaker claim under the additional assumption that $G$ has bounded maximum degree. Together with Corollary~\ref{cor-subsep}, Theorem~\ref{thm-expansion} shows the equivalence between strongly sublinear separators and polynomial expansion. \begin{corollary} Let ${\cal C}$ be a subgraph-closed class. Then ${\cal C}$ has strongly sublinear separators if and only if ${\cal C}$ has polynomial expansion.
\end{corollary} Note that to guarantee separators of order $O(n^{1-\delta})$, Corollary~\ref{cor-subsep} only requires the expansion to be bounded by $r^{O(1/\delta)}$, while given separators of order $O(n^{1-\delta})$, Theorem~\ref{thm-expansion} guarantees the expansion bounded by $r^{O(1/\delta^2)}$. For a cubic graph $G$, let $G_\delta$ denote the graph obtained from $G$ by subdividing each edge exactly $\lfloor |V(G)|^{\delta/(1-\delta)}\rfloor$ times, and let $${\cal C}_\delta=\{H:\text{$H\subseteq G_\delta$ for a cubic graph $G$}\}.$$ Then balanced separators in ${\cal C}_\delta$ have order $\Omega(n^{1-\delta})$ and the expansion of ${\cal C}_\delta$ is $r^{O(1/\delta)}$. Hence, the relationship between the exponents in Corollary~\ref{cor-subsep} is tight up to constant multiplicative factors. On the other hand, we believe Theorem~\ref{thm-expansion} can be improved. \begin{conjecture}\label{conj-lindep} There exists $k>0$ such that for any $c\ge 1$ and $0< \delta\le 1$, if a graph $G$ has \bal{c}{1-\delta}-separators, then its expansion is bounded by $f(r)=O\bigl(r^{k/\delta}\bigr)$. \end{conjecture} The rest of the paper is devoted to the proof of Theorem~\ref{thm-expansion}. In Section~\ref{sec-expand}, we recall some results relating separators with tree-width and expanders. In Section~\ref{sec-dens}, we give partial results towards bounding the density of minors of graphs with strongly sublinear separators. In Section~\ref{sec-sep}, we show that a bounded-depth minor of a graph with strongly sublinear separators still has strongly sublinear separators (for a somewhat worse bound on their order). Finally, in Section~\ref{sec-poly}, we combine these results to give a proof of Theorem~\ref{thm-expansion}. \section{Separators, tree-width and expanders}\label{sec-expand} For $\alpha>0$, a graph $G$ is an \emph{$\alpha$-expander} if for every $A\subseteq V(G)$ of size at most $|V(G)|/2$, there exist at least $\alpha|A|$ vertices of $V(G)\setminus A$ adjacent to a vertex of $A$. Random graphs are asymptotically almost surely expanders. \begin{lemma}[Kolesnik and Wormald~\cite{kolesnik2014lower}]\label{lemma-exexp} There exists an integer $n_0$ such that for every even $n\ge n_0$, there exists a $3$-regular $\frac{1}{7}$-expander on $n$ vertices. \end{lemma} Let us recall a well-known fact on the relationship between tree-width and separators, see e.g.~\cite{rs2}. \begin{lemma}\label{lemma-wtsep} Any graph $G$ has a balanced separator of order at most $\mbox{tw}(G)+1$. \end{lemma} \begin{corollary}\label{cor-twexp} If $H$ is an $\alpha$-expander for some $\alpha>0$, then $\mbox{tw}(H)\ge \frac{\alpha}{3(1+\alpha)}|V(H)|-1$. \end{corollary} \begin{proof} Let $(A,B)$ be a balanced separator of $H$ of order at most $\mbox{tw}(H)+1$. Let $S=A\cap B$, let $A'=A\setminus B$ and let $B'=B\setminus A$. Without loss of generality, $|A'|\le |B'|$, and thus $|A'|\le |V(H)|/2$. Since $H$ is an $\alpha$-expander, we have $|S|\ge\alpha |A'|$, and thus $|A'|\le \frac{1}{\alpha}|S|$. On the other hand, since the separator $(A,B)$ is balanced, we have $|B'|\le \frac{2}{3}|V(H)|$, and thus $|A'|+|S|\ge \frac{1}{3}|V(H)|$. Therefore, \begin{align*} \left(\frac{1}{\alpha}+1\right)|S|&\ge \frac{1}{3}|V(H)|\\ |S|&\ge \frac{\alpha}{3(1+\alpha)}|V(H)|. \end{align*} The claim follows, since $|S|\le \mbox{tw}(H)+1$. \end{proof} For later use, let us remark that an approximate converse to Lemma~\ref{lemma-wtsep} holds, as was proved by Dvo\v{r}\'ak and Norin~\cite{dnorin}. 
\begin{theorem}\label{thm-septw} If every subgraph of $G$ has a balanced separator of order at most $k$, then $G$ has tree-width at most $105k$. \end{theorem} \begin{corollary}\label{cor-septotw} For $c\ge 1$ and $0\le\beta<1$, if a graph $G$ has \bal{c}{\beta}-separators, then every subgraph $H$ of $G$ has tree-width at most $105c|V(H)|^\beta$. \end{corollary} The aim of this section is to argue that in a dense graph, we can always find a large expander of maximum degree $3$ as a subgraph. To prove this, we first find a bounded-depth clique minor, using the following result of Dvo\v{r}\'ak~\cite{twd}. \begin{theorem}\label{cor-iter2} Suppose that $0<\varepsilon\le 1$ and let $m=\left\lceil\frac{1}{2\varepsilon^2}\right\rceil$. If a graph on $n$ vertices has at least $2\cdot 32^mt^4n^{1+\varepsilon}$ edges, then it contains $K_t$ as a $4^m$-minor. \end{theorem} Next, we take a $3$-regular expander subgraph of this clique which exists by Lemma~\ref{lemma-exexp}, and we observe that it corresponds to a subgraph of the original graph of maximum degree $3$, in which each path of vertices of degree two has bounded length. Such a subgraph is still a decent expander. However, we will only need the corresponding lower bound for the tree-width of such a graph. \begin{lemma}\label{lemma-hdtw} Suppose that $0<\varepsilon\le 1$ and let $m=\left\lceil\frac{1}{2\varepsilon^2}\right\rceil$. Let $n_0$ satisfy Lemma~\ref{lemma-exexp} and let $t\ge \max(n_0,600)$ be an even integer. If a graph on $n$ vertices has at least $2\cdot 32^mt^4n^{1+\varepsilon}$ edges, then it contains a subgraph $H$ of maximum degree $3$ with $|V(H)|\le 4^{m+1}t$ and with $\mbox{tw}(H)\ge \frac{t}{25}$. \end{lemma} \begin{proof} By Theorem~\ref{cor-iter2}, $G$ contains $K_t$ as a $4^m$-minor, and by Lemma~\ref{lemma-exexp}, $G$ contains a $3$-regular $\frac{1}{7}$-expander $H_0$ with $t$ vertices as a $4^m$-minor. Hence, $G$ contains a subgraph $H$ of maximum degree three such that $|V(H)|\le (3\cdot 4^m+1)|V(H_0)|\le 4^{m+1}t$ and $H_0$ is a minor of $H$. By Corollary~\ref{cor-twexp}, $\mbox{tw}(H)\ge\mbox{tw}(H_0)\ge \frac{1}{24}|V(H_0)|-1=\frac{t}{24}-1\ge \frac{t}{25}$. \end{proof} \section{The densities of graphs with strongly sublinear separators and their minors}\label{sec-dens} In this section, we give two bounds on edge densities of graphs. Firstly, we show that graphs with strongly sublinear separators have bounded edge density; in other words, they satisfy the condition from the definition of bounded expansion for $\nabla_0$. \begin{lemma}\label{lemma-dens} For any $c\ge 1$ and $0<\delta\le 1$, let $a_\delta(c)\ge 1$ be the unique real number satisfying $a_\delta(c)^\delta=4c\log^2 (ea_\delta(c))$. If $G$ has \bal{c}{1-\delta}-separators, then $G$ has at most $a_\delta(c)|V(G)|$ edges. \end{lemma} \begin{proof} Let $h(n)=\frac{n}{2\log en}$. By induction on the number of vertices of $G$, we prove a stronger claim: If $G$ has \bal{c}{1-\delta}-separators, then $G$ has at most $a_\delta(c)(|V(G)|-h(|V(G)|))$ edges. Note that $a_\delta(c)(|V(G)|-h(|V(G)|))\ge a_\delta(c)|V(G)|/2$, and thus the claim trivially holds if $|V(G)|\le a_\delta(c)$. Suppose that $|V(G)|> a_\delta(c)$. Let $(A,B)$ be a balanced separator of $G$ of order at most $c|V(G)|^{1-\delta}$ and let $G_1=G[A]$ and $G_2=G[B]$. Let $n=|V(G)|$, $n_0=|V(G_1\cap G_2)|$, $n_1=|V(G_1)\setminus V(G_2)|$ and $n_2=|V(G_2)\setminus V(G_1)|$. Since $n>a_\delta(c)$, we have $n/3>cn^{1-\delta}\ge n_0$. 
For $i\in\{1,2\}$, we have $|V(G_i)|=n_0+n_i<n/3+2n/3<n$, and thus $|E(G_i)|\le a_\delta(c)(n_0+n_i-h(n_0+n_i))$ by the induction hypothesis. It follows that \begin{align*} |E(G)|&\le|E(G_1)|+|E(G_2)|\\ &\le a_\delta(c)(n+n_0-h(n_0+n_1)-h(n_0+n_2))\\ &=a_\delta(c)(n-h(n)+[n_0+h(n)-h(n_0+n_1)-h(n_0+n_2)]). \end{align*} Therefore, we need to prove that $n_0\le h(n_0+n_1)+h(n_0+n_2)-h(n)$. Recall that $n_0\le cn^{1-\delta}$, $n_1,n_2\le 2n/3$, and $n=n_0+n_1+n_2> a_\delta(c)$. Without loss of generality, $n_1\le n_2$. Since $h$ is increasing and concave for $n\ge 3$, and since $n_0+n_1=n-n_2\ge n/3\ge 3$, we have $h(n_0+n_1)+h(n_0+n_2)\ge h(n/3)+h(2n/3+n_0)\ge h(n/3)+h(2n/3)$. We conclude that \begin{align*} h(n_0+n_1)&{}+h(n_0+n_2)-h(n)\\ &\ge h(n/3)+h(2n/3)-h(n)\\ &=\frac{n}{6}\left(\frac{1}{\log(en)-\log(3)}+\frac{2}{\log(en)-\log(3/2)}-\frac{3}{\log(en)}\right)\\ &\ge\frac{n}{6\log^2 en}((\log(en)+\log(3))+2(\log(en)+\log(3/2))-3\log(en))\\ &\ge\frac{n}{4\log^2 en}=\frac{n^\delta}{4\log^2 en}n^{1-\delta}\ge\frac{a_\delta(c)^\delta}{4\log^2 (ea_\delta(c))}n^{1-\delta}\\ &=cn^{1-\delta}\ge n_0, \end{align*} as required. \end{proof} Let us remark that for a fixed $\delta>0$, we have $a_\delta(c)=O\bigl((c\log^3 c)^{1/\delta}\bigr)$. Secondly, we aim to show that the density of bounded-depth minors of graphs with strongly sublinear separators grows only slowly with the number of their vertices---slower than $n^\varepsilon$ for every $\varepsilon>0$. Of course, we will eventually show that it is actually bounded by a constant, but we will need this auxiliary result to do so. \begin{lemma}\label{lemma-subpolyden} For any $c\ge 1$, $0<\delta\le 1$, $0<\varepsilon\le 1$ and $r\ge 1$, let $m=\left\lceil\frac{1}{2\varepsilon^2}\right\rceil$, let $n_0$ satisfy Lemma~\ref{lemma-exexp}, let $t$ be the smallest even integer greater than $\max\Bigl(n_0,(42000c4^mr)^{1/\delta}\Bigr)$ and let $b_{c,\delta,\varepsilon}(r)=2\cdot 32^mt^4$. If $G$ has \bal{c}{1-\delta}-separators, then every $r$-minor $F$ of $G$ has less than $b_{c,\delta,\varepsilon}(r)|V(F)|^{1+\varepsilon}$ edges. \end{lemma} \begin{proof} Suppose that $F$ has at least $b_{c,\delta,\varepsilon}(r)|V(F)|^{1+\varepsilon}$ edges. By Lemma~\ref{lemma-hdtw}, $F$ contains a subgraph $H$ of maximum degree $3$ with $|V(H)|\le 4^{m+1}t$ and with $\mbox{tw}(H)\ge \frac{t}{25}$. Hence, $G$ contains a subgraph $H'$ of maximum degree $3$ with $|V(H')|\le (3r+1)|V(H)|\le 4^{m+2}rt$ such that $H$ is a minor of $H'$, and thus $\mbox{tw}(H')\ge \mbox{tw}(H)\ge \frac{t}{25}$. By Corollary~\ref{cor-septotw}, we have $\mbox{tw}(H')\le 105c|V(H')|^{1-\delta}$. Therefore, \begin{align*} \frac{t}{25}&\le 105c4^{m+2}rt^{1-\delta}\\ t^\delta&\le 42000c4^mr. \end{align*} This contradicts the choice of $t$. \end{proof} Let us remark that for fixed $c$, $\delta$, and $\varepsilon$, we have $b_{c,\delta,\varepsilon}(r)=O\bigl(r^{4/\delta}\bigr)$. \section{Sublinear separators in bounded-depth minors}\label{sec-sep} We now prove that if $G$ has strongly sublinear separators, then any bounded-depth minor of $G$ also has strongly sublinear separators. Together with Lemma~\ref{lemma-dens}, this will give the bound on their density. \begin{lemma}\label{lemma-sepmin} For any $c\ge 1$, $0<\delta\le 1$ and $r\ge 1$, let $\varepsilon=\min\left(1,\frac{\delta}{6(1-\delta)}\right)$ and let $p_{c,\delta}(r)=316c(b_{c,\delta,\varepsilon}(r)r)^{1-\delta}$. 
If $G$ has \bal{c}{1-\delta}-separators, then every $r$-minor of $G$ has \bal{p_{c,\delta}(r)}{1-\frac{5}{6}\delta}-separators. \end{lemma} \begin{proof} Let $H$ be an $r$-minor of $G$. Since every subgraph of $H$ is also an $r$-minor of $G$, it suffices to prove that $H$ has a balanced separator of order at most $p_{c,\delta}(r)|V(H)|^{1-\frac{5}{6}\delta}$. By Lemma~\ref{lemma-subpolyden}, $H$ has at most $b_{c,\delta,\varepsilon}(r)|V(H)|^{1+\varepsilon}$ edges. Hence, there exists a subgraph $H'$ of $G$ with at most $2b_{c,\delta,\varepsilon}(r)r|V(H)|^{1+\varepsilon}+|V(H)|\le 3b_{c,\delta,\varepsilon}(r)r|V(H)|^{1+\varepsilon}$ vertices such that $H$ is a minor of $H'$. By Corollary~\ref{cor-septotw}, we have $$\mbox{tw}(H')\le 315c\left(b_{c,\delta,\varepsilon}(r)r|V(H)|^{1+\varepsilon}\right)^{1-\delta}\le 315c(b_{c,\delta,\varepsilon}(r)r)^{1-\delta}|V(H)|^{1-\frac{5}{6}\delta}.$$ Since $H$ is a minor of $H'$, we have $\mbox{tw}(H)\le \mbox{tw}(H')$. Therefore, by Lemma~\ref{lemma-wtsep}, $H$ has a balanced separator of order at most $$\mbox{tw}(H')+1\le 315c(b_{c,\delta,\varepsilon}(r)r)^{1-\delta}|V(H)|^{1-\frac{5}{6}\delta}+1\le p_{c,\delta}(r)|V(H)|^{1-\frac{5}{6}\delta}$$ as required. \end{proof} Let us remark that for fixed $c$ and $\delta$, we have $p_{c,\delta}(r)=O\bigl(r^{(4/\delta+1)(1-\delta)}\bigr)=O\bigl(r^{4/\delta}\bigr)$. \section{Polynomial expansion}\label{sec-poly} Finally, we can prove our main result. \begin{proof}[Proof of Theorem~\ref{thm-expansion}] For any $r\ge 1$, every $r$-minor of $G$ has \bal{p_{c,\delta}(r)}{1-\frac{5}{6}\delta}-separators by Lemma~\ref{lemma-sepmin}, where $p_{c,\delta}(r)=O(r^{4/\delta})$. By Lemma~\ref{lemma-dens}, every $r$-minor of $G$ has edge density at most $a_{\frac{5}{6}\delta}\bigl(p_{c,\delta}(r)\bigr)=O\bigl(p_{c,\delta}(r)^{\frac{5}{4\delta}}\bigr)=O\bigl(r^{5/\delta^2}\bigr)$. Therefore, $\nabla_r(G)\le O\bigl(r^{5/\delta^2}\bigr)$. \end{proof} \end{document}
Identification of contributing genes of Huntington's disease by machine learning Jack Cheng ORCID: orcid.org/0000-0002-9305-07811,2 na1, Hsin-Ping Liu ORCID: orcid.org/0000-0002-2569-30723 na1, Wei-Yong Lin ORCID: orcid.org/0000-0002-8443-61801,2,4 & Fuu-Jen Tsai2,5,6,7 BMC Medical Genomics volume 13, Article number: 176 (2020) Huntington's disease (HD) is an inherited disorder caused by polyglutamine (poly-Q) mutations of the HTT gene, which result in neurodegeneration characterized by chorea, loss of coordination, and cognitive decline. However, HD pathogenesis is still elusive. Despite the availability of a wide range of biological data, a comprehensive understanding of HD's mechanism from machine learning is so far unrealized, mainly due to the lack of the needed data density. To harness the knowledge of the HD pathogenesis from the expression profiles of postmortem prefrontal cortex samples of 157 HD and 157 controls, we used gene profiling ranking as the criterion to reduce the dimension to the order of magnitude of the sample size, followed by machine learning using the decision tree, rule induction, random forest, and generalized linear model. These four machine learning models identified 66 potential HD-contributing genes, with cross-validated accuracies of 90.79 ± 4.57%, 89.49 ± 5.20%, 90.45 ± 4.24%, and 97.46 ± 3.26%, respectively. The identified genes enriched the gene ontology terms of transcriptional regulation, inflammatory response, neuron projection, and the cytoskeleton. Moreover, three genes in the cognitive, sensory, and perceptual systems were also identified. The mutant HTT may interfere with both the expression and transport of these identified genes to promote the HD pathogenesis. Huntington's disease (HD) is an inherited disorder that results in neurodegeneration characterized by chorea, loss of coordination, cognitive decline, depression, and psychosis [1]. The prevalence of HD is 13.7/100,000 in North America [2] and 16.8/100,000 for the elderly in Western Europe [3]. The neurodegeneration of HD is characterized by a general shrinkage of the brain, especially of the medium spiny neurons (MSNs) of the striatum [4]. The loss of cortical mass is an early hallmark in the pathology of HD [5]. HD is caused by the polyglutamine (poly-Q) mutations in the N-terminus of the HTT gene, which encodes huntingtin, a 350 kDa protein with ubiquitous expression [6]. The poly-Q extension is due to the abnormal CAG trinucleotide repeats in the mutant HTT (mHTT). The highest HTT expression level is observed in the neurons of the central nervous system with cytoplasmic-dominant localization and is associated with vesicle membranes [7]. Although HTT is known to be necessary for embryonic development and acts as a transcriptional regulator and protein scaffold in the synapse [8], the HD pathogenesis is still elusive [9]. To better understand the HD pathogenesis, we adopted machine learning (ML) on a gene profiling dataset of prefrontal cortex brain tissues of HD patients and controls and identified 66 disease-predicting genes. Their interaction network and potential roles in the HD pathogenesis are also discussed. ML refers to computer algorithms that make predictions by relying on patterns in the data without using explicit instructions [10]. The application of ML to HD has so far focused on the diagnosis of HD from neuroimaging [11, 12].
Despite the emergence of a wide range of biological data on HD, including genomic profiling and electronic health records, a comprehensive understanding of the mechanism of HD from ML is so far unrealized, mainly due to the lack of the needed data density [13]. For example, a previous ML study on RNA profiling of HD reported 4433 candidate genes from 16 samples [14], which is a typical high-dimension, low-sample-size (HDLSS) situation, in which ML may suffer from overfitting and poor convergence. In this study, to harness the knowledge of the HD mechanism from the existing data, we tackled the data density issue by rationally reducing the dimension size, and identified the enriched pathways of HD by ML. A gene profiling database with a sufficient sample size of HD and control samples is critical to this study. From the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), with the search criteria "("Huntington's disease" AND "brain") AND "Homo sapiens" [porgn:__txid9606]", there were 342 series at the access date of June 18th, 2020. Out of them, there were four series with sample size > 100, including GSE72778, GSE33000, GSE25925, and GSE26927. We chose GSE33000 in this study since it provided the largest sample size of brain tissue profiling. The gene expression profiles of the prefrontal cortex brain tissues of 157 HD patients and 157 non-demented control samples were retrieved from the GSE33000 dataset [15], which was profiled by microarray. This dataset contains 39,279 detected probes, of which 13,798 were annotated, and a total of 10,000 genes were profiled. In solving equations, the number of parameters (in this case, age, sex, and gene profiles) should not exceed the number of equations (the sample size). Therefore, a preliminary screen of genes was essential. Since there were 10,000 genes profiled in GSE33000, the top 2.5% would yield a number of genes close to the total sample size of 314. A criterion of fold change > 1.2 or < 0.85 resulted in 271 genes, which were selected, along with HTT, as the input to build the prediction models. This fold-change criterion was chosen so that (1) the number of the selected genes was less than the number of total samples, and (2) the numbers of up-regulated and down-regulated genes were approximately equal (139 up and 132 down). Genes with a non-significant fold change, i.e., a t-test p value > 0.05, were excluded. After transposition (samples in rows and attributes in columns) and conversion of the disease status to binomials (1 = HD, 0 = control), the input dataset was constructed (Additional file 1: Table S1). Software and role assignment RapidMiner Studio version 9.5 (WIN64 platform) was registered to Jack Cheng and was executed under the Windows 10 operating system with Intel® Core™ i3-3220 CPU and 8 GB RAM. In addition to the age and sex of the samples, out of the 10,000 profiled genes, those with an expression fold change of HD to control > 1.2 or < 0.85 were assigned as the regular attributes (potential contributing factors to be analyzed in the modeling operator) in the modeling. The disease status (1 = HD; 0 = CTRL) was assigned as the Label attribute (the predicted class in the modeling operator). The sample ID was assigned as the ID attribute (not used in modeling). Four models (decision tree, rule induction, random forest, and generalized linear model) of RapidMiner were each used with cross-validation to identify potential contributing genes of HD. The study design and overall workflow are shown in Fig. 1.
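A minimal Python sketch of this gene-selection and role-assignment step is given below. It is an illustration rather than the authors' RapidMiner workflow; the file names, column names, and the use of a simple ratio of group means as the fold change are assumptions.

```python
# Sketch only: filter genes by HD/control fold change (> 1.2 or < 0.85) and
# t-test p < 0.05, then build the sample-by-attribute table described above.
import pandas as pd
from scipy import stats

expr = pd.read_csv("GSE33000_expression.csv", index_col=0)   # assumed layout: genes x samples
meta = pd.read_csv("GSE33000_phenotype.csv", index_col=0)    # assumed columns: disease (1=HD, 0=CTRL), age, sex

hd_samples = meta.index[meta["disease"] == 1]
ctrl_samples = meta.index[meta["disease"] == 0]

selected = []
for gene, row in expr.iterrows():
    hd_vals, ctrl_vals = row[hd_samples], row[ctrl_samples]
    fc = hd_vals.mean() / ctrl_vals.mean()          # fold change of HD to control (assumes linear-scale values)
    p = stats.ttest_ind(hd_vals, ctrl_vals).pvalue  # two-sample t-test
    if (fc > 1.2 or fc < 0.85) and p < 0.05:
        selected.append(gene)
selected = sorted(set(selected) | {"HTT"})          # keep HTT, as in the text

# Transpose so samples are rows and attributes (age, sex, gene profiles) are
# columns; the disease status column is the label to be predicted.
dataset = expr.loc[selected].T.join(meta[["age", "sex", "disease"]])
dataset.to_csv("ml_input.csv")
```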
The study design and workflow. FC denotes the fold change of gene profiling. The curly brackets indicate the number of genes that passed the criteria or were identified in the data science models. The Venn diagram shows the number of genes in the enriched pathways A decision tree is a tree-like collection of nodes, representing a splitting rule for attributes to create a decision on the prediction class. The following parameters were used in RapidMiner modeling. Criterion: gain ratio; Maximal depth: 4; Prepruning and Pruning applied; Confidence: 0.01; Minimal gain: 0.01; Minimal leaf size: 2; Minimal size for a split: 4; Number of pre-pruning alternatives: 3. The program workflow is illustrated in Fig. 2a. The decision tree model. a The program workflow. b The receiver operating characteristic (ROC) curve showing the performance of the prediction power of the model. c The modeled decision tree. A decision tree plots the "If… then" splitting of samples for prediction. The nodes denote the attributes, while the arrows denote the split, which meets a certain criterion. The number in the result box denotes the prediction result of the model (1 = HD; 0 = control), and the bar denotes the actual sample disease characteristic, bar thickness for sample size and bar segment for the proportion of HD samples (red = HD, blue = control). d The sample distribution in the 3-dimensional eigenspace of gene profiling. Red = HD, blue = control Rule induction The Rule Induction model develops a set of hypotheses that account for the most positive examples and the fewest negative examples. The following parameters were used in RapidMiner modeling. Criterion: information gain; Sample ratio: 0.9; Pureness: 0.9; Minimal prune benefit: 0.25. A random forest is an ensemble of random decision trees. The following parameters were used in RapidMiner modeling. The number of trees: 30; Criterion: gain ratio; Maximal depth: 4; Apply pruning with Confidence: 0.01; Apply pre-pruning with Minimal gain: 0.01; Minimal size for a split: 31 (~ 1/10 sample size); Minimal leaf size: 8; Number of pre-pruning: 3; Voting strategy: confidence vote. Generalized linear model RapidMiner executes the GLM algorithm using H2O 3.8.2.6., which fits generalized linear models to the data by maximizing the log-likelihood and determines predictors with non-zero coefficients. These parameters were used in the modeling. Family: binomial; Solver: IRLSM; Use regularization; Do lambda search with the number of lambdas = 31 (~ 1/10 sample size) and early stopping of tolerance 0.01 after three rounds; Standardize and add intercept. Cross-validation of models In RapidMiner, the cross-validation has two subprocesses: a training subprocess and a testing subprocess. The training subprocess produces a trained model to be applied to the testing subprocess for the performance evaluation. In this study, the samples were randomly divided into ten subsets, with an equal number of samples. Each of the ten subsets was iteratively used in the testing subprocess to evaluate the trained model from the other nine subsets.
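As an illustration of the cross-validation scheme just described, the following minimal sketch uses scikit-learn as a stand-in for RapidMiner. The decision-tree settings only approximate the parameters listed above (scikit-learn has no "gain ratio" criterion, so entropy is used), and the input file is the assumed table from the earlier preprocessing sketch.

```python
# Sketch only: 10-fold cross-validation of a shallow decision tree, reporting
# mean +/- standard deviation of accuracy, precision, and recall.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("ml_input.csv", index_col=0)   # assumes sex is already numerically encoded
X = data.drop(columns=["disease"])
y = data["disease"]

tree = DecisionTreeClassifier(criterion="entropy", max_depth=4,
                              min_samples_leaf=2, min_samples_split=4,
                              random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(tree, X, y, cv=cv,
                        scoring=["accuracy", "precision", "recall"])

for metric in ["accuracy", "precision", "recall"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.4f} +/- {vals.std():.4f}")
```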
The performance of a model can be evaluated by its accuracy, precision, and recall, which are defined as follows: $$\begin{aligned} & {\text{Accuracy}}\, = \,\left( {{\text{TP}}\, + \,{\text{TN}}} \right)/({\text{TP}}\, + \,{\text{FP}}\, + \,{\text{FN}}\, + \,{\text{TN}}) \\ & {\text{Precision}}\, = \,{\text{TP}}/({\text{TP}}\, + \,{\text{FP}}) \\ & {\text{Recall}}\, = \,{\text{TP}}/({\text{TP}}\, + \,{\text{FN}}) \\ \end{aligned}$$ where T = true, F = false, P = positive, and N = negative. A receiver operating characteristic (ROC) curve represents the sensitivity, or true positive rate, versus the false positive rate. It is calculated by first ordering the classified examples by confidence. Then all the examples are taken into account with decreasing confidence. The x-axis represents the false positive rate, and the y-axis represents the true positive rate. For optimistic (red) possibilities to calculate ROC curves, the correctly classified examples are taken into account before looking at the false classifications, and the area in red denotes the confidence interval. For pessimistic (blue) possibilities to calculate ROC curves, the wrong classifications are taken into account before looking at correct classifications, and the area in blue denotes the confidence interval. Gene enrichment analysis and interaction network For gene enrichment analysis, the identified gene symbols were used as the input to KOBAS 3.0 [16] (http://kobas.cbi.pku.edu.cn/kobas3/), utilizing the gene-list enrichment tool with default statistical criteria and specifying Homo sapiens as the species. For the gene interaction network, the identified gene symbols were used as the input to STRING: functional protein association networks [17] (https://string-db.org/). Decision tree identified EPHX1, ALDH1A1, and GLI1 A decision tree is a machine-learning algorithm that splits on attributes (genes in this study) to create a decision on the prediction class (whether the sample is HD or not). A cross-validation strategy was used to train the model and to evaluate its performance (Fig. 2a). The machine-learned model is shown in Fig. 2c, which contains five genes, epoxide hydrolase 1 (EPHX1), aldehyde dehydrogenase 1 (ALDH1A1), zinc finger protein GLI1 (GLI1), heat shock protein beta-1 (HSPB1), and echinoderm microtubule-associated protein-like 2 (EML2). These five genes served as part of the input for the enrichment and network analysis. The performance of this model is shown as a receiver operating characteristic (ROC) curve in Fig. 2b, with an accuracy of 90.79 ± 4.57%, a precision of 87.26 ± 6.95%, and a recall of 96.17 ± 3.30%. The separation of samples in the eigenspace of EPHX1, ALDH1A1, and GLI1 is shown in Fig. 2d. EPHX1 catalyzes the hydrolysis of epoxides and may play a role in the metabolism of epoxide-containing fatty acids [18]. ALDH1A1 may detoxify aldehydes in the brain [19]. GLI1 acts as a transcriptional activator, which regulates genes of neuroprotection [20]. HSPB1 is a molecular chaperone that maintains denatured proteins in a folding-competent state and exerts a cytoprotective effect by proteostasis [21]. EML2 is a tubulin-binding protein which inhibits microtubule nucleation and growth; microtubules are required for the autophagy of aggregated huntingtin [22]. These identified genes participate in the detoxification of ROS-producing chemicals, proteostasis, and the transcriptional regulation of neuroprotective genes. Altogether, the dysregulation of these genes may advance the pathological progress of HD.
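For reference, the accuracy, precision, and recall definitions quoted earlier in this section can be reproduced from the confusion-matrix counts with the short sketch below. This is a generic illustration, not code from the original study; y_true and y_pred are assumed to be 0/1 arrays from any one cross-validation fold.

```python
# Sketch only: compute accuracy, precision, and recall from TP, TN, FP, FN.
import numpy as np

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Example usage with toy labels:
# accuracy, precision, recall = evaluate([1, 0, 1, 1], [1, 0, 0, 1])
```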
Rule induction identified EPHX1, OTP, and ITPKB A rule induction model is a machine-learning algorithm that, judging from the gene expression profiles in this study, develops rules that account for the most positive examples (HD) and the fewest negative examples (control). A cross-validation strategy was used to train the rule induction model and to evaluate its performance. The machine-learned model is shown in Fig. 3a, which contains four genes, EPHX1, homeobox protein orthopedia (OTP), inositol-trisphosphate 3-kinase B (ITPKB), and secretory carrier-associated membrane protein 1 (SCAMP1). These four genes also served as part of the input for the enrichment and network analysis. The performance of the rule induction model is shown as a ROC curve in Fig. 3b, with an accuracy of 89.49 ± 5.20%, a precision of 93.74 ± 6.81%, and a recall of 85.25 ± 11.10%. The separation of samples in the eigenspace of EPHX1, OTP, and ITPKB is shown in Fig. 3c. OTP is a homeobox protein with RNA polymerase II-specific DNA-binding transcription factor activity and may be involved in the differentiation of hypothalamic neuroendocrine cells [23]. ITPKB is an inositol-trisphosphate 3-kinase and may regulate neurite outgrowth by mediating the MAPK cascade and RAS signal transduction [24]. SCAMP1 is a component of the recycling carrier that transports between endosomes, the Golgi complex, and the plasma membrane [25]. The rule induction model. a The receiver operating characteristic (ROC) curve showing the performance of the prediction power of the model. b The modeled rule induction. The number after "if … then" denotes the prediction result of the model (1 = HD; 0 = control), and the numbers in the parentheses (X/Y) denote the actual sample disease characteristic, X for the number of HD samples and Y for the number of control. c The sample distribution in the 3-dimensional eigenspace of gene profiling. Red = HD, blue = control Random forest identified 49 genes A random forest model is a machine-learning algorithm consisting of a collection of decision trees with voting, which, judging from the gene expression profiles in this study, accounts for the most positive examples (HD) and the fewest negative examples (control). A cross-validation strategy was used to train the random forest model and to evaluate its performance. The identified 30 decision trees and 49 non-redundant genes of the random forest are listed in Additional file 2: Table S2. These 49 genes served as part of the input for the enrichment and network analysis. One example of the machine-learned tree model is shown in Fig. 4a, which contains three genes, Kelch-like protein 42 (KLHDC5/KLHL42), POU domain class 4 transcription factor 2 (POU4F2), and forkhead box protein O4 (FOXO4). The performance of the random forest is shown as a ROC curve in Fig. 4b, with an accuracy of 90.45 ± 4.24%, a precision of 87.25 ± 4.72%, and a recall of 94.79 ± 6.10%. The separation of samples in the eigenspace of KLHDC5, POU4F2, and FOXO4 is shown in Fig. 4c. KLHDC5 is a component of the BTB-CUL3-RBX1 E3 ubiquitin-protein ligase complex, which mediates the ubiquitination of KATNA1 and regulates the microtubule dynamics in mitotic progression and cytokinesis [26]. POU4F2 is an RNA polymerase II specific transcription factor, which cooperates with TP53 to increase transcriptional activation of BAX promoter activity mediating neuronal cell apoptosis [27].
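To make the extraction of the forest's non-redundant gene set concrete, the following minimal sketch uses a scikit-learn random forest as a stand-in for the RapidMiner model described above; the parameter mapping is only approximate, and the input file is the assumed table from the earlier preprocessing sketch.

```python
# Sketch only: fit 30 shallow trees and collect the non-redundant set of genes
# actually used for splitting across the ensemble.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("ml_input.csv", index_col=0)   # assumed input table
X = data.drop(columns=["disease"])
y = data["disease"]

forest = RandomForestClassifier(n_estimators=30, criterion="entropy",
                                max_depth=4, min_samples_split=31,
                                min_samples_leaf=8, random_state=0)
forest.fit(X, y)

used = set()
for tree in forest.estimators_:
    feats = tree.tree_.feature                    # negative values mark leaf nodes
    used.update(X.columns[f] for f in feats if f >= 0)
print(len(used), "non-redundant attributes used by the forest:", sorted(used))
```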
FOXO4 is a transcription factor, which regulates the insulin signaling pathway, the hypoxia-induced response, the cell cycle, and proteasome activity [28]. The random forest model. a The receiver operating characteristic (ROC) curve showing the performance of the prediction power of the model. b An example of a decision tree in the modeled random forest. A decision tree plots the "If… then" splitting of samples for prediction. The nodes denote the attributes, while the arrows denote the split, which meets a certain criterion. The number in the result box denotes the prediction result of the model (1 = HD; 0 = control), and the bar denotes the actual sample disease characteristic, bar thickness for sample size and bar segment for the proportion of HD samples (red = HD, blue = control). c The sample distribution in the 3-dimensional eigenspace of gene profiling. Red = HD, blue = control Generalized linear model identified 53 genes A generalized linear model (GLM) is a machine-learning algorithm that maximizes the log-likelihood (the prediction power of whether a sample is HD) and determines predictors (the gene profiles) with non-zero coefficients, indicating a linear contribution of the gene profile to the prediction. A cross-validation strategy was used to train the GLM and to evaluate its performance. The coefficients of the input genes are listed in Additional file 3: Table S3. There are 53 genes with a non-zero coefficient. We further selected the more contributive genes by requiring the absolute value of the coefficient to be greater than 1. These 22 genes are also listed in Additional file 3: Table S3, and served as part of the input for the enrichment and network analysis. The top ten genes by coefficient are shown in Fig. 5a. The performance of the GLM is shown as a ROC curve in Fig. 5b, with an accuracy of 97.46 ± 3.26%, a precision of 95.96 ± 5.14%, and a recall of 99.38 ± 1.98%. The separation of samples in the eigenspace of gene profiling of the top 3 genes, OTP, EML2, and synaptic vesicle glycoprotein 2C (SV2C), is shown in Fig. 5c. SV2C regulates secretion in neural cells by selectively enhancing low-frequency neurotransmission [29]. The generalized linear model. a The receiver operating characteristic (ROC) curve showing the performance of the prediction power of the model. b The top ten genes with the highest absolute coefficients in the model. The bar color denotes the polarity of the coefficient (orange = negative; blue = positive). c The sample distribution in the 3-dimensional eigenspace of gene profiling. Red = HD, blue = control Gene enrichment and interaction network analysis The union of the identified 66 non-redundant genes from machine learning is summarized in Additional file 4: Table S4, and served as the input for the enrichment and network analysis. The significant enrichments in Gene Ontology, the KEGG disease/NHGRI GWAS catalog, and KEGG pathways are listed in Additional files 5, 6, 7: Tables S5, S6, and S7, respectively. As summarized in the lower part of Fig. 1, the enriched characteristics of the genes are transcription (16 genes), immune (12), neuron (11), signaling (11), and microtubule/actin binding. Figure 6 shows the interaction network, which indicates that HSPB1, ITPKB, CRYAB, ACTN2, FERMT3, NEFL, POU4F2, RIT2, and PLXNB3 are closely related to HTT and may serve as pivotal points exerting the consequences of the HTT poly-Q mutation in HD. CRYAB is a chaperone preventing aggregation of proteins under stress conditions [30].
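Returning to the regularized GLM described above, a rough open-source analogue is L1-penalized logistic regression with a cross-validated penalty path. The sketch below uses scikit-learn's LogisticRegressionCV as a stand-in for the H2O GLM (an assumption, not the original pipeline) and applies the same non-zero and |coefficient| > 1 filters; the input file is the assumed table from the earlier preprocessing sketch.

```python
# Sketch only: sparse binomial regression; keep predictors with non-zero
# coefficients, then those with |coefficient| > 1.
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("ml_input.csv", index_col=0)   # assumed input table
X = data.drop(columns=["disease"])
y = data["disease"]

X_std = StandardScaler().fit_transform(X)         # analogous to the "standardize" option
glm = LogisticRegressionCV(Cs=31, cv=10, penalty="l1", solver="liblinear",
                           scoring="accuracy", max_iter=5000)
glm.fit(X_std, y)

coefs = pd.Series(glm.coef_.ravel(), index=X.columns)
nonzero = coefs[coefs != 0].sort_values(key=abs, ascending=False)
strong = nonzero[nonzero.abs() > 1]
print(f"{len(nonzero)} attributes with non-zero coefficients; "
      f"{len(strong)} with |coefficient| > 1")
print(strong.head(10))
```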
ACTN2 is an F-actin cross-linking protein that participates in cell adhesion, the MAPK cascade, apoptosis, and the regulation of NMDA receptor activity [31]. FERMT3 is an integrin-binding protein that plays a part in cell adhesion and activation of the integrin-mediated signaling pathway [32]. NEFL is an intermediate filament protein that maintains the neuronal caliber essential for sensorimotor function and spatial orientation [33]. RIT2 is a small GDP-binding protein which acts as a molecular switch for intracellular signaling cascades in neurons and is regulated by POU4 transcription factors [34]. PLXNB3 is a SEMA receptor regulating cell adhesion, chemotaxis, and neuron projection [35]. Notably, HSPB1, ITPKB, and POU4F2 are also key attributes in the machine-learning models. Gene ontology and interaction network. The enriched gene ontology is represented by different colors of the nodes. The strength of the evidence of the interaction is represented by the darkness of the edges. Genes identified in this study are labeled with black font. Genes manually added are labeled with grey font. In this study, from the profiling of 157 HD and 157 controls, we identified 66 potential contributing genes of HD using machine learning models of the decision tree, rule induction, random forest, and generalized linear model. The identified genes enriched the gene ontology of transcriptional regulation, inflammatory response, neuron projection, and the cytoskeleton (Fig. 6). These pathways are connected by hubs of microtubule/actin binding, which may imply that mutant HTT mediates the HD pathological progress through these pathways via its interaction with the cytoskeleton or via its transcriptional regulation capacity. We will discuss the enriched biological functions and the relevant genes in HD pathogenesis. C20orf54 (SLC52A3) encodes a plasma membrane transporter mediating the uptake of vitamin B2/riboflavin, which is vital in biochemical oxidation–reduction reactions [36]. The mutation of SLC52A3 may cause degenerative disorders like Brown-Vialetto-Van-Laere syndrome (BVVL) [37] and amyotrophic lateral sclerosis (ALS) [38]. Although the role of oxidative damage in HD pathogenesis has been discussed for decades [39], the potential role of SLC52A3 in riboflavin-related oxidative damage in HD has not been noticed yet. The other two genes with detoxifying ability identified in this study are ALDH1A1 and MT1H, which detoxify aldehydes [19] and copper ions [40], respectively. Whether aldehyde or copper ion detoxification participates in HD pathogenesis requires further study. One of the hallmark pathological features of HD is the intracellular aggregates of mutant HTT, termed inclusion bodies (IBs). The insufficient clearance of toxic forms of mutant HTT is postulated as one hypothesis of HD pathogenesis [41]. Three genes involved in protein degradation were identified in this study: CRYAB, HSPB1, and KLHDC5. Expression of CRYAB influences autophagy and protein aggregation [42]. HSPB1 mutation may impair autophagy and cause neuropathy [43]. KLHDC5 is an adapter of the BTB-CUL3-RBX1 E3 ubiquitin-protein ligase and regulates the ubiquitin–proteasome system [44]. Currently, there is a lack of knowledge of the roles played by CRYAB, HSPB1, and KLHDC5 in HD pathogenesis. Although it is unclear whether neuroinflammation has an active influence or is a reactive process during the HD pathogenesis, both innate and adaptive immune systems may play important roles in HD [45].
The former includes activation of microglia, increased proinflammatory cytokines, impaired translocation of macrophages, and complement factors. The latter includes T-cell priming by dendritic cells (DCs). In this study, the identified innate immunity genes include C4B, DARC, RAB20, SBNO2, SCAMP1, SERPINA3, and S100A8, while the adaptive immunity genes include PNP, TCIRG1, and TMEM176A. More specifically, C4B is a complement factor [46]; DARC is a chemokine receptor [47]; RAB20 is involved in endocytosis [48]; SBNO2 regulates the transcription of NF-κB in macrophages [49]; SCAMP1 regulates neutrophil degranulation [50]; SERPINA3 inhibits neutrophil cathepsin G and mast cell chymase [51]. S100A8 induces neutrophil chemotaxis and therefore participates in both innate and adaptive immune systems [52], while PNP regulates T cell proliferation [53]; TCIRG1 isoform b is an inhibitory receptor on T cells [54]; TMEM176A regulates dendritic cell differentiation [55]. Since the discovery of the involvement of HTT in the transcription regulation of P53 and CREB [56], dysregulation of transcription by mHTT has become a popular hypothesis of HD pathogenesis [9]. In this study, we identified several transcription regulatory genes, including CIDEA, CIRBP, FOXO4, GLI1, KLF10, NUPR1, OTP, POU4F2, PRKAR1A, RIT2, SFRS5, TBX15, and TEAD2. Notably, OTP, RIT2, and POU4F2 also regulate neurogenesis (Fig. 6), while CIRBP, FOXO4, GLI1, and NUPR1 regulate gene expression under stress circumstances [57,58,59,60]. Furthermore, CIDEA, KLF10, PRKAR1A, TBX15, and TEAD2 regulate gene expression related to apoptosis control [61,62,63,64,65]. Whether these genes are driving forces or merely passengers in HD pathogenesis requires further investigation. Wild-type HTT is a scaffolding protein interacting with β-tubulin and microtubules [66]. It also interacts with the dynactin complex and regulates intracellular trafficking processes [67]. In this study, we identified several microtubule/actin binding genes, including ABBA-1, ACTN2, CNN2, FAM110C, KIAA1949, and SEMA3E. Likewise, dysregulation of these genes may disturb intracellular trafficking processes along with mHTT. Wild-type HTT also plays a critical role at the synapse. It is associated with the synaptic vesicles at the pre-synapse [7] and is associated with the scaffolding protein PSD95 at the postsynaptic density [68]. Moreover, HTT is required during the formation of cortical and striatal excitatory synapses [69]. However, the role of HTT in the neuron is still obscure. In this study, we identified several neuronal genes, including GOT1, HTR2C, PLXNB3, and SV2C. GOT1 synthesizes and regulates the quantity of glutamate [70], which is a key neurotransmitter. In addition, HTR2C is a serotonin receptor mediating excitatory neurotransmission [71], while PLXNB3 is a SEMA5A receptor mediating axon guidance [72]. Moreover, SV2C is a synaptic vesicle glycoprotein mediating low-frequency neurotransmission [29]. The dysregulation of these genes may provoke HD symptoms. We also identified three genes in the cognitive, sensory, and perceptual systems: DOPEY2, EML2, and NEFL. The deficits in these domains are the hallmark symptoms in HD and may serve as diagnostic cues [73,74,75]. The overexpression of DOPEY2 may contribute to mental retardation [76], while EML2 has a role in visual perception [77]. Moreover, mutations in NEFL cause inherited motor and sensory neuropathy [78].
Although thousands of papers report HD and sensorimotor dysfunction, the potential roles of these genes in HD pathological symptoms, especially sensorimotor dysfunction, have not been noticed. In this study, we revealed that HTT mutation might exert pathological interference on NEFL by two independent routes, as shown in Fig. 6. One route is by dysregulation of transcription through POU4F2. The other route is by dysregulation of the cytoskeleton through ACTN2 and CRYAB. Finally, we compared our results with the existing ML-based method [14] for identifying HD-contributing genes and checked whether these 66 contributing genes are included in the known HD gene set. Out of the 66 genes, 13 genes are mutually identified in both ML studies. Furthermore, 21 of the 66 genes have been identified in previous HD studies. This information is provided in Additional file 8: Table S8. Machine learning using the decision tree, rule induction, random forest, and generalized linear model identified 66 potential contributing genes of HD from the expression profiles of postmortem prefrontal cortex samples of 157 HD and 157 controls. These genes participate in oxidation–reduction reactions, protein degradation, immunity, transcription, neural transduction, and perception. The mHTT may interfere with both the expression and transport of these genes to promote the HD pathogenesis. The dataset supporting this article's conclusions is available in the NCBI GEO repository, with accession number GSE33000 in https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE33000. The data supporting the conclusions of this article are included within the article and its additional files. The machine learning platform RapidMiner Studio is available at https://rapidminer.com/. poly-Q: Polyglutamine; mHTT: Mutant HTT; GLM: Generalized linear model; TP: True positive; TN: True negative; FN: False negative Rosenblatt A. Neuropsychiatry of Huntington's disease. Dialogues Clin Neurosci. 2007;9(2):191. Yohrling G, Raimundo K, Crowell V, Lovecky D, Vetter L, Seeberger L: Prevalence of Huntington's disease in the US (954). In: AAN Enterprises; 2020. Ohlmeier C, Saum K-U, Galetzka W, Beier D, Gothe H. Epidemiology and health care utilization of patients suffering from Huntington's disease in Germany: real world evidence based on German claims data. BMC Neurol. 2019;19(1):318. Reiner A, Albin RL, Anderson KD, D'Amato CJ, Penney JB, Young AB. Differential loss of striatal projection neurons in Huntington disease. Proc Natl Acad Sci. 1988;85(15):5733–7. Rosas H, Liu A, Hersch S, Glessner M, Ferrante R, Salat D, van Der Kouwe A, Jenkins B, Dale A, Fischl B. Regional and progressive thinning of the cortical ribbon in Huntington's disease. Neurology. 2002;58(5):695–701. MacDonald ME, Ambrose CM, Duyao MP, Myers RH, Lin C, Srinidhi L, Barnes G, Taylor SA, James M, Groot N. A novel gene containing a trinucleotide repeat that is expanded and unstable on Huntington's disease chromosomes. Cell. 1993;72(6):971–83. DiFiglia M, Sapp E, Chase K, Schwarz C, Meloni A, Young C, Martin E, Vonsattel J-P, Carraway R, Reeves SA. Huntingtin is a cytoplasmic protein associated with vesicles in human and rat brain neurons. Neuron. 1995;14(5):1075–81. Jimenez-Sanchez M, Licitra F, Underwood BR, Rubinsztein DC. Huntington's disease: mechanisms of pathogenesis and therapeutic strategies. Cold Spring Harbor Perspect Med. 2017;7(7):a024240. Nissley DA, O'Brien EP. Altered co-translational processing plays a role in Huntington's pathogenesis—a hypothesis. Front Mol Neurosci. 2016;9:54. Marsland S.
Funding: This work was supported by grants from the Ministry of Science and Technology in Taiwan (MOST108-2320-B-039-031-MY3, MOST 109-2314-B-039-030) and grants from China Medical University and Hospital (CMU109-MF-85, CMU108-MF-68, CMU107-S-08, DMR-109-150, DMR-106-119). The funders had no role in this study.
Jack Cheng and Hsin-Ping Liu have contributed equally to this work.
Author affiliations:
Graduate Institute of Integrated Medicine, College of Chinese Medicine, China Medical University, Taichung, 40402, Taiwan: Jack Cheng & Wei-Yong Lin
Department of Medical Research, China Medical University Hospital, Taichung, 40447, Taiwan: Jack Cheng, Wei-Yong Lin & Fuu-Jen Tsai
Graduate Institute of Acupuncture Science, College of Chinese Medicine, China Medical University, Taichung, 40402, Taiwan: Hsin-Ping Liu
Brain Diseases Research Center, China Medical University, Taichung, 40402, Taiwan: Wei-Yong Lin
School of Chinese Medicine, China Medical University, Taichung, 40402, Taiwan: Fuu-Jen Tsai
Department of Biotechnology, Asia University, Taichung, 41354, Taiwan
Children's Medical Center, China Medical University Hospital, Taichung, 40447, Taiwan
Contributions: WYL and FJT initiated and supervised this study and substantively revised the manuscript. JC and HPL contributed to the acquisition, analysis, and interpretation of data. All authors discussed and drafted the manuscript. All authors read and approved the final manuscript.
Correspondence to Wei-Yong Lin or Fuu-Jen Tsai.
Competing interests: The authors have no conflict of interest.
Additional file 1: Table S1. The input file to the RapidMiner program of this study. Columns are attributes, while rows are samples. The column "HD" is a binomial attribute, i.e., 1 or 0, describing whether the sample is diagnosed with HD or not, respectively (this layout is sketched in code below).
Further additional files describe: the identified 30 decision trees and 49 non-redundant genes of the random forest; the coefficients of the generalized linear model; the list of the identified genes of this study, where a tick sign or blank denotes whether each gene was identified by the corresponding algorithm or not; the enriched Gene Ontology terms of the identified genes; the enriched KEGG DISEASE, NHGRI GWAS Catalog, and OMIM entries of the identified genes; the enriched KEGG PATHWAY, Reactome, and PANTHER entries of the identified genes; and the comparison with the existing ML-based HD study, including whether these 66 genes are included in the known HD studies.
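The additional files describe a simple tabular design: samples as rows, gene-expression attributes as columns, and a binary "HD" label, which the study screens with tree ensembles and a generalized linear model. The minimal sketch below illustrates that kind of screen; it uses Python with pandas and scikit-learn rather than the RapidMiner workflow used in the study, and the file name, column names, and parameter values are hypothetical placeholders, not taken from the article.

# Hypothetical sketch of the tabular design described above: rows are samples,
# columns are expression attributes, and "HD" is a 0/1 diagnosis label.
# The file name and parameters are illustrative, not taken from the study.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("hd_expression_table.csv")   # hypothetical input file
X = df.drop(columns=["HD"])                   # expression attributes
y = df["HD"]                                  # 1 = diagnosed with HD, 0 = not

# Tree ensemble: rank attributes by importance, standing in for the
# random-forest screening described in the additional files.
rf = RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)
top_genes = X.columns[rf.feature_importances_.argsort()[::-1][:50]]

# Generalized linear model (here logistic regression) on the selected
# attributes, analogous to reporting GLM coefficients for identified genes.
glm = LogisticRegression(max_iter=1000).fit(X[top_genes], y)
print(pd.Series(glm.coef_[0], index=top_genes).sort_values())
print(cross_val_score(rf, X, y, cv=5).mean())  # rough predictive sanity check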
Cite this article: Cheng, J., Liu, HP., Lin, WY. et al. Identification of contributing genes of Huntington's disease by machine learning. BMC Med Genomics 13, 176 (2020). https://doi.org/10.1186/s12920-020-00822-w
Enrichment analysis · Bioinformatic and algorithmical studies
CommonCrawl
Whitney's extension problem for multivariate $C^{1,\omega }$-functions
Authors: Yuri Brudnyi and Pavel Shvartsman
Journal: Trans. Amer. Math. Soc. 353 (2001), 2487-2512 (ISSN 1088-6850 online, 0002-9947 print)
MSC (1991): Primary 46E35
DOI: https://doi.org/10.1090/S0002-9947-01-02756-8
Published electronically: February 7, 2001
MathSciNet review: 1814079
Abstract. We prove that the trace of the space $C^{1,\omega }({\mathbb R}^n)$ to an arbitrary closed subset $X\subset {\mathbb R}^n$ is characterized by the following "finiteness" property. A function $f:X\rightarrow {\mathbb R}$ belongs to the trace space if and only if the restriction $f|_Y$ to an arbitrary subset $Y\subset X$ consisting of at most $3\cdot 2^{n-1}$ points can be extended to a function $f_Y\in C^{1,\omega }({\mathbb R}^n)$ such that \[ \sup \{\|f_Y\|_{C^{1,\omega }}:~Y\subset X, ~\operatorname {card} Y\le 3\cdot 2^{n-1}\}<\infty . \] The constant $3\cdot 2^{n-1}$ is sharp. The proof is based on a Lipschitz selection result which is interesting in its own right.
Yuri Brudnyi
Affiliation: Department of Mathematics, Technion-Israel Institute of Technology, 32000 Haifa, Israel
Email: [email protected]
Pavel Shvartsman
Email: [email protected]
Keywords: Extension of smooth functions, Whitney's extension problem, finiteness property, Lipschitz selection
Received by editor(s): June 26, 2000
Additional Notes: The research was supported by Grant No. 95-00225 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel and by Technion V. P. R. Fund - M. and M. L. Bank Mathematics Research Fund. The second named author was also supported by the Center for Absorption in Science, Israel Ministry of Immigrant Absorption.
Dedicated to the memory of Evsey Dyn'kin
Article copyright: © Copyright 2001 American Mathematical Society
CommonCrawl